@HPC Podcast Archives - OrionX.net - @HPCpodcast-104: Silicon Photonics, w Keren Bergman (2)
Episode Date: October 13, 2025. We had the opportunity to catch up again with Professor Keren Bergman to discuss the latest developments in all things optical: co-packaged photonics/opto-electronics/silicon photonics, photonic integrated circuits (PICs), and optical computing. [audio mp3="https://orionx.net/wp-content/uploads/2025/10/104@HPCpodcast_SP_Keren-Bergman_Silicon-Photonics_20251012.mp3"][/audio]
Transcript
For over two decades, we've partnered with the world's leading processor makers to solve the toughest thermal challenges.
From the largest AI clusters to the top supercomputers, CoolIT cold plates set the standard for liquid cooling, combining unmatched reliability and performance.
See them in action at OCP Global 2025 and learn more at coolitsystems.com.
Especially this year, we're seeing the real leaders in HPC, the real leaders in AI systems and hardware.
Almost every single one of them has been putting stakes in the ground and saying, okay, we're going to do this now.
There's a really good example of photonics in the scale-up, which is the Huawei system.
Early predictions, I would say we're talking about at least two orders of magnitude in performance.
From OrionX in association with InsideHPC, this is the @HPCpodcast.
Join Shaheen Khan and Doug Black as they discuss supercomputing technologies and the applications,
markets, and policies that shape them. Thank you for being with us.
Hi, everyone. Welcome to the @HPCpodcast. I'm Doug Black of InsideHPC, and with me is my co-host,
Shaheen Khan of OrionX.net, and we're very happy to have with us today a special guest,
Keren Bergman, a noted expert in the field of optical I/O interconnect technology.
Keren is the Charles Batchelor Professor of Electrical Engineering at Columbia, where she also serves
as the faculty director of the Columbia Nano Initiative. She also was a co-founder, starting three
years ago, of Xscape Photonics, which is in the optical I/O arena. And she appears regularly at
technology events, including the Supercomputing Conference, to discuss optical I/O. So, Keren,
welcome back. Thank you so much. It was great fun to have the previous chat, and I'm really
looking forward to today. Great. Such a pleasure to have you, Keren. Our last episode was one of
our more popular episodes. I know there's a lot of interest in the market and great time for us to
catch up. We were all reminded that it was two and a half years ago, just like that.
It's amazing how quickly things move forward. Even two and a half years ago, a lot has
accelerated since then. Yeah, you came on with us in April of 2023, which generated a lot of
interest, and there's a lot of ongoing interest in optical I/O as a potentially new foundational
technology that will allow HPC AI class servers to run faster and cooler. So why don't we just
start with a very big picture perspective, Keren: what's changed with the technology over the past
two plus years? I would say, you know, the most dominant changes have been really the maturation
of the ecosystem. So just two, two and a half years ago, obviously, you know, already a lot of
interest in this technology with silicon photonics, primarily being the forefront technology
for photonic interconnects and the possibility of using them in the context of AI data center
systems, HPC systems. Whereas optics, of course, has been used for decades in long-haul
fiber optic communications and even in data centers, you know, in the longer reach connections.
So that's not what we're talking about here.
This is really getting photonics into the data path,
into what I like to call the data path.
So over the last couple of years,
there's been a continuing, I would say, explosion of both, you know,
small and large companies that have made strategic moves
and really put stakes in the ground for bringing these to the commercial world.
The most, I would say, in my view,
the most pronounced announcement came from
NVIDIA back in March of this year: they essentially made the announcement that they are
going to be using co-packaged optics in their systems, initially in their switches.
Okay. And just to clarify exactly what we're talking about, I sort of needed this clarification.
We're not talking about moving data within the chip, but it's chip to chip within the server,
within motherboards, et cetera. Is that correct?
Exactly. Within the chip, over very small distances, you know, let's say sort of less than a millimeter or a few millimeters, copper is fantastic. Copper can deliver very high density, very high bandwidth density, I mean. Copper can also deliver very low energy consumption per bit, in the single to tens of femtojoules per bit. And it's cost-effective. It can be used in 3D integration, and there's a huge manufacturing ecosystem around that.
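To put those femtojoule figures in perspective, here is a quick back-of-envelope sketch. The ~10 fJ/bit copper figure comes from the conversation; the ~15 pJ/bit pluggable-module figure is an assumed ballpark for contrast, not a vendor spec:

```python
# Back-of-envelope: link power = energy-per-bit x data rate. The ~10 fJ/bit
# copper figure is from the conversation; the ~15 pJ/bit pluggable-optics
# figure below is an assumed ballpark for contrast, not a vendor spec.
def link_power_watts(energy_per_bit_fj: float, rate_tbps: float) -> float:
    """Power for a link moving rate_tbps terabits/s at energy_per_bit_fj fJ/bit."""
    return energy_per_bit_fj * 1e-15 * rate_tbps * 1e12

print(link_power_watts(10, 1.0))      # short copper link at 1 Tb/s: ~0.01 W
print(link_power_watts(15_000, 1.0))  # pluggable optics at 1 Tb/s: ~15 W
```

Three orders of magnitude per bit is why energy per bit, not raw bandwidth alone, drives the co-packaged-optics argument.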
Really, when we talk about co-packaged optics, or sometimes I refer to it as embedded photonics,
it's really about bringing the photonic interface to the chip and then using the advantages of the optical domain,
which basically are that you can send a lot of data over any distance you want.
Whereas in electronics, of course, the longer the distance over which we send data,
you know, we experience a lot of loss and we have to amplify the signal and spend more energy
as a function of the frequency. In the optical domain, the very simple fact is that the losses of the
optical medium, whether it's an optical waveguide or a fiber optic cable, are extremely
low. And so it's the same signal whether we send it over a few centimeters, a few
meters, or maybe even up to a few hundred meters, across the entire HPC system, across the
entire data center. So that's really one of the key advantages. And so when we're talking about
bringing data movement in the optical domain to these HPC AI systems, the technology that we're
referring to is the interface. Let's put the photonic transceivers somewhere on the chip, whether it's
that, you know, in the form of a photonic I/O
chiplet or some other co-packaged design formulations.
You know, all those things are obviously up for discussions.
And there will be different photonics in different places in the system as well.
But that's essentially what we're talking about.
It's transitioning to the optical domain at that distance point.
And so imagine having photonics, you know, within the blade, within the rack.
And also, of course, as we already have today, between racks and across the system.
And Keren, just quickly, I think the first thing on everyone's mind listening to our discussion is, you know,
this notion that optical I/O is potentially such an exciting technology, but it always
seems to have a receding horizon. We're always two or three years away. What's your take,
and we can get into the particulars, but what's your take on the commercial readiness for this technology?
It's a great question. And you're right. I've been obviously, you know, working in this field for decades. And we are always excited when we do research and we like to think about the insertion of those technologies in commercial systems. And you're correct that in the past, you know, you could say it was the technology of the future and always will be. But this is different. This is really different. And, you know, being still cautious: there are a few reasons that photonics hasn't seen the full-
blown, you know, implementation. The number one is cost, as it always is, right? Cost is number
one. And, you know, for example, in the latest NVIDIA system, the Blackwell, it's still
made out of copper, right? The interconnects are still copper. And primarily the reason is cost,
not performance. Photonics, it's already known that photonics can outperform the equivalent
electrical interconnects, both in the bandwidth density and in offering lower energy
consumption. But for various market reasons, it's the cost. So why the cost? The cost is also
obviously a factor of the ecosystem, the commercialization, the manufacturing ecosystem.
And that's, if I go back to the first comment I made, you know, what's been changing in the last
two and a half years or so since our last conversation, it's really the maturation, the further
maturation of this manufacturing ecosystem. Are we all the way there yet? Of course not. But
because what we're seeing, and especially this year, we're seeing the real leaders in HPC,
the real leaders in AI systems and hardware, almost every single one of them has been putting
stakes in the ground and saying, okay, we're going to do this now. And that means that they have
the suppliers, the fabrication, the manufacturers, the packaging, you know, with real
funding, with real orders, with real markets behind that. So I can also,
I'll almost anticipate the next question is like, okay, when, right? When do you think we'll see it? And I would say that at this point, we're really looking at 2028 as being the year. That's a little bit my opinion and various, you know, obviously various talks and conversations and the status of putting together everything that I know of what's out there. I think this is about 2028 or so will be the first real deployment of co-packaged photonics in systems.
We can come back in three years and check on that.
I'm hoping that it will happen even sooner, and of course we'll start to see something sooner.
But I think that will, in my opinion, I think that will be when we can actually say, look, here it is.
Karen, has the science been settled for the products that are being developed, or are they still being pursued in various capacities?
Is there like one direction that the industry and the manufacturing world is pursuing, or are new ideas coming?
in from various directions?
There's always new ideas, of course, but I would say that they are really about the next
generations. In terms of how things will be done, there's still quite a bit, there are multiple
approaches that are out there, and they will have to be shaken out in the marketplace with
vendors. And, you know, there's, of course, lots of questions about scalability, reliability,
you know, all the usual stuff that needs to be proven out for mass scale, manufacturing and
adoption. So those debates are very much ongoing. There's, you know, various technologies that are
being, quote unquote, baked off at this point. But I think that if you step away and look at the
picture, definitely we're going to be using some form of wavelength division multiplexing.
How many channels? Probably a small number of channels initially, and then a
growing number of channels as we continue to scale into the future. The co-packaging, the assembly of the
photonics together with the, whether it's the compute or the switch or the memory, those
approaches of exactly how things are going to be packaged and assembled. There's a multitude of
technologies and approaches that are being pursued right now by various players. And we will
see; maybe many of them, several of them, will succeed, because it's not one thing. I just
want to make that clear. Co-packaged photonics in systems is not one thing.
There will be photonics that is co-packaged, perhaps, with the switch,
with the switch inside of the interconnect network.
There will be other photonics that's used for longer,
potentially for longer distances in the architecture and so forth.
So I think I expect that there will be different solutions for different points
in the data center for sure and within the system as well.
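The wavelength-division-multiplexing scaling described above can be sketched numerically. The 100 Gb/s per-wavelength line rate and the channel counts below are hypothetical illustrations, not product figures:

```python
# Sketch of WDM bandwidth scaling: aggregate bandwidth grows with the number
# of wavelengths. The per-wavelength line rate (100 Gb/s) and channel counts
# are hypothetical illustrations, not product figures.
def aggregate_tbps(num_wavelengths: int, per_lambda_gbps: float) -> float:
    """Aggregate link bandwidth in Tb/s: channels x per-channel rate."""
    return num_wavelengths * per_lambda_gbps / 1000.0

print(aggregate_tbps(8, 100))   # a small initial channel count: 0.8 Tb/s
print(aggregate_tbps(64, 100))  # scaling up the channel count: 6.4 Tb/s
```

This is the "small number of channels initially, then a growing number" path: bandwidth density grows by adding colors of light rather than only pushing each lane faster.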
What is your perspective on the software protocols that run on these interconnects?
And is that any different? Because the fundamental technology is different. It will have to be
the same. I think for a technology to succeed: there's already a pretty tall barrier to
entry for photonics, as we discussed earlier, for the cost, right? If on top of that, I ask the
system providers to revamp their software, I think that puts us, you know, many steps backwards.
I don't think that will be a successful path, maybe sometime in the future, but at this point, it's really about inserting this technology in the most seamless way possible.
One of the big debates around co-packaged, co-integrated photonics is replaceability.
Today, in the systems, we have, of course, the well-known pluggable photonics, right?
Very easy, very useful, vendor interoperability.
Everybody can design to it.
If something breaks, you replace it, right?
It's as simple as that; pluggable, as it's called.
And many applications, of course, longer reach, shorter reach, you know, all of the above.
And those things are not going to go away.
Of course, they're going to be part of the future systems as well.
But now we're talking about putting optics more inside, more inside the real data path of the applications
and really gaining the benefits of the integration from the point of view of the bandwidth
density. The plugables don't even come close to the bandwidth density that we get from
co-packaged optics. It's like maybe two orders of magnitude difference. That's how big it is. And of
course, the low energy consumption of the co-integrated optics. But it's co-packaged, co-integrated.
So it's a module. So if some part of it breaks, the laser breaks or this fails, then what do we
do? Do we need to replace the whole module? Do we, you know, what's the strategy? So those are
some of the questions. I think they're all solvable, 100% solvable issues. But those are really
where the technology questions are. The software is, if someone has an approach that requires
a redo of the software stack, it will not succeed in the near term for sure. Well, you know,
my motivation to ask was what we saw with rotating disks and solid state disks. And
initially it was all like slide in and you can't tell the difference. But then once it
gelled, people said, but this is not rotating; this has different wear and tear attributes. I can
optimize it this way or that way. And then that over time sort of permeated through the software
stack because the awareness that the medium was different had an impact. So for something like
an interconnect, I'm not sure the same analogy holds, but that was any way the motivation.
Is there any difference that might provide opportunities for optimization that will prove
irresistible at some point?
100%. Absolutely. And that will come when, so the first step is about the interconnects.
So, for example, putting a photonic I/O on the socket that might include, you know,
GPUs, memory, and so forth. Or as NVIDIA announced, a photonic I/O on the switch,
on a switch module.
But the next step, and this is still more in the research domain,
is to also include photonic switching in the system.
Now, I'm not talking about, of course, you know,
Google, for example, has optical circuit switches in their systems
as a way of doing reliability as well as topology engineering,
things like that, more at kind of the macro level.
But I'm talking about photonic switches that would be, again,
more in the data path and could enable, imagine that we have this very large HPC AI system,
you know, with, I don't know, 10,000, even 100,000 endpoints. If we keep the data in the optical
domain, we can use optical switches to reach much further, sort of increase the diameter of the
compute that's available to us. We actually have a paper on this at SC this year,
so I'm very excited about that, using optical switches in these systems. When we
get more mature with the interconnect, and we're starting to look at photonic switches as well,
absolutely, we will need to think about the full hardware software stack.
And that's the subject of research that I'm doing right now as well as, of course,
other people are pursuing.
But it will certainly come to commercial at some point in the future.
So, Keren, I was really impressed by your statement that you said, right now it's not performance,
it's price.
And I assume part of that means manufacturing price,
production at scale. These are issues that really need to be ironed out. But are you saying,
for example, NVIDIA's next GPU could use this technology as things stand now?
Yes, absolutely. Absolutely. I mean, where exactly you put the optical interface is
a design question. For example, the NVIDIA larger GPU, I guess it's the B200. Maybe they're
coming up with a new one, of course. You know, the connectivity, you know, inside
of that, I believe, is still going to be electronic, because it's very advantageous to have
it electronic. But at the socket level, where you have the GPUs and the HBMs, typically,
to include an optical I/O, photonic I/O, what that will enable you to do is really scale
out that. So, as we know in these data center systems, right, we talk about scale out and
scale up. So scale out is when you string together, you know, these thousands or tens of thousands
of servers. But scale up is really, you know, your closely coupled, interconnected,
high-performance compute. And so, for example, we go back to the NVIDIA system because,
obviously, you know, they're the dominant player in the market. So like the NVL72, right, that's a scale
up. That's an example of what I would consider a scale up.
It's all copper at this point. If you went to photonics, you take the same, even the same
GPUs, and you go to photonics, it's mind-boggling. I mean, we're talking, first of all,
you can expand the domain. You're not limited to 72. You can imagine potentially thousands.
And with equivalent or better, much better, bandwidth densities. At the same time, you know,
very importantly, it's about keeping the power envelope from growing.
That's one of the key limitations today to computing.
It's about power and energy consumption.
There's a really good example of photonics in the scale-up,
which is the Huawei system.
If you're familiar with it, they use inferior GPUs in their systems
because of various export controls and things like that.
So the GPUs that they use in their system,
at least in the one that was published,
are GPUs that have about three times
less performance than the NVIDIA GPUs.
But they just took conventional, not this most advanced photonics that I'm talking about
with bandwidth densities and all that performance, but they just took conventional
photonics, like pluggable, linear pluggable optics, and connected the system in the
scale-up domain, and were able to create a system that's approximately twice as
computationally powerful as the NVIDIA NVL72.
So they start with 3X less performance compute, right, compute processor.
But just using photonics, it just expands the physical domain of your compute capability
because you just can connect more compute to it.
You're not worried about the losses and the distances and all of that.
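A rough sanity check of that arithmetic, using only the round numbers from the conversation (GPUs about 3x less performant, system about 2x the NVL72); all values are illustrative, not measured benchmarks:

```python
# Rough arithmetic behind the Huawei scale-up example, using only the round
# numbers from the conversation (~3x weaker GPUs, ~2x the NVL72 at the system
# level). Illustrative, not measured benchmarks.
nvl72_gpus = 72
relative_gpu_perf = 1 / 3     # each Huawei GPU vs. an NVIDIA GPU
target_system_ratio = 2.0     # claimed system-level advantage

# GPUs needed once the optical scale-up domain removes the copper size limit:
needed = target_system_ratio * nvl72_gpus / relative_gpu_perf
print(round(needed))  # 432, i.e. hundreds of GPUs in one scale-up domain
```

The point is that optics changes the size of the scale-up domain itself: you make up a per-GPU deficit simply by connecting several times more GPUs at full bandwidth.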
So just imagine what you could do if you take
the most advanced GPUs that are available in the world and combine them with this new generation
of embedded photonics, co-packaged photonics. Early predictions, I would say we're talking about
at least two orders of magnitude in performance. That's the reason that
we're so excited about this. For more than two decades, CoolIT has partnered with the world's leading
processor manufacturers to solve the thermal challenges of the most powerful chips on the planet.
Our cold plates deliver the highest reliability and performance, enabling hyperscale and neocloud
clusters to run the most demanding AI workloads. We design and manufacture liquid cooling
products at scale that power the world's top supercomputers and AI infrastructure. See CoolIT technology
in action at OCP Global 2025 and learn more
at coolitsystems.com.
Are you saying, well, it makes sense, I think,
that if you improve the interconnect efficiency, bandwidth, latency,
coherency, that it allows you to use lesser GPUs,
but more of them to get to the same place,
because the efficiency doesn't drop as fast, et cetera, et cetera.
Is there such a thread there?
It allows you to use as many GPUs as you want,
whatever you have, but
much, much more efficiently, and also to be able to scale up the numbers that you want to use,
right? So sort of the typical architecture today, due to bandwidth limitations, is hierarchical,
right? If I look inside the socket, and what I mean by the socket is this
typical package that we have with the compute, GPUs, HBMs, etc., inside there, the bandwidths are
fantastic, on the order of 10 terabytes per second, communication bandwidth, all electronic.
As soon as I get outside of that socket and I connect some of these together via a switch
fabric, an electronic switch fabric, the bandwidth takes about a 10x drop. I go one more and I build out
a system, so for the scale out, and I take another 10x hit. So what the
photonics will do is bring it into that 10 terabyte regime, right? Start there. And now I can go anywhere I
want and still have that incredibly high bandwidth that today I have only inside the socket.
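The hierarchy described here can be sketched as a toy model; the ~10 TB/s on-package starting point and the roughly 10x hit per tier are the round numbers from the conversation, not measured figures:

```python
# The bandwidth hierarchy as a toy model. The ~10 TB/s on-package starting
# point and the roughly 10x drop per tier are the round numbers from the
# conversation, not measured figures.
tiers = ["inside the socket", "scale-up (electronic switch fabric)", "scale-out (system)"]
bw = 10.0  # TB/s at the on-package starting point
for tier in tiers:
    print(f"{tier}: ~{bw:g} TB/s")
    bw /= 10  # each electronic boundary costs about an order of magnitude
# Co-packaged photonics aims to hold the on-package figure across every tier.
```

Flattening those two 10x cliffs is what "go anywhere and keep socket-level bandwidth" means in practice.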
So it is eliminating distance, essentially. That's right. It essentially creates a system that
is, we can say, distance independent. Of course, I don't want to... very important: we still have
time of flight. We have latency.
We still have to deal with the speed of light.
We can't get rid of that.
There's going to be some limitations on how far we can scale things.
But for AI systems, the sensitivity to this latency is not as severe.
And so the potential gains for these systems and performance are just incredible.
And especially, you know, now that the new measure is going to be, forget about flops or anything like that,
it's going to be in units of power.
You mentioned femtojoules per bit.
You know, I mean, I even saw a recent article
with OpenAI, right?
They just have some contracts with AMD and NVIDIA, et cetera.
And the contracts are in gigawatts.
That's right.
Right?
So this is the number that matters now.
And so what optics will do is imagine that we can give you 100x performance inside of that
same gigawatts.
Right.
So this is a good segue into a question I had about materials.
What is the state of science research production when it comes to novel materials that are optimized for speed, energy loss, etc?
Or is it all the same because it's all fiber optics?
There have been a lot of advances, and a lot more still needs to be done, for sure,
on the material side.
So the big thing, right, the kind of the dominant thing has been the advent of silicon photonics.
So now we can basically have, we can design, we can fabricate fairly complex photonic circuits,
photonic integrated circuits, you know, PICs, as we call them, in conventional CMOS fabrication.
So my lab, as well as many others, you know, we fabricate in 300mm foundries.
There are specific runs, you know, optimized for photonics, but they're basically using conventional tools that are used in CMOS fabrication.
And there are several commercial ones: TowerJazz, GlobalFoundries, and now TSMC, of course.
So that's the main thing.
Now, the bad news is that, okay, so, you know, we can't talk about
photonics without talking about lasers.
Sometimes people forget that you need the laser.
Yes.
Photonics is great, but we need the photons to make it work.
And so with all these wonderful things that we can do in silicon, unfortunately,
silicon is not a great material for lasers.
It's an indirect band gap material, of course, and we cannot, at least right now, we cannot make lasers in silicon.
So the lasers are typically III-V materials, and so they somehow need to be integrated;
that's why the packaging problem in photonics is, and continues to be, a big issue.
How do you bring the laser in?
How do you integrate the laser?
I mentioned that we definitely will need to go to wavelength division multiplexing to get to these kinds of bandwidth densities
that we're talking about. How do you bring in multiple colors of light into the chip?
I can quickly mention that the company of which I'm a co-founder, Xscape Photonics,
you know, one of our key novel technologies is that we're able to make things called comb lasers.
Using a single laser, we're able to generate many colors all at the same time.
So that's one of our key technologies. But in general, the issue of the laser
is a major issue for getting this technology into the systems. And that, of course,
translates into how do you combine? Do you have the III-V material outside of the
silicon? Do you combine it with the silicon in some kind of a packaging platform? There are
other companies and researchers that are working on growing III-V on top of silicon and other
techniques. So that's a big one. That's a big topic and a big issue.
Of course, there are other materials as well.
For example, one of the issues that we're working on in research is how to make the photonic circuit less sensitive or even completely athermal and not sensitive at all to thermal variations.
Any photonics is naturally sensitive to temperature, because if you change the temperature, one way or the other, the index of refraction of the material is going to change,
and therefore your photonic circuit is going to change in some way,
especially when you're doing dense wavelength division multiplexing
and using things like resonators, you know;
then it changes pretty dramatically.
And so how do you navigate that?
And are there materials that are less sensitive to temperature?
Materials that can compensate for that.
So again, that opens up...
The future of research is bright, because there are many important problems to solve.
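As a rough illustration of that temperature sensitivity, here is the standard thermo-optic estimate for a silicon ring resonator's resonance drift; the material constants are typical textbook values, assumed here purely for illustration:

```python
# Standard thermo-optic estimate for a silicon ring resonator's resonance
# drift. The material constants are typical textbook values, assumed here
# purely for illustration.
def resonance_shift_nm(delta_t_k: float,
                       wavelength_nm: float = 1550.0,
                       dn_dt: float = 1.8e-4,   # silicon thermo-optic coefficient, 1/K
                       group_index: float = 4.2) -> float:
    """Approximate shift: d_lambda = lambda * (dn/dT * dT) / n_g."""
    return wavelength_nm * dn_dt * delta_t_k / group_index

# A 10 K swing moves the resonance by roughly two-thirds of a nanometer,
# enough to detune a dense-WDM channel without active control circuitry.
print(round(resonance_shift_nm(10.0), 2))  # 0.66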
There's still things we don't know, you're saying. This all sounds, it just echoes what's going on
in quantum. Only I'd say less so, in the sense that you're closer than quantum for readiness. But
is there going to be an issue, say, we're three years out, these new chips are bursting on the scene,
everybody wants to use them, with silicon photonics? What about integration of the new chips
with the installed base of copper-interconnect-based chips?
Is that going to be an issue in data centers or in servers?
You mean there'll be like a mix of some things with just copper
and some things with just photonics?
Is that your question?
Yeah.
It's a good question.
My thinking... it's definitely a good question.
I don't know that I have a clear answer,
because I would say, you know, the system owner, right,
the system vendor or the operator that has
maybe the more mature technology in their data center and now wants to add photonics,
I don't know that they're going to keep the old stuff and somehow add the new stuff as well.
More, the things that I hear being worked on are replacements.
So, you know, we're going to, you know, replace the racks with, basically, you know,
photonic-enabled racks.
I don't know about the interoperability of, you know, the older stuff with the newer photonics.
It's a fair question.
It's definitely a fair question.
That's a unit anyway in the data center these days, especially with the way NVIDIA has been architecting them.
Yeah.
I mean, definitely the data centers have fiber.
The fiber installation is an expensive thing.
And they do love to upgrade, you know, without having to rip out the fiber and change.
And I think that's very doable.
That's not going to be an issue so much as inside their own
rack. Plus, you know, everybody wants to sell new systems, so that's good for business.
That's right. I also had a question about manufacturing. One of the things that keeps coming up
with anything that is analog or analog-looking is that it is hard to consistently manufacture to
the same specification, because it's like the AM radio that you have to tune or, you know,
like a piano that falls out of tune. Is that an issue with optical
interconnects, or are systems sufficiently consistent, or can they be manufactured, or AI-enabled, to
tune themselves back to the right zone? What does that situation look like? Yeah. So in the context,
right, of these photonic interconnects, that issue does exist, but a little bit differently,
perhaps, than, you know, what you had in mind with regards to analog systems. So a typical WDM
photonic link that we are envisioning would have data modulators and receivers, right,
that are designed to operate at a certain wavelength. And we can combine them together
so that we can have many wavelengths potentially, you know, running through the link. That's
how we get the bandwidth density. We scale in the number of wavelengths. And definitely when
we manufacture these modulators and filters and other components in CMOS fabrication,
sort of conventional foundries, and they come out, there'll be some variations, right? There'll be some:
we design them for, you know, this exact frequency, but we measure, and the resonance is a little bit
off that frequency, and that modulator needs to be tuned. And so that's absolutely being
worked on; that's really part of the package right now. So the circuitry that goes along with
interfacing to these optical chips typically has two parts. It has the high-speed digital
data part, the data transmitter and the receiver side of it as well. And it also has a bit of
analog circuitry, control circuitry, that is part of the chip, part of a system. It could be a
separate chip sometimes; it depends. But that is used for calibrating, tuning up, you know, getting the
photonic chip lined up to where it needs to be. So that certainly is an issue. And all of these
things can also change with temperature. So this control has to adjust for that as well. But,
you know, it's something that we're used to doing. We've had lasers, commercial lasers for many
decades. All the lasers have, you know, circuitry that keeps it, keeps them, you know, in tune and so
forth. We've had transmitters for fiber optic systems, you know, for
decades. And so we know how to do it. The challenge for this application, for inside the AI system
and the data center, is to be able to do it in a much smaller footprint, right, to regain that
bandwidth density, and with a lot less energy. Energy is like the, you know,
first, second, and third class citizen in all the designs. Very important. So.
So, yeah, definitely technical, again, not insurmountable challenges, but absolutely part of the process.
Keren, on power consumption, what is your sense of how things will play out?
When these chips come on the market, will data centers run demand for compute up to the same level of power consumption that data centers are already consuming now, or more so in the future, so that it nets out that we haven't really addressed the power problem?
Or will this make a big dent in the power consumption? I think that, you know, if you have a
data center that you built, you know, with whatever hundreds of megawatts, you're going to try to
use all the power that you're paying for. You're not going to all of a sudden use half the power
or whatever you built. What the photonics will do is enable you to continue to scale your compute
and capabilities inside of that envelope. And so I believe the way that it might roll out is
the initial photonics will not be the most energy efficient that we can make things.
There's still more research that we can do.
There's still more technologies that we can bring to the table, make things even more efficient
than they are, than they will be in my magic year of 2028.
And that will be one, let's say, mark in the ground, right?
Let's imagine that we bring in this first generation of co-packaged photonics,
and, you know, you're definitely going to get a gain in the compute,
and a gain in the efficiency of how you run the application.
Imagine you have a certain application that, instead of running a certain amount of time,
now takes 10x less, right?
So you can run 10x more things under the same power supply envelope.
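The arithmetic behind that fixed-power-envelope argument can be sketched with entirely made-up numbers: if each job finishes 10x faster at the same power draw, the same facility completes roughly 10x the work.

```python
# Toy model of the fixed-power-envelope argument. All numbers here are
# hypothetical, chosen only to illustrate the arithmetic.
def jobs_per_day(facility_mw, job_power_mw, job_hours):
    """Jobs completed per day under a fixed facility power budget."""
    concurrent = facility_mw / job_power_mw  # jobs that fit in the budget at once
    runs_per_day = 24.0 / job_hours          # sequential runs per job slot
    return concurrent * runs_per_day

baseline = jobs_per_day(facility_mw=100, job_power_mw=10, job_hours=5.0)
photonic = jobs_per_day(facility_mw=100, job_power_mw=10, job_hours=0.5)
print(round(photonic / baseline, 6))  # -> 10.0: same power, 10x the work
```

The power bill stays flat while throughput scales with the speedup, which is the "bend the curve" effect.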
And as we continue to get better with the photonics, and we have paths towards that.
Right now they may be more in the research stage, but they are evolving.
And the photonics will get even more energy efficient.
and you'll be able to do more.
Yes, I'm sure that, assuming that this AI generation continues to grow,
as we expect, more data centers will be built and more power will be drawn.
But what we can do is make things a lot more efficient and maybe bend the curve.
We're not going to turn down the power, because I don't think the operators
are going to turn it down, but we can bend the curve and make it scale much more reasonably.
Right.
Okay.
What does the global scene look like? When you look at research as well as production,
we mentioned Huawei: their CloudMatrix, I think it's called, or Atlas, and their
Ascend. We've covered them in our podcast in the past. But obviously, the research on the global
scene is also an interesting topic. Who's good at this stuff besides obviously the US and your
own lab in the US? Yeah, the US is very good. We have, certainly we have leadership in various
things. I would say, you know, China is amazing. You know, they have put just incredible resources
into this, basically by doing what I hope we can do more of here in the US: having
combined, you know, very targeted efforts from the government, industry, and
the research enterprise, including universities and other research institutions, and just
pouring resources into it. Are they ahead of us? I honestly don't know. But if I look at what's
happening in China, I can see from the papers that are being submitted to conferences and publications
and at least what's in the public domain, amazing, really impressive. So they're a big player on this
and I believe they will continue to be a growing player. The US, of course. And obviously, of course,
there's very strong, very strong work in Europe and in Japan as well, of course. Right.
Does their strategy really focus on the interconnect aspect, as opposed to the pure processing part,
to try to catch up with the West on chips? It certainly is part of it, you know, maybe because the
processor part was more limited, you know, due to the geopolitical things that are way over my head.
All of us. All of us, right? But one way or the other, you know, it's highly accelerated. It's very
impressive. Last time we talked, I remember we also mentioned fabrication technology.
and I seem to recall that you mentioned for optoelectronics,
you don't really need the leading, leading edge of chip manufacturing.
As you mentioned, something like 300 nanometers would do;
you don't really need two nanometers out there.
Has that changed or is that still the situation?
It still is. 300 is a little high, but I would say, you know, 90 nanometers and below.
We use fabs that are 180 nanometers.
We use a fab that's 65 nanometers, so definitely much more mature technology than, obviously, the two-nanometer or even smaller nodes.
You don't need that.
And the reason is basically because the optical wavelength is way bigger, right, than the electronic feature sizes.
And so what we really need from the fab is to be able to make the optical photonic structures, you know, the waveguides, the switches, the resonators, everything that we design,
so that we don't have losses. One of the key design parameters is minimizing loss.
The less loss you have in the photonic circuit, the less energy you need from the laser,
the less sensitivity you need from the receiver, and therefore the power consumption goes down
and the more bandwidth you can send through. So it's all about the loss budget
as a primary focus. And that's great, because you can achieve that with, you know,
processing technology nodes that are more mature. However, what's also very important, we touched
on that a little bit before, is the variability. If you use, let's say, you know, a 65 nanometer
node, right, but you use tools in that fab line that are advanced tools, tools that are
used for more advanced nodes, then you get much more consistent accuracy and repeatability. And that
can play a huge role in the ultimate performance of the PIC. I see. So it's almost like consistent
quality of every data path, let's say. Exactly. Exactly. It's also good for packaging,
just to add to that a little bit. So the PIC on its own, if I just give you a PIC,
there's nothing you can do with it. You can't turn it on. You can't send it
any data. It's like a brick of silicon. So I need to connect
that PIC to a laser of some type, or co-integrate the laser. And I need to connect everything to
the electronic domain, to send data, to power the laser, you know, everything. And so the other
really important aspect is the packaging. We talk about co-packaged optics, and the packaging,
the assembly, is very important.
And being able to do things, you know,
at 300-millimeter wafer scale with advanced tools
also lets you do things like, you know, 3D packaging
that enables you to co-assemble with electronics in a very precise way.
You can have very small pitches.
All these things enable you to reach better and better bandwidth density and performance.
Right, right, right.
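The loss-budget reasoning above can be sketched as a toy link budget in dB. Every number below is an illustrative assumption, not a figure from the discussion: received power is simply laser power minus the sum of the losses along the path, and whatever clears the receiver sensitivity is the margin.

```python
# Toy optical link budget in dB. All values are illustrative assumptions.
laser_dbm = 6.0                      # per-wavelength laser output power, dBm
losses_db = {
    "fiber-to-chip coupling": 1.5,   # hypothetical coupler loss
    "waveguide propagation":  1.0,
    "modulator insertion":    3.0,
    "chip-to-fiber coupling": 1.5,
}
received_dbm = laser_dbm - sum(losses_db.values())
sensitivity_dbm = -8.0               # assumed receiver sensitivity, dBm
margin_db = received_dbm - sensitivity_dbm
print(f"received {received_dbm:.1f} dBm, margin {margin_db:.1f} dB")
# -> received -1.0 dBm, margin 7.0 dB
```

Shaving any loss term translates directly into a lower laser power or a less sensitive receiver, which is why loss minimization drives the energy budget.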
And PIC, which you mentioned, stands for photonic integrated circuit.
That's what that means?
That's right.
That's right.
Also, I remember from our last conversation: photonics for communication, check;
photonics for computing, I don't know, maybe or maybe not.
What is your latest take on optical computing?
It's a very interesting topic.
I think that at this point in time, it's in the research domain.
And very interesting.
You know, I myself am working on it.
I think that there are some key functions you can do in the optical domain that, because of the natural parallelism you get there, could be extremely interesting for accelerating certain compute functions.
And my argument goes like this: in the past, right, there's been a long, sometimes checkered history of optical computing, going back to the 90s, you know.
what some people did in the past was that, okay, I'm going to build a computer and make
optical, all optical gates and do everything in the optical domain.
And that turned out to be, you know, not a very promising path in the end, and it kind of crashed
and burned.
Sounds really hard.
Yeah.
It was very hard.
It was exciting to think about, you know, I remember, you know, we did things like optical
logic gates and things like that.
And it was nice research, but compared to what you can do in electronics, even back then,
it was night and day.
But let's say we get to everything that we've talked about today, where we've already paid the price of converting to the optical domain.
Let's say we're in a world where embedded photonics, co-package optics is part of the system.
Just like I said earlier about the switches, you know, we can keep going.
So now we've already paid the biggest cost, which is moving from one domain to the other, from the electrical to the optical and back.
So we've already paid the price.
We paid the energy costs, everything.
Maybe we can do something with that data in the optical domain.
Now it makes a lot more sense to me.
If it's already there, it can be used.
It's already there, yes.
And so that's the question.
It's still in the research domain, but I think that there are some really exciting things on the horizon.
I was going to ask you about the cost of transceivers, and now they're embedded, they're everywhere.
Are they wire speed, with no issues on energy,
or is that still like a lump that one needs to worry about?
Sounds like it's the latter.
The cost of the transceiver, you mean?
Just the cost in terms of time and energy, in terms of latency.
Oh, yeah, no, it's really minimal.
It's really minimal.
I see.
Yeah, it's very minimal.
It's almost instantaneous.
It's like in the picoseconds domain, yeah.
Got it.
Unless, let me just add a caveat:
there are certainly transceivers where the data is much more complex,
like a much higher-order modulation format,
coherent modulation, all the stuff that's being used in telecom.
No, those are not good for this application because you have to do a lot of signal processing
and all kinds of stuff, which adds a lot of energy and latency.
So that's not what we're talking about here.
Here, we're thinking about these very stripped-down links, you know, very typical of HPC systems
where you have a proprietary interconnect, now in the optical domain.
The latencies at the interfaces are really minimal.
Right, right. Excellent. You know, a couple of years ago, I thought I had this brilliant idea:
if you take a number and you want to factorize it, why, you know, can't there be like a
prism, and you shine it, and whatever comes out on the other side are the factors?
So then I go Google it, and indeed somebody had already worked on it.
Yes, it wasn't a new idea. So it seems like there...
Yes, that, and things like doing optical,
you know, that's what I meant by functions,
things like optical FFTs and others that naturally use the parallelism of the photonics.
Right. So I wanted to ask you about the company that you have co-founded.
And I think, for anybody who wants to go look it up, it's xscapephotonics.com, if I'm correct.
Yes.
What is it about? What are you guys trying to do? I was delighted to hear about it.
And, you know, as much info as you care to share with us.
Oh, absolutely. And thank you. Thank you for that.
we're very excited. So it's I and two other Columbia faculty: Michal Lipson, who is a pioneer of silicon
photonics, and Alex Gaeta, also a faculty member at Columbia and a pioneer in nonlinear optics, and especially
comb laser technology, and our other founder, the company's CEO, Vivek. So the company is about
really bringing a lot of what we talked about to the commercial world. And the kind of the key
differentiator that we have is the comb laser technology. So imagine that you have a
single laser that generates simultaneously any number of wavelength channels that are exactly, precisely
spaced. You can design and space those channels exactly how you want. Each one of the channels can
deliver relatively high power, and it's all driven by a single laser, which is a very energy
efficient way to do it, if we trace it all the way back to the wall-plug efficiency
and so forth. The three of us at Columbia have
been working together for over 15 years. And in the last number of years, you know, we combined
this comb laser, which generates many wavelengths together with the link architecture work that
I do to deliver everything that we just talked about, you know, the high bandwidth density
We're talking, just to give some numbers, about reaching 10 terabits per second per millimeter,
up to 40 or even more as we keep going.
And this is edge bandwidth density that you get out of the chip fiber coupled.
Inside, if you're thinking about a panel scale computing system, you can get even higher
bandwidth density.
It's just incredible.
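Those edge-density figures can be sanity-checked with a back-of-the-envelope sketch. All parameter values below are illustrative assumptions, not Xscape specifications: a comb source gives evenly spaced channels, and edge bandwidth density is channels times per-channel rate times couplers per millimeter of chip edge.

```python
# Back-of-the-envelope edge bandwidth density with a comb laser source.
# Every parameter value here is an illustrative assumption.
num_channels  = 16      # wavelengths generated by one comb laser
rate_gbps     = 100     # assumed per-wavelength data rate, Gb/s
fibers_per_mm = 8       # assumed fiber couplers per mm of chip edge

edge_density_tbps_per_mm = num_channels * rate_gbps * fibers_per_mm / 1000
print(edge_density_tbps_per_mm)  # -> 12.8 (Tb/s per mm of chip edge)

# Comb channels are exactly evenly spaced: f_n = f0 + n * df.
f0_thz, df_ghz = 193.1, 100.0   # assumed anchor frequency and spacing
channels_thz = [f0_thz + n * df_ghz / 1000 for n in range(num_channels)]
```

Pushing toward the 10 to 40 Tb/s per millimeter mentioned above then comes from more channels per comb, higher per-channel rates, or denser edge coupling.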
So we decided that this is very exciting.
So a few years ago, we launched the company.
Obviously, the vision of getting to these very large wavelength counts and bandwidth densities and so forth is on our roadmap.
We have the ability to get there.
In the nearer term, where the market is going in the next, let's say, two or three years, is a smaller number of channels.
So let's say eight channels or 16 channels.
So that's primarily what we're focused on right now in the company, is bringing that to market.
Excellent.
And the use case for this will initially be in data
centers, inside racks, et cetera?
Exactly. This is about the fabric, getting a photonic fabric, you know, within the rack,
or blade to blade, you know, those kinds of distances, yeah.
I see. So really, it just fills the spectrum of what is eligible to go optical.
Exactly. Exactly, right. Yeah.
Excellent. Excellent. Well, good luck with that. I noticed the announcement from Columbia University,
actually, because they were bragging about, you know, the funding that you raised, a very
successful round. It was like $45 million or something. So that all sounds excellent. Best of luck
for that. I think that's exciting and advances the state of the technology. Thank you so much. Yeah,
we are. We're super excited about it. And as you said, you know, there's a lot of interest in it.
And now comes the hard part, right? Doing all the engineering work and
really bringing this technology to the forefront. And we're very excited about it. Well, you have the
dream team. And you've got the runway. So it all bodes well.
We hope so.
Excellent.
Well, thanks for spending this time with us.
Doug, any other topics from you?
No, I was just going to make the comment.
You know, it's hard to raise that kind of money,
but then it's harder to do what you're supposed to do with the money.
So best of luck, all that.
Hardware.
I like to say hardware is hard.
Hardware is hard.
Indeed.
Yes.
No, I think that's it.
That was a great conversation.
And really appreciate you bringing us up to speed in this area, Keren.
Thank you both so much.
You know, this is my favorite topic to talk about.
I can be here all day.
Awesome.
Thank you so much.
What a treat.
Appreciate it and look forward to catching up again later
and running into you at SC.
Absolutely.
That's it for this episode of the At-HPC podcast.
Every episode is featured on InsideHPC.com
and posted on OrionX.net.
Use the comment section or tweet us with any questions
or to propose topics of discussion.
If you like the show, rate and review it on Apple Podcasts
or wherever you listen.
The At-HPC podcast is a production
of OrionX in association with InsideHPC.
Thank you for listening.