Semiconductor Insiders - Podcast EP334: The Unique Benefits of LightSolver’s Laser Processing Unit Technology with Dr. Chene Tradonsky

Episode Date: March 6, 2026

Daniel is joined by Dr. Chene Tradonsky, a physicist and the CTO and co-founder of LightSolver, where he leads the development of a proprietary physics-based computing system built on coupled laser dynamics to accelerate compute-heavy simulations and other computationally demanding workloads. Before moving into physics, he started in electrical engineering, a combination that helps him bridge advanced computing and complex physical systems.

Transcript
Starting point is 00:00:07 Hello, my name is Daniel Nenni, founder of SemiWiki, the open forum for semiconductor professionals. Welcome to the Semiconductor Insiders podcast series. My guest today is Chene Tradonsky, a physicist and the CTO and co-founder of LightSolver, where he leads the development of a proprietary physics-based computing system built on coupled laser dynamics to accelerate compute-heavy simulations and other computationally demanding workloads. Before moving into physics, he started
Starting point is 00:00:37 in electrical engineering, a combination that helps him bridge advanced computing and complex physical systems. Welcome to the podcast, Chene. Thank you, Daniel, for having me. I'm really excited to be here and talk with you today. I'm excited as well. It's a great topic. So to start, please tell us a little bit about your background and your company, LightSolver. Well, I started in electrical engineering, and then I moved into physics. While working on my master's degree at the Technion, I worked on semiconductor quantum dots, which gave me a strong foundation in quantum systems and photonics.
Starting point is 00:01:19 So my interests positioned me at the intersection of engineering and complex physical systems. But when I think about what took me in that direction, well, I have always been fascinated by this idea: can we tackle computationally demanding problems, like heavy simulations, by taking a physics-based approach, building the right physical system, a system that mimics the original process, instead of forcing everything onto digital computers? And while I was working on my PhD at the Weizmann Institute, it all came together for me.
Starting point is 00:02:03 I worked on coupled laser networks. Originally, I started my study on emergent network behavior, like synchronization and pattern formation across different topologies. And those laser networks are actually controllable physical systems, and I realized that they can mimic or simulate other physical systems as an alternative approach to digital computing, and much, much faster. I knew I was moving beyond academic research and down the path to a new computing paradigm. And fortunately, at Weizmann, I also met Ruti Ben Shlomi, who was working on real quantum systems, specifically interactions of cold atoms and cold ions. And we shared the same intuition that physics-based dynamics
Starting point is 00:02:55 could be the foundation of a practical computational platform. And in 2020, we founded LightSolver, built on the promise of developing a physics-based processor built on coupled laser dynamics to accelerate compute-heavy simulations and other computationally demanding workloads. That was the genesis of the Laser Processing Unit, or the LPU. Interesting. From what I understand, optical and photonic processors are being experimented with as accelerators for AI. But you've argued that one of their most important near-term uses will actually be in scientific simulation.

Starting point is 00:03:37 What's drawing optical processors toward solving partial differential equations instead of just, you know, chasing AI workloads? That's an excellent question. Optics and photonics processors are absolutely being explored for AI, and the core idea makes sense. Optics can do certain operations very efficiently, like Fourier transforms or vector-matrix multiplication. The practical challenge is that large AI systems demand extreme scale, precision, and stability. And in many photonic AI concepts, you still need substantial electronic control, calibration, and optoelectronic I/O conversion.
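An editorial aside for readers: one reason a native optical Fourier transform is valuable is that expensive operations like convolution collapse to cheap elementwise products in the Fourier domain. The sketch below is a plain digital illustration of that identity using NumPy, not LightSolver code or hardware.

```python
import numpy as np

# Circular convolution computed two ways: directly, and via the FFT identity
# conv(x, k) = IFFT(FFT(x) * FFT(k)). A lens gives optics the Fourier
# transform step essentially "for free"; digitally it costs O(n log n).
x = np.array([1.0, 2.0, 3.0, 4.0])
k = np.array([0.5, 0.25, 0.0, 0.25])

via_fft = np.real(np.fft.ifft(np.fft.fft(x) * np.fft.fft(k)))
direct = np.array([sum(x[j] * k[(i - j) % 4] for j in range(4)) for i in range(4)])

assert np.allclose(via_fft, direct)  # both routes agree
```

The same diagonalization trick is why optical setups can apply large linear operators in a single pass of light.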
Starting point is 00:04:26 And when you look at the full system, those overheads can reduce or eliminate the end-to-end advantage. That requires careful co-design, and it still needs some research breakthroughs to reach practicality and scale. Scientific simulation is different, and this is where we are focused. A large fraction of HPC, or high-performance computing, workloads are partial-differential-equation-based simulations. They sit at the core of engineering and science, from, you know, fluid dynamics and heat transfer to multi-physics systems like weather forecasting and fusion reactors. Those types of simulations are the foundation of critical applications such as aircraft and automotive design, energy systems, and advanced manufacturing, and this is where we excel.
Starting point is 00:05:29 Our laser processing unit, or the LPU, is not a photonic version of a conventional computer. It's a fully parallel physical system. All variables are represented simultaneously inside the coupled laser cavity, and the interactions are implemented as engineered couplings between those lasers, light from one laser to another, in a controllable fashion. So the state of the entire system evolves at once in hardware, without the constant memory fetches and data movement that you see in digital architectures. This is one of the major bottlenecks in HPC systems today. In practice, you program the coupling for a specific PDE-based task
Starting point is 00:06:20 and then let the physics evolve and fall to a stable state, and then you just read out the result. This improves both time to solution and energy efficiency for the most demanding parts of the task. Okay. So let's level-set for listeners who don't live inside HPC every day. What are partial differential equations, and why do they sit at the heart of so many simulations in industries like aerospace, automotive, and climate science? So partial differential equations, or PDEs, are the mathematical language used to describe
Starting point is 00:06:59 how physical quantities change across space and time. Instead of a single number changing with time, you describe an entire field, like velocity, pressure, temperature, or stress, defined everywhere in three-dimensional space and evolving over time. That's why PDEs sit at the heart of so many industrial and scientific simulations. Airflow over a wing, heat transfer in an engine,
Starting point is 00:07:29 mechanical stress in a structure, climate and weather models, and even parts of chip design, like heat dissipation and electromagnetic fields, are already described by PDEs with boundary conditions and material properties. A good example is turbulent airflow around a wing. The wing is meters in size, but the thin boundary layer that determines drag and stall can be orders of magnitude smaller.
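An editorial aside: the "field evolving over space and time" idea is easy to see in code. Below is a minimal finite-difference sketch of the 1D heat equation, purely illustrative and unrelated to LightSolver's hardware. The field is discretized onto a mesh, and every mesh point is updated at every time step, which is exactly why fine meshes get expensive.

```python
import numpy as np

# Explicit finite-difference solver for the 1D heat equation u_t = alpha * u_xx
# on the unit interval with fixed zero boundaries. A digital sketch of
# "discretize the field, then step the whole thing forward in time".
def solve_heat_1d(n=101, alpha=1.0, t_end=0.05):
    dx = 1.0 / (n - 1)
    dt = 0.25 * dx ** 2 / alpha        # explicit scheme is only stable for tiny dt
    u = np.zeros(n)
    u[n // 2] = 1.0                    # initial condition: a heat spike in the middle
    for _ in range(int(t_end / dt)):   # every time step touches every mesh point
        u[1:-1] += alpha * dt / dx ** 2 * (u[2:] - 2.0 * u[1:-1] + u[:-2])
    return u

u = solve_heat_1d()                    # the spike diffuses into a smooth bump
```

Note how the stability limit ties the time step to the square of the mesh spacing: halving the spacing in 1D roughly octuples the work, and in 3D the blow-up is far steeper, which is the cost growth the discussion turns to next.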
Starting point is 00:08:00 To capture both scales, you discretize the space into a very fine 3D mesh and step the solution forward in time. And each time step requires solving a large coupled system of equations, repeatedly, until the solution is stable. This is why PDE simulations consume enormous amounts of energy and compute, and why, as we increase the fidelity and resolution,
Starting point is 00:08:37 the computational cost grows dramatically, often faster than the performance we gain from each new generation of hardware. So you mentioned that optical and photonic processors will start moving out of research labs, maybe this year, and into real operational environments. What does that shift look like in reality, and where do you expect these systems to show up first? So I don't think the move into operational environments will look like, you know, a sudden revolution or a complete replacement of the digital computing paradigm.
Starting point is 00:09:13 It will look like a pragmatic insertion into existing workflows, where optical processors are used as specialized engines and are judged by measurable time to solution and energy to solution. I think we will continue to see photonics adopted in I/O and interconnect, but the more interesting shift beyond I/O is when photonics starts contributing to computation itself. In practice, it will show up at first as domain-specific accelerators that plug into existing
Starting point is 00:09:51 HPC pipelines rather than replacing them. The early deployments will be small-scale pilots inside HPC centers, with industrial simulation teams focused on the models that are most intensive in terms of runtime and power consumption. And the key is not doing everything optically, but offloading the parts where the physical system can evolve in parallel efficiently and deliver stable and repeatable outcomes quickly. I also expect physics-based, free-space optical systems to appear early in those pilots
Starting point is 00:10:39 because they can be packaged as appliances, integrated with standard software workflows, and evaluated on real industrial problems with clear acceptance criteria. And I think the winners will be the systems that integrate cleanly, show repeatable gains, and come with a realistic validation path. Okay. You also introduced the concept of physics-native computing as a new category of hardware. What does that term mean in simple terms? And why is now the moment for it to emerge alongside CPUs, GPUs, and even, you know,
Starting point is 00:11:18 the coming quantum systems? So, sure, let's explore that. Physics-native computing means using a controllable physical system as a computing engine, instead of simulating everything step by step on a digital processor. You program the interactions of the physical substrate so that its dynamics match the mathematical structure you care about, you let it evolve, and then you read out the results. This concept is not new in spirit. Engineers have long used physical models as computers, like wind tunnels in aerodynamics or analog electronic circuits for dynamical systems. What held many of those approaches back from being accepted as general computing tools was programmability. If each new problem requires building a new physical setup, you can't scale beyond a narrow use case. What is changing now is significant: we understand how to build physics-native systems that are programmable and repeatable enough to be reused across classes of problems, and this is the game changer here, the programmability. Optics is a good example. In optics, you can represent many variables simultaneously and engineer the interactions between them in a reconfigurable way, so the same hardware can be adapted for different types of
Starting point is 00:13:10 tasks, rather than locked into a single task. We also now see the parallel to quantum computing. One of the original motivations for quantum computing, as Feynman noted back in the early 1980s, was to simulate quantum systems that are extremely hard to compute classically. Physics-native computing follows a similar high-level principle, but in our case, the target is classical computing workloads. The point I want to emphasize is that there is very little overlap between those systems, between what you would choose to run on a quantum computer and what we can run on our system.
Starting point is 00:14:00 So both can emerge as different, complementary approaches alongside CPUs and GPUs. Okay. So you're describing a gradual approach of integrating optical engines into existing HPC infrastructure, which makes sense. How should simulation teams think about integrating optical hardware into their current toolchains to gain speed and energy efficiency without breaking what already works? That's a great question.
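An editorial aside before the answer: the kind of "well-defined compute primitive" the answer mentions, such as an iterative Poisson solve, looks like this in plain digital form. This is an illustrative Jacobi relaxation only, not LightSolver's API; the hybrid idea is that an accelerator would take over this inner loop of simultaneous updates.

```python
import numpy as np

# Jacobi relaxation for the 2D Poisson equation -laplacian(u) = f on the unit
# square with zero boundaries. Every interior point is updated from its four
# neighbors on each sweep; the field relaxes toward the stable solution.
def jacobi_poisson(f, iters=500):
    u = np.zeros_like(f)
    h2 = (1.0 / (f.shape[0] - 1)) ** 2          # grid spacing squared
    for _ in range(iters):                      # each sweep updates all points at once
        u[1:-1, 1:-1] = 0.25 * (u[2:, 1:-1] + u[:-2, 1:-1]
                                + u[1:-1, 2:] + u[1:-1, :-2]
                                + h2 * f[1:-1, 1:-1])
    return u

n = 33
f = np.zeros((n, n))
f[n // 2, n // 2] = 1.0                         # a single point source
u = jacobi_poisson(f)                           # field peaks over the source
```

In the partitioning described in the answer, the digital side would still own meshing, verification, and the fallback path, while the sweep loop is the repeatable kernel a specialized engine could replace.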
Starting point is 00:14:30 There is no overnight replacement of HPC. Simulation teams should not think about the optical hardware as a new monolithic platform that forces them to rewrite everything. The winning path is hybrid: introduce an optical engine as a complementary compute component inside the existing workflow. Simulation teams should think in terms of partitioning the computation. The digital side still does a lot of the heavy lifting, meshing, discretization choices,
Starting point is 00:15:12 multi-physics coupling, pre- and post-processing, and verification, and it also remains the system of record for regression tests and accuracy metrics. The optical engine is then used selectively for the parts of the computation that match its strengths. For example, tightly coupled updates over many variables that can evolve simultaneously in a physics-native way. Think of the iterative computations in your workloads, for example, Poisson or wave or Navier-Stokes equations. If you're familiar with those types of partial differential equations, you can think of it as a well-defined compute primitive, a small repeatable computing task inside a broader pipeline, so you can measure end-to-end time to solution and energy to solution while maintaining
Starting point is 00:16:12 a clean fallback path to the digital baseline. This way, teams adopt incrementally. They do not break what already works. They add the accelerator, the optical engine, validate it on a subset of workloads, and expand usage only when it's stable, scalable, and operationally simple enough to integrate. Final question, Chene. How do customers normally engage with you and your company? I mean, your website is lightsolver.com. Yes. What else can they do to get in touch with you? So they can contact us through the website. They can learn a lot about us through the website. There's a lot of information there.
Starting point is 00:17:03 And they can contact us, and we can give them access to additional material to learn how to work with the LPU. They can use it as background for education, and they can also contact us directly to work on our LPU 100. We have an LPU lab accessible already for early adopters, so if it's very relevant to them, we'll also give them access. And with that, I want to thank you for this opportunity. Yeah, thank you to our guest. It was a great conversation, and I hope we can keep in touch. Let us know how your progress is going, and, you know, I hope to talk to you again soon. Yeah, it's really exciting to be here, and thank you again. That concludes our podcast. Thank you all for listening, and have a great day.
