SemiWiki.com - Podcast EP342: The Evolution and Impact of Physical AI with Hezi Saar
Episode Date: April 17, 2026
Daniel is joined by Hezi Saar, Executive Director of Product Marketing at Synopsys. Hezi is responsible for the mobile, automotive, and consumer IP product lines. He brings more than 20 years of experience in the semiconductor and embedded systems industries. Dan explores the growing field of physical AI with Hezi, who explains…
Transcript
Hello, my name is Daniel Nenni, founder of SemiWiki, the open forum for semiconductor professionals.
Welcome to the Semiconductor Insiders podcast series.
The guest today is Hezi Saar, Executive Director of Product Marketing at Synopsys.
Hezi is responsible for the mobile, automotive, and consumer IP product lines.
He brings more than 20 years of experience in the semiconductor and embedded systems industries.
Welcome to the podcast, Hezi.
Hey, thank you for having me.
You know, I just noticed you've been at Synopsys for more than 17 years.
That's quite a tenure.
Can you tell us a little bit about your journey?
Yeah, sure.
So I started in the semiconductor space and then moved to the IP space.
I was heading the MIPI product line when I joined Synopsys.
I grew it into a very healthy and very profitable position.
Now I'm overseeing multiple markets, mobile, automotive, and consumer,
so rather than a single product line, I'm actually looking after a variety of markets.
The market has evolved to a level where innovation is happening in so many applications,
which also makes it very challenging to service going forward.
Yeah, I spent probably more than half of my career in IP.
I tell you, the most rewarding part is IP moves so fast and it's so critical.
And with AI, I think that's the case as well.
You know, I keep hearing about physical AI.
So, Hezi, can you define it and describe what's driving the interest?
Sure.
So I think the AI that we mostly hear about, and the buzz around it, is AI in high-performance
compute and the cloud.
And that is mostly for training.
Some is also for inference.
When we talk about edge AI, the AI is running on the device itself,
so in the palm of our hand, where it's really interacting with humans.
When we talk about physical AI, it's a subset of that,
which means applications that are actually interacting with the physical world.
So they're not only performing in the digital domain;
these kinds of autonomous systems are able to understand, reason,
and perform or orchestrate complex actions in the physical world.
So they need the ability to perceive the world through computer vision or other sensing capabilities, to understand natural language instructions provided to them through different means, and then to translate those instructions into physical actions and perform them.
So we're talking about typically applications such as self-driving cars, robots, etc.
Yeah, it's a big market.
But can you touch on the evolution of AI models at the edge, and what is needed to support physical AI?
Yeah, so what we are commonly using today is mostly LLMs, large language models that process text to recognize, understand, and generate human language.
So in that context, we provide a text input and then we receive a text output.
We can ask a question, the LLM will understand the language being used, and it provides us text as an output.
The next stage of evolution is the vision language model.
There are different models in that group, but generalizing, it's a specific type of LLM that processes both text and visual data.
So both kinds of input come into that model, and it can describe and analyze images and video streams.
For example, it can look at an image, and you can ask a question: what is, let's say, unusual in that image?
So the model is able to reason over both the text that was provided and the visual input.
When we're discussing physical AI, what is added
is the component of action.
So we're discussing a vision language action model,
a VLA,
extending the VLM by adding the ability
to generate physical actions.
So if we're discussing robotics,
or any autonomous system, let's say there is
some kind of task being provided
to the vision language action model,
and there's some visual input
provided as well.
So imagine, let's say, a humanoid
that receives an audio command:
hey, put the ball in the box.
The humanoid has the visual capability
to see the box and the ball.
And through those commands, as well as the visual,
the output will actually be an action:
the robot will actually move and put the ball in the box.
So this kind of evolution is enabling AI in the physical domain.
Interesting.
Can you also explain the design constraints across the spectrum of physical AI applications,
such as latency, bandwidth, or power?
Yeah, and this becomes really interesting for IP and semiconductors and electronics in general.
So when we look at the spectrum of the physical AI domain, we're looking at systems with a simple task, let's say a robotic arm, then a vacuum cleaner, then, say, medical robotics, mobile robotics, drones, humanoids, up to an autonomous driving car.
And for all of them, of course, there are different needs.
However, all of them share a common need for low power, and that is a necessity
across the board, regardless of whether the device is battery operated or can be charged immediately or
over time.
What's different between them is the requirement for computing power; for the computer
vision capabilities, so the frames per second that provide the visuals needed to
comprehend the environment, which effectively generates inferences per second;
the ability to deliver what we call AI capabilities, so time to first token, how fast they
should respond, so the latency of the response that is needed and how critical that is;
and then the bandwidth driven by the LLM, so the number of tokens per second.
So once you start getting the AI inferences, how much more data would you need?
So when you're driving a car, for example, you would need to make decisions and it depends.
What's the environment?
So that's really what's driving some of the requirements.
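The metrics Hezi lists, time to first token, latency, and tokens per second, combine into a simple response-time budget. The figures below are illustrative assumptions, not measurements from any real device:

```python
# Back-of-the-envelope response-time budget for on-device inference.
# All numbers are illustrative assumptions.
time_to_first_token_s = 0.25   # prompt processing before the first token appears
tokens_per_second = 40.0       # decode throughput of the on-device LLM
response_tokens = 60           # length of the generated answer

total_latency_s = time_to_first_token_s + response_tokens / tokens_per_second
print(f"total response latency: {total_latency_s:.2f} s")  # → total response latency: 1.75 s
```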
I would think we can also look at it at a higher level, a system level.
So power, thermal, always-on constraints: all these kinds of devices, both physical AI and edge devices,
operate under very tight power and thermal budgets.
So the AI workloads must deliver TOPS per watt.
That is really the metric we're looking for.
So it's not just peak TOPS, but what actually accommodates the environment and the system's
ability to deliver the energy and dissipate the heat.
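The distinction between peak TOPS and what the thermal envelope actually sustains can be made concrete with some arithmetic. Every number below is an illustrative assumption:

```python
# TOPS/W is the figure of merit: sustained throughput per watt, not peak TOPS.
# All values are illustrative assumptions, not any specific product's specs.
peak_tops = 40.0          # datasheet peak throughput
thermal_derating = 0.6    # fraction of peak sustainable inside the enclosure
power_budget_w = 8.0      # system power budget allotted to the AI workload

sustained_tops = peak_tops * thermal_derating
efficiency = sustained_tops / power_budget_w
print(f"sustained: {sustained_tops:.1f} TOPS, efficiency: {efficiency:.1f} TOPS/W")
# → sustained: 24.0 TOPS, efficiency: 3.0 TOPS/W
```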
Memory capacity and bandwidth bottlenecks are very critical when we talk about AI.
So on-device RAM capacity, memory bandwidth, data movement.
So it's not just compute; it's actually the memory bottlenecks that determine, or limit, the inference performance.
I/O and sensor data explosion, because you need to be able
to connect to multiple devices or multiple sensors: cameras, radar, audio, any industrial
sensor, depending on the environment where that physical AI is operating.
So there are many requirements there, and high-resolution video and multi-sensor fusion
create sustained ingress bandwidth pressure.
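That ingress-bandwidth pressure is easy to estimate. The resolution, bit depth, frame rate, and camera count below are illustrative assumptions, not figures from the episode:

```python
# Rough raw ingress-bandwidth estimate for a multi-camera robot.
# All parameters are illustrative assumptions.
width, height = 1920, 1080     # per-camera resolution
bits_per_pixel = 12            # raw sensor bit depth
fps = 30                       # frames per second
cameras = 6                    # surround-view setup

per_camera_gbps = width * height * bits_per_pixel * fps / 1e9
total_gbps = per_camera_gbps * cameras
print(f"per camera: {per_camera_gbps:.2f} Gb/s, total: {total_gbps:.2f} Gb/s")
# → per camera: 0.75 Gb/s, total: 4.48 Gb/s
```

Even before radar, lidar, or audio are added, uncompressed video alone puts several gigabits per second of sustained pressure on the I/O subsystem.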
So all of this has to be co-optimized to ensure there are no system bottlenecks.
Deterministic latency and real-time response: at the end of the day, those physical AI machines
operate in environments where they interact with humans, or in potentially dangerous environments,
industrial kinds of environments.
So we need to make sure they provide predictable latency that is acceptable
for the application where they are supposed to be operating.
Physical AI also potentially poses security risks.
So security, privacy, safety: all these kinds of regulatory requirements are either being
developed or need to be accommodated right now.
So that comes alongside the general AI requirements.
But in the physical domain, all of this needs to be taken into consideration.
And we touched on the spectrum of physical AI, but the system complexity really depends on the kind of application.
You cannot really do a one-size-fits-all architecture. You need hardware-software
co-design, and you need to ensure that what is being developed really addresses the market need.
Right. Yeah. So at what point does the physical
AI architecture start to look more like automotive ADAS, and what does that imply
for SoC reuse, you know, and standards and strategies and multi-die, etc.?
Yeah, so we're seeing physical AI as an emerging application, and some
approaches to the SoCs used for ADAS can also be used for, say, high-end physical
AI kinds of electronics.
The AI capabilities can potentially be shared.
So for level 3 ADAS, or level 4 for self-driving,
they can be reused in order to understand the environment
and make decisions based on that in terms of movement,
in terms of what's allowed.
Also the interactivity with humans,
such as face ID
and all the digital assistants, is something that can be reused.
Also the software-defined capabilities that are coming.
However, there are differences, of course, between them.
So the complexity, you can say, is similar, and some reuse can happen.
But as with ADAS, you have high-end ADAS and low-end ADAS.
Sometimes you have IVI functions that are not needed in physical AI robotics, for example.
So there are functions that can be reused.
And in order to enter the market with a capable, say, call it humanoid, I think for the most part it is possible to reuse silicon.
However, as we mentioned before, the challenges for automotive are different than the generic physical AI domain.
The safety criticality is different when you're talking about physical AI versus a
car. So when you look at physical AI, it's typically a fusion between a mobile application
processor and mission-critical computing. So we see needs from both
worlds, while in automotive ADAS it's really just mission-critical computing and
less about the multimedia set of things. Then sensors, multiple sensors, the ability to
sense the world: it depends on the spectrum of physical AI robotics you're going
after. It would potentially require a more flexible system there. The connectivity
inside the robot itself will be slightly different. You have more
connectivity to the user, so, for example, USB connectors or potentially
DisplayPort connectors are used more with robotics
that has more interaction with human beings.
And I would think the chip-to-chip connectivity
is also there.
So either PCI Express or something similar needs to be done.
So there are similarities, but in order for robotics
to be really useful across the spectrum of applications,
there will be a future need to do
more application-specific innovation
in robotics.
So you mentioned connectivity.
How do open standards like PCIe, MIPI, UFS, and emerging robotics frameworks help accelerate
innovation and reduce risk and time to market?
Yeah, so let me touch on that quickly.
I think the most important thing in physical AI is the AI machine, as we call it.
And that really takes the computing power, but also the memories. So storing the LLM or the
vision language action model is critical, and a mixture of experts depends on the
application itself and how you are using it. So using storage such as JEDEC UFS,
with its high bandwidth and low latency, is critical. At the same time, you have to have
very high bandwidth access to LPDDR DRAM using LPDDR6. So using the
latest and greatest JEDEC UFS and LPDDR specs is important
to have a capable machine that will allow the AI to reason and to create an action.
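The link Hezi draws between DRAM bandwidth and the AI machine's responsiveness can be quantified: for a memory-bound decode, each generated token must stream the model weights from DRAM, so bandwidth caps tokens per second. The model size and bandwidth figures below are illustrative assumptions:

```python
# Memory-bound LLM decode estimate: bandwidth / bytes-per-token bounds throughput.
# All figures are illustrative assumptions, not measurements.
model_params = 3e9            # 3B-parameter on-device model
bytes_per_param = 1           # 8-bit quantized weights
dram_bandwidth_gbs = 68.0     # aggregate LPDDR bandwidth in GB/s

bytes_per_token = model_params * bytes_per_param
max_tokens_per_s = dram_bandwidth_gbs * 1e9 / bytes_per_token
print(f"upper bound: {max_tokens_per_s:.1f} tokens/s")  # → upper bound: 22.7 tokens/s
```

This is why moving to a faster LPDDR generation or a smaller quantized model directly raises the achievable tokens per second.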
Then if we look at MIPI specifications, of course, cameras are everywhere, and the ability to bifurcate
and actually have multiple cameras that behave differently depending on the application is
important for SoC reuse.
I want to touch on UCIe, because we didn't talk about that in the previous
question. UCIe really enables you to have a variety of applications. You can have a
base die for the robotics product line, and then you have different dies or
chiplets that will have different identities. So you can go after the high-end
market by adding a homogeneous or heterogeneous kind of die that would allow you to have
more capabilities on the acceleration side, or more I/Os, or
both. So the ability of standards-based interfaces to enable SoCs to connect to the external world,
and to provide scalability within the system-level constraints we discussed before,
is critical to achieving the market needs. Well said, absolutely. So just to finish up,
can you summarize the opportunity ahead of us, and what do we as an industry need to do
to service it properly?
Yeah, so in the IP space,
and also in the semiconductor space,
we see customers that are already entering
the development cycle for physical AI.
It's definitely been a rising market over the last couple of years,
and for the next couple of years
there are projections of a CAGR of more than 55%
through 2035, with a market size of more than
50 billion US dollars. So the opportunity in this
AI market, which is not high-performance compute, is tremendous. So this is
really what's creating the excitement in the electronics market and semiconductor space.
However, as we said, right now this is just the beginning of it, so we are seeing that
most of the systems today are built on existing architectures, either
reused from mobile, from automotive, or from industrial systems.
The next step being developed right now, or being considered, is migrating to more efficient systems:
things that can be commercialized, optimized for the environment itself, for power, cost, size, but also for what we discussed before,
system-level optimization and co-design of hardware and software.
So all of that needs to happen.
The transition to these kinds of semiconductors that are more optimized for physical
AI is something that we as an industry need to embrace, using the experience that was
accumulated across multiple applications to create the best out of it.
So while we do have some standardized interfaces that can be reused, I believe some of
them will need to be tweaked or reworked, and potentially new ones will even be created that will
better service physical AI. So we as consumers, at the end of the day, will be able to use
physical AI electronics that will be safe, clean, not too costly, and will serve us in the way that we
want. Great conversation. Hezi, I always enjoy your visits. Please come back again,
and let's talk more IP when you get a chance.
Thank you, Daniel.
Appreciate it.
That concludes our podcast.
Thank you all for listening and have a great day.
