Semiconductor Insiders - Podcast EP299: The Current and Future Capabilities of Static Verification at Synopsys with Rimpy Chugh
Episode Date: July 25, 2025. Dan is joined by Rimpy Chugh, a Principal Product Manager at Synopsys with 14 years of varied experience in EDA and functional verification. Prior to joining Synopsys, Rimpy held field applications and verification engineering positions at Mentor Graphics, Cadence and HCL Technologies. Dan explores the expanding role of static…
Transcript
Hello, my name is Daniel Nenni, founder of SemiWiki, the open forum for semiconductor professionals.
Welcome to the Semiconductor Insiders podcast series.
My guest today is Rimpy Chugh, a principal product manager at Synopsys with 14 years of varied experience in EDA and functional verification.
Prior to joining Synopsys, Rimpy held field applications and verification engineering positions
at Mentor Graphics, Cadence, and HCL Technologies.
Welcome to the podcast, Rimpy.
Thank you, Daniel.
Happy to be here.
So can you give us a quick update on the status of static verification in today's design flow?
Sure.
So static verification as a domain has, just for the knowledge of the audience, gained immense importance in the designer world over time. And today, what we have seen is that designers want to identify and fix the maximum number of bugs earlier, at RTL, bugs like combinational loops and multiple-driver issues. So they want to get to a synthesizable, CDC-clean or RDC-clean RTL faster and expand that scope further.
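To give a flavor of what such a structural check does under the hood, here is a minimal Python sketch, on a hypothetical netlist representation, that flags a combinational loop and a multiply driven net. It is only an illustration of the idea, not Synopsys' implementation.

```python
# Minimal sketch of two structural lint checks on a toy netlist.
# The netlist format here is hypothetical: each entry maps an output net
# to the combinational inputs that drive it.

from collections import defaultdict

def find_combinational_loops(netlist):
    """Return nets that sit on a combinational cycle (DFS cycle detection)."""
    WHITE, GREY, BLACK = 0, 1, 2
    color = defaultdict(int)
    loops = []

    def dfs(net, path):
        color[net] = GREY
        for src in netlist.get(net, []):
            if color[src] == GREY:                 # back edge -> combinational loop
                loops.append(path + [net, src])
            elif color[src] == WHITE:
                dfs(src, path + [net])
        color[net] = BLACK

    for net in netlist:
        if color[net] == WHITE:
            dfs(net, [])
    return loops

def find_multiple_drivers(assignments):
    """Return nets assigned by more than one driver statement."""
    drivers = defaultdict(list)
    for driver, net in assignments:
        drivers[net].append(driver)
    return {net: d for net, d in drivers.items() if len(d) > 1}

# Toy example: 'b' depends on 'a' and 'a' depends on 'b' -> combinational loop.
netlist = {"a": ["b"], "b": ["a"], "c": ["a"]}
print(find_combinational_loops(netlist))
# Net 'x' is driven from two processes -> multiple-driver violation.
print(find_multiple_drivers([("proc1", "x"), ("proc2", "x"), ("proc3", "y")]))
```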
So in general, the core benefit of static verification has been to pinpoint design issues early in the design cycle, and that too at the lowest cost possible. What we have seen is that lint sign-off is part of the code check-in process by the designers, and this is the general norm at almost all semiconductor design companies. On top of that, given the criticality of CDC and RDC bugs, the impact of missing them is pretty high; it can kill the chip, the overall functionality of the final taped-out design. These bugs are typically not caught by any other tools. So that's the whole background on the static verification flow.
What we have seen is that in today's design development flow, the RTL designers must ensure, as part of their RTL sign-off checklist, that the design is lint, CDC, and RDC clean, and even glitch clean, which is the newer area where we have seen a lot of issues in the latest designs. These kinds of issues are part of the RTL checklist and need to be fixed during the static verification phase. And the main goal here is to fix them with the lowest cost, or minimum investment, early in the design cycle, while the impact of this minimum investment early in the design cycle is very, very high. So that's why it's a standard norm for every RTL designer to get these efforts done early in the design cycle.
And how are the latest technologies shaping static verification's impact on the development cycle today? And what new solutions are available that can enable more efficient RTL sign-off for designers?
Sure.
So, the static verification space has innovated a lot over the years, and the innovation has been along three major dimensions. To connect with the previous question, the designers aim to achieve shift left with the sole goal in mind of developing first-time-right silicon.
Of these three dimensions that I was referring to, the first one is more like a core foundational innovation in the static verification space at Synopsys, which has been to be able to scale to huge, billion-plus designs, because the designs are very, very large and complex. So we need to scale to these billion-class designs, and while we do so, we also need to ensure that the designer is able to analyze millions of violations in a realistic time frame. It should not be that the designers take months to go through the long list of violations, the millions of violations, that they end up having to analyze. So as part of this effort, in order to meet this ask or demand from the industry, VC SpyGlass, which is the Synopsys static verification platform, is now multi-core, multi-mode enabled, solely to address the scalability demand from the industry.
And to address the need for designers to be able to assess millions of violations, we also have a new technology, already adopted by various customers so far, which goes by the name of machine-learning-based root cause analysis. What it does is allow the designers to debug by cluster. Basically, it is cluster-based debug, where each cluster helps the user assess a group of violations, and the violations within a cluster have a common root cause. So for designers, when they fix one root cause for a cluster, or let's say a group of violations, they are able to fix multiple violations in one go. That's how it allows the designers to achieve 10x debug productivity, and sometimes even higher, especially in the early design development phase.
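To make the clustering idea concrete, here is a minimal sketch of grouping violations by a shared root-cause signature so that one fix clears a whole cluster. The violation fields are hypothetical, and the actual technology is machine-learning based rather than a simple key lookup.

```python
# Conceptual sketch of root-cause clustering: violations that share the same
# root-cause signature (here, the source file/line driving the offending
# signals) are grouped so that a single RTL fix resolves the whole cluster.
# The violation fields are hypothetical, not the tool's actual report format.

from collections import defaultdict

violations = [
    {"id": 1, "check": "W_UNSYNC_CROSSING", "signal": "fifo_full",  "root": "fifo_ctrl.v:120"},
    {"id": 2, "check": "W_UNSYNC_CROSSING", "signal": "fifo_empty", "root": "fifo_ctrl.v:120"},
    {"id": 3, "check": "W_UNSYNC_CROSSING", "signal": "wr_ptr",     "root": "fifo_ctrl.v:120"},
    {"id": 4, "check": "W_MULTI_DRIVER",    "signal": "irq",        "root": "irq_mux.v:45"},
]

clusters = defaultdict(list)
for v in violations:
    clusters[(v["check"], v["root"])].append(v["id"])

for (check, root), ids in clusters.items():
    print(f"{check}: fix root cause at {root} -> clears violations {ids}")
# One fix at fifo_ctrl.v:120 clears three violations in one go.
```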
Coming to the second dimension, moving beyond the core foundational innovation that I was referring to in the first dimension, Synopsys has focused on maximizing the scope of identifying more and more bugs, because that's the core bread and butter of the technology, where the aim is to identify all possible bugs early in the design development phase. So what we have done is introduce new applications like RDC and glitch, which I was referring to earlier, and which have gained more momentum in terms of usage by the designers. The designers are now able to actually maximize the state space they cover in bug finding.
Just to highlight the impact of glitch verification: a single glitch can again be deadly for chip functionality. There have been interesting conversations with designers in this space. When Synopsys explained the criticality, at times it was overlooked by a certain set of customers because of bandwidth issues or schedule pressures on the designer's end. But when they unfortunately faced some glitch-related silicon bugs, that is when the customer realized the importance of glitch verification, and they suddenly jumped to the idea and wanted to deploy it right away. So that's what we have seen in the second dimension of the innovation front.
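To make the danger concrete, here is a toy, discrete-time Python sketch of how unequal path delays on reconverging logic produce a momentary wrong value. It is purely illustrative of the hazard mechanism, not of how the tool models timing.

```python
# Toy illustration of a combinational glitch: y = a AND (NOT a) is logically
# always 0, but if the inverted path lags the direct path by one time step,
# y emits a one-step '1' pulse when 'a' rises. If such a pulse reaches a
# clock, an async reset, or an unsynchronized crossing, it can corrupt state.
# Purely conceptual; not how a static tool actually models timing.

def simulate_glitch(a_waveform, inverter_delay=1):
    y = []
    for t, a in enumerate(a_waveform):
        # The inverter output still reflects the old value of 'a'.
        delayed_a = a_waveform[max(0, t - inverter_delay)]
        y.append(a & (1 - delayed_a))
    return y

a = [0, 0, 0, 1, 1, 1, 1]          # 'a' rises at t = 3
print(simulate_glitch(a))          # -> [0, 0, 0, 1, 0, 0, 0]: transient pulse at t = 3
```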
Furthermore, moving to the third dimension of innovation, we have expanded the RTL sign-off effort to enable shift left beyond the traditional RTL sign-off checklist with additional analysis of new, unique issues, leveraging hyperconvergence between static and digital implementation technologies. In this case, the unique issues are the ones that may typically surface later in the design implementation phase or on the final netlist, but could be preemptively addressed, or checks and balances can be put in place explicitly, before the RTL handoff to the implementation phase. In this category of hyperconvergence technologies, we have pioneered two innovations, namely implementation design checks, or IDC, and CDC-aware synthesis, which help minimize the iterations between static sign-off and the final digital implementation. Lastly, a quick sneak peek at the latest innovation in the GenAI space: we have been working on agent-assisted lint sign-off as the next key disruptor in the innovation focus from our side at Synopsys.
Okay, and can you talk a little bit more about the need for IDCs and how it impacts the RTL sign-off checklist for designers?
Sure. So implementation design checks, or IDC, help the RTL designers ensure that there is no unintended block of registers that will be optimized away later during synthesis, and the designers get to fix such gaps in the RTL code earlier, in the RTL cycle itself. Historically, to address this gap, such gaps in the code were highlighted to the designers by implementation engineers later, where the implementation engineers have, let's say, done the manual and intensive debug of long lists of registers by eyeballing the log files of Fusion Compiler or any other implementation tool. And this historical effort by the implementation engineers is actually very error prone and difficult as well. What this technology, IDC, offers is the precise pinpointing of the root cause of register optimization, and that spans across several layers of hierarchy. So getting to what needs to be fixed in the RTL code becomes very intuitive for designers when using IDC, since it pinpoints the relevant details. It gives all this extra information to be able to debug faster, in terms of waveforms and schematics, which can span across multiple layers of hierarchy. This becomes very useful, especially in the complex scenarios where the user wants to get to the root cause of what portion of the code actually caused a block of registers to simply vanish during synthesis. So a user gets to do this analysis sooner. In summary, IDC actually helps shift left the detection of these issues and gets to the root cause analysis of these complex scenarios sooner in the design cycle.
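As a rough analogy for the check IDC automates, the hypothetical sketch below walks a toy fanout map and flags registers whose outputs never reach a design output, exactly the kind of block synthesis would silently optimize away. The data structures are illustrative, not the tool's.

```python
# Hypothetical sketch: flag registers whose outputs never reach a top-level
# output, i.e. candidates that synthesis would optimize away.
# 'fanin' maps each net to the nets that drive it; 'outputs' are top-level ports.

def reachable_from_outputs(fanin, outputs):
    """Collect every net that transitively feeds a top-level output."""
    seen, stack = set(), list(outputs)
    while stack:
        net = stack.pop()
        if net in seen:
            continue
        seen.add(net)
        stack.extend(fanin.get(net, []))
    return seen

# Toy design: dbg_cnt_reg feeds only dbg_bus, which never reaches an output,
# so the whole debug counter block would vanish during synthesis.
fanin = {
    "data_out": ["pipe_reg"],
    "pipe_reg": ["data_in"],
    "dbg_bus":  ["dbg_cnt_reg"],
}
registers = ["pipe_reg", "dbg_cnt_reg"]
live = reachable_from_outputs(fanin, outputs=["data_out"])
for reg in registers:
    if reg not in live:
        print(f"{reg}: not observable at any output, will be optimized away")
```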
Interesting.
So how does the new CDC-aware synthesis flow in VC SpyGlass improve the handling of clock domain crossings, and what benefits does it bring to the overall design cycle?
Sure.
So historically, what we have seen is that error-prone methods such as RTL pragmas or manual synthesis directives are being used in the digital implementation phase so as to protect different CDC paths from being modified in such a way that CDC issues crop up in certain parts of the design again. Such pragmas or manual synthesis directives, when used by the user, can lead to two different problems, which are largely over-constraining of the design or sometimes under-constraining of the design, in combination with constraints, of course. When the situation of over-constraining is experienced, what we have seen is that the PPA achieved for the design may be suboptimal. So essentially the user could have achieved better PPA, but due to over-constraining, the final PPA of the design was, let's say, suboptimal. Additionally, the impact of over-constraining is that the digital implementation tools end up trying harder to resolve these design constraints, and eventually they run longer than they would have if the design were properly constrained. The other side of this coin is that the design could be under-constrained. If the pragmas or manual synthesis directives provided by the user during the implementation phase are fewer than what the user should have been providing, it implies that some of the CDC design paths are not protected, and there is a possibility that glitches get introduced on such paths during synthesis transformations, essentially exposing your design to post-synthesis CDC bugs on the netlist. So that means a CDC issue was reintroduced during synthesis. How CDC-aware synthesis helps in this case is that it lets the user automatically generate synthesis directives during CDC verification using VC SpyGlass. And these comprehensive synthesis directives can later be consumed by Fusion Compiler directly, so that Fusion Compiler is fully aware of what kind of transformations are allowed on which CDC path. So overall, CDC-aware synthesis has two core benefits for the designers. One is to reduce the manual burden of defining such synthesis directives. And the other is to maximize the PPA of the design by ensuring that each and every CDC path is protected and is not impacted by the RTL-to-netlist transformation.
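The overall shape of that hand-off can be sketched as follows: for every verified crossing, emit a protection directive for its synchronizer cells so the implementation tool knows not to restructure them. The directive text in this sketch is illustrative only, not the actual format exchanged between VC SpyGlass and Fusion Compiler.

```python
# Conceptual sketch of CDC-aware synthesis: during CDC analysis, emit a
# directive for every synchronizer so the implementation tool knows which
# cells must not be restructured. The directive lines below are illustrative
# only; they are not the actual format exchanged between the tools.

crossings = [
    {"from_clk": "clk_a", "to_clk": "clk_b", "sync_cells": ["u_sync0/ff1", "u_sync0/ff2"]},
    {"from_clk": "clk_b", "to_clk": "clk_c", "sync_cells": ["u_sync1/ff1", "u_sync1/ff2"]},
]

def generate_directives(crossings):
    """Emit one protection directive per synchronizer cell on a verified crossing."""
    lines = []
    for c in crossings:
        for cell in c["sync_cells"]:
            lines.append(
                f"# protect {c['from_clk']} -> {c['to_clk']} crossing\n"
                f"set_dont_touch [get_cells {cell}]"
            )
    return "\n".join(lines)

print(generate_directives(crossings))
```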
And how do advancements in VC SpyGlass technologies align with the growing demand for faster
time to market and higher design quality in the semiconductor industry?
Sure. So this is in continuation of all the different innovations that we discussed in the previous questions. With the growing demand for faster time to market, the scope and importance of RTL sign-off methodology has grown multifold. Today, companies want to find every kind of bug that they can find before synthesis, say, any of the bad coding practices at RTL. Over time, let's say they realize that there are certain bad coding practices at RTL which caused issues when trying to meet power goals later. So at times people identify these kinds of bad coding practices and learn that, okay, it might make sense to have an additional lint check at RTL. For us, in the VC SpyGlass Lint space, that has transformed into, let's say, power-lint checks. In this case, the user has to ensure that they enable this power-lint methodology, and that will help enable them to fix these power-lint issues earlier, at RTL. Effort-wise it is very low, but the impact on the overall design cycle, in terms of meeting power goals or achieving a high-quality end product or final silicon, is pretty high. So that's one example of the key innovations that we have done in the past. Similarly, there have been groundbreaking innovations in terms of multi-core processing and multi-machine processing, and very recently we have introduced distributed CDC flows as well, which aim to deliver faster time to market so that the designers can do their CDC analysis faster.
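The general shape of such a distributed flow, sketched here with hypothetical block names and a placeholder per-block analysis (this is not the actual VC SpyGlass architecture), is to analyze design blocks in parallel worker processes and merge the reports.

```python
# Conceptual sketch of distributing static analysis: each design block is
# analyzed in its own worker process and the violation reports are merged.
# This is only the general shape of a multi-core/distributed flow, not the
# actual VC SpyGlass architecture.

from multiprocessing import Pool

def analyze_block(block_name):
    """Placeholder per-block analysis; a real flow would run CDC checks here."""
    # Pretend every block reports one finding, for illustration only.
    return [(block_name, "W_UNSYNC_CROSSING", f"{block_name}/ctrl_sync")]

def run_distributed(blocks, workers=4):
    with Pool(processes=workers) as pool:
        per_block_reports = pool.map(analyze_block, blocks)
    # Merge the per-block reports into a single, design-level view.
    return [violation for report in per_block_reports for violation in report]

if __name__ == "__main__":
    blocks = ["cpu_core", "ddr_ctrl", "pcie_phy", "noc_fabric"]
    for block, check, path in run_distributed(blocks):
        print(f"{block}: {check} at {path}")
```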
Great.
Last question, how do you see AI and machine learning shaping the future of static verification?
And what role will VC SpyGlass play in this evolution?
Sure, Daniel. So I briefly mentioned agent-assisted lint sign-off earlier; let's delve into the details of the same. Sometime last year we actually introduced Synopsys Lint Advisor, which is powered by GenAI. And leveraging Lint Advisor, we are working today with key partners to deliver automatic lint fixes. So for example, the tool can deliver automatic fixes of RTL for, let's say, thousands of violations in one go, which can be a very powerful application for designers. This essentially allows them to skip the part where they go and address each individual lint violation one by one. The power here is that they can fix thousands of violations in one go, and they just have to quickly review it later and get done with the lint sign-off.
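A minimal sketch of that batch-fix idea, with made-up rule IDs and purely textual fixes (the real Lint Advisor flow is GenAI-driven rather than a fixed rule table): apply one automatic fix per violation in a single pass and collect everything into a review report.

```python
# Hypothetical sketch of batch lint fixing: map each violation's rule ID to a
# textual RTL edit, apply all edits in one pass, and emit a review summary.
# The rule IDs and fixes are made up; the real Lint Advisor flow is GenAI-based.

FIXES = {
    "W_UNUSED_SIGNAL":   lambda line: "// removed unused declaration: " + line,
    "W_MISSING_DEFAULT": lambda line: line.rstrip() + "  // TODO: add default branch",
}

def apply_fixes(rtl_lines, violations):
    """Apply one automatic fix per violation, return patched RTL and a report."""
    patched, report = list(rtl_lines), []
    for v in violations:
        fix = FIXES.get(v["rule"])
        if fix is None:
            report.append((v["rule"], v["line"], "left for manual review"))
            continue
        patched[v["line"]] = fix(patched[v["line"]])
        report.append((v["rule"], v["line"], "auto-fixed"))
    return patched, report

rtl = ["wire spare_bit;", "case (sel)"]
violations = [{"rule": "W_UNUSED_SIGNAL", "line": 0},
              {"rule": "W_MISSING_DEFAULT", "line": 1}]
patched, report = apply_fixes(rtl, violations)
print("\n".join(patched))
print(report)
```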
The next phase of Lint Advisor is actually an agentic AI workflow for lint sign-off, and a proof of concept for this workflow, which illustrates Synopsys' vision for agent engineer technology, is a multi-agent flow from Synopsys combined with the Microsoft Discovery platform. This workflow was actually demonstrated recently at DAC 2025. One of the agents within this multi-agent flow corresponded to the design verification space; it was the lint agent, or lint agent engineer, in the grand scheme of the overall collaboration or workflow that we demonstrated at DAC. This space is actually evolving by the day, and there are interesting times ahead. It has the potential of being a big disruptor for RTL designers and even other user personas. So for our audience today, I would say stay tuned for more groundbreaking innovations in this space.
Great.
Thank you very much for your time today.
It's a pleasure to meet you.
Same here.
Thank you, Daniel.
That concludes our podcast.
Thank you all for listening and have a great day.
Thank you.