SemiWiki.com - Podcast EP299: The Current and Future Capabilities of Static Verification at Synopsys with Rimpy Chugh
Episode Date: July 25, 2025. Dan is joined by Rimpy Chugh, a Principal Product Manager at Synopsys with 14 years of varied experience in EDA and functional verification. Prior to joining Synopsys, Rimpy held field applications and verification engineering positions at Mentor Graphics, Cadence and HCL Technologies. Dan explores the expanding role of static…
Transcript
Hello, my name is Daniel Nenni, founder of SemiWiki, the open forum for semiconductor
professionals. Welcome to the Semiconductor Insiders podcast series.
My guest today is Rimpy Chugh, a principal product manager at Synopsys with 14 years
of varied experience in EDA and functional verification.
Prior to joining Synopsys, Rimpi held field applications and verification engineering positions
at Mentor Graphics, Cadence, and HCL Technologies. Welcome to the podcast, Rimpi.
Thank you, Daniel. Happy to be here.
So can you give us a quick update on the status of static verification in today's design flow?
Sure. So static verification as a domain, just for the knowledge of the audience, has gained immense importance in the designer world over time. What we have seen is that designers today want to identify and fix as many bugs as possible early at RTL, bugs like combinational loops and multiple-driver issues. They want to get to a synthesizable, CDC-clean or RDC-clean RTL faster, and expand that scope further. So in general, the core benefit of static verification has been to pinpoint design issues early in the design cycle, and at the lowest cost possible.
What we have seen is that lint sign-off is part of the code-checking process by the designers, and this is the general norm at almost all semiconductor design companies. On top of that, given the criticality of CDC and RDC bugs, the impact of missing them is very high, high enough to kill the overall functionality of the final taped-out chip. And these bugs are typically not caught by any other tool. So that's the background on the static verification flow.
What we have seen is that in today's design development flow, RTL designers must ensure, as part of their RTL sign-off checklist, that the design is CDC clean, RDC clean, and even glitch clean, which is the newer area where we have seen a lot of issues in the latest designs. These kinds of issues are part of the RTL checklist and need to be fixed during the static verification phase. The main goal is to fix them with minimum investment early in the design cycle, while the impact of that minimum investment is very, very high. That's why it's a standard norm for every RTL designer to get done with these efforts early in the design cycle.
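To make the bug classes mentioned above concrete, here is a small hypothetical Verilog fragment (not from the episode) showing the kinds of issues a lint or CDC tool would flag at RTL:

```verilog
// Hypothetical illustration of three RTL bug classes caught by static
// verification: multiple drivers, a combinational loop, and a raw CDC.
module lint_examples (
  input  wire clk_a, clk_b, en, d,
  output reg  q_bad, q_cdc
);
  // 1) Multiple drivers: two procedural blocks both drive q_bad.
  always @(posedge clk_a) q_bad <= d;
  always @(posedge clk_a) q_bad <= ~d;   // second driver -- lint violation

  // 2) Combinational loop: loop_sig appears in its own fan-in cone.
  wire loop_sig;
  assign loop_sig = en & loop_sig;

  // 3) CDC bug: a_reg is launched on clk_a but captured on clk_b with no
  //    synchronizer, so q_cdc can go metastable.
  reg a_reg;
  always @(posedge clk_a) a_reg <= d;
  always @(posedge clk_b) q_cdc <= a_reg;
endmodule
```

A simulator will happily run code like this; only static analysis reliably flags all three issues before synthesis.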
And how are the latest technologies shaping static verification's impact on the development cycle today? And what new solutions are available that can enable more efficient RTL sign-off for designers?
Sure. So the static verification space has innovated a lot over the years, and the innovation has been along three major dimensions. To connect with the previous question, the designers aim to achieve shift-left with the goal of developing first-time-right silicon in mind.
Of these three dimensions, the first is the core foundational innovation in the static verification space at Synopsys, which has been to scale to huge, billion-plus designs, because designs today are very large and complex. While we do so, we also need to ensure that the designer is able to analyze millions of violations in a realistic timeframe. It should not be that designers take months to go through the long list of millions of violations they have to analyze. So, to meet this demand from the industry, VC SpyGlass, which is the Synopsys static verification platform, is now multi-core and multi-machine enabled, specifically to address the scalability demand and the need for designers to be able to assess millions of violations.
We also have a new technology, already adopted by various customers, called machine learning-based root cause analysis. What it does is allow designers to debug in clusters. Basically, it's cluster-based debug, where each cluster helps the user assess a group of violations, and the violations within a cluster share a common root cause. So when designers fix the one root cause for a cluster, they are able to fix multiple violations in one go. That's how it allows designers to achieve 10x debug productivity, and sometimes even higher, especially in the early design development phase.
Coming to the second dimension, moving beyond the core foundational innovation of the first, Synopsys has focused on maximizing the scope of identifying more and more bugs, because that's the bread and butter of the technology: to identify all possible bugs early in the design development phase. So there are newer applications, like the RDC and glitch verification I was referring to earlier, which have gained momentum in terms of usage by designers. Designers are now able to maximize the state space they cover when finding bugs. And just to highlight the impact of glitch verification: a single glitch can be deadly for chip functionality.
There have been interesting conversations with designers in this space. When we at Synopsys would explain the criticality of glitch verification, at times it was overlooked by a certain set of customers because of bandwidth issues or schedule pressures on the designers' end. But when, unfortunately, they faced glitch-related silicon bugs, those customers realized the importance of glitch verification and suddenly wanted to deploy it right away. So that's what we have seen in the second dimension of innovation.
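As an illustration of why a glitch on a crossing can be deadly (a hypothetical sketch, not an example discussed in the episode), consider a combinational signal that crosses directly into another clock domain:

```verilog
// Glitch-prone CDC: a combinational mux output crosses clock domains
// directly. While a_sel or the data inputs switch (all in clk_a's
// domain), the mux cone can glitch, and clk_b may sample that transient.
module glitch_prone_cdc (
  input  wire clk_a, clk_b,
  input  wire a_sel, a_data0, a_data1,
  output reg  b_ff
);
  wire crossing = a_sel ? a_data1 : a_data0;  // combinational crossing

  always @(posedge clk_b)
    b_ff <= crossing;   // may capture a glitch -- wrong value in silicon
endmodule
```

The usual fix is to register `crossing` in the source domain first, so the crossing signal is glitch-free, and then pass it through a multi-flop synchronizer in the destination domain. Glitch verification flags structures like the one above statically.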
Furthermore, moving to the third dimension of innovation, we have expanded the RTL sign-off effort to enable shift-left beyond the traditional RTL sign-off checklist, with additional analysis of new, unique issues, leveraging hyperconvergence between static verification and the digital implementation technologies. In this case, the unique issues are ones that would typically surface later, in the design implementation phase or on the final netlist, but can be preemptively addressed, or have checks and balances put in place, before the RTL handoff to the implementation phase.
In this category of hyperconvergence technologies, we have pioneered two innovations, namely implementation design checks, or IDC, and CDC-aware synthesis, which help minimize the iterations between static sign-off and the final digital implementation. Lastly, a quick sneak peek at the latest innovation in the GenAI space: we have been working on agent-assisted lint sign-off as the next key disruptor in the innovation focus from our side at Synopsys.
Okay, and can you talk a little bit more about the need for IDC and how it impacts the RTL sign-off checklist for designers?
Sure. So implementation design checks, or IDC, help RTL designers ensure that there is no unintended block of registers that will be optimized away later during synthesis, and the designers get to fix such gaps in the RTL code earlier, in the RTL cycle itself. Historically, such gaps in the code were highlighted to designers by implementation engineers later, after the implementation engineers had done a manual, intensive debug of lists of millions of registers by eyeballing the log files from Fusion Compiler or any other implementation tool. That historical effort by the implementation engineers is very error-prone and difficult as well. What IDC offers is precise pinpointing of the root cause of register optimization.
And that spans several layers of hierarchy. So getting to what needs to be fixed in the RTL code becomes very intuitive for designers when using IDC, since it pinpoints the relevant details. It provides extra information to debug faster, in terms of waveforms and schematics, which can span multiple layers of hierarchy. This becomes very useful, especially in complex scenarios where the user wants to get to the root cause of what portion of the code actually caused a block of registers to simply vanish during synthesis, and the user gets to do this analysis sooner. So, in summary, IDC helps shift left the detection of these issues and gets to the root cause of these complex scenarios sooner in the design cycle.
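To picture the problem IDC targets, here is a hypothetical case (constructed for illustration) where a register bank that looks real at RTL vanishes during synthesis, with the root cause sitting a hierarchy level away:

```verilog
// stat_q looks like a live 32-bit register bank at RTL, but the enable
// is tied off one level up, so synthesis constant-propagates dbg_en and
// deletes all 32 flops from the netlist.
module stats (
  input  wire        clk, dbg_en,
  input  wire [31:0] bus,
  output reg  [31:0] stat_q
);
  always @(posedge clk)
    if (dbg_en) stat_q <= bus;   // never enabled when dbg_en is tied low
endmodule

module top (
  input  wire        clk,
  input  wire [31:0] bus,
  output wire [31:0] stat_q
);
  // Root cause: the constant tie-off here, not the stats module itself.
  stats u_stats (.clk(clk), .dbg_en(1'b0), .bus(bus), .stat_q(stat_q));
endmodule
```

Finding a tie-off like this by eyeballing synthesis logs across millions of registers is exactly the manual debug effort described above; a check that points from the removed flops back to the constant driver makes the fix obvious at RTL.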
Interesting. So how does the new CDC-aware synthesis flow in VC SpyGlass improve the handling of clock domain crossings, and what benefits does it bring to the overall design cycle?
Sure. So historically, what we have seen is that error-prone methods, such as RTL pragmas or manual synthesis directives, are used in the digital implementation phase to protect different CDC paths from being modified in such a way that CDC issues crop up again in certain parts of the design. Such pragmas or manual synthesis directives, when used by users, can lead to two different problems: over-constraining of the design, or sometimes under-constraining of the design, in combination with the constraints, of course. When over-constraining happens, the PPA achieved for the design may be suboptimal; the user could have achieved better PPA, but due to the over-constraining, the final PPA of the design falls short. Additionally, the digital implementation tools end up trying harder to resolve these design constraints and eventually run longer than they would have if the design were properly constrained.
The other side of this coin is that the design could be under-constrained. If the pragmas or manual synthesis directives provided by the user during the implementation phase are fewer than what should have been provided, it implies that some of the CDC paths are not protected, and there is a possibility that a glitch gets introduced on such paths during synthesis transformations. Essentially, that exposes your design to post-synthesis CDC bugs on the netlist, meaning a CDC issue was reintroduced during synthesis.
How CDC-aware synthesis helps in this case is that it automatically generates synthesis directives during CDC verification using VC SpyGlass. These comprehensive synthesis directives can later be consumed by Fusion Compiler directly, so that Fusion Compiler is fully aware of what kinds of transformations are allowed on which CDC path. Overall, CDC-aware synthesis has two core benefits for designers. One is to reduce the manual burden of defining such synthesis directives, and the other is to maximize the PPA of the design, by ensuring that each and every CDC path is protected and is not impacted by the RTL-to-netlist transformation.
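For context, the kind of manual protection being replaced looks roughly like this (a generic sketch; attribute spellings vary by tool, and this is not the directive format the VC SpyGlass flow actually emits, which the episode does not describe):

```verilog
// Manually protecting a two-flop synchronizer so synthesis does not
// merge, clone, or retime its stages. The dont_touch attribute is a
// common convention; real flows may use tool-specific pragmas or SDC.
module sync_2ff (
  input  wire clk_dst,
  input  wire d_async,
  output wire q
);
  (* dont_touch = "true" *) reg ff1, ff2;  // keep both stages intact
  always @(posedge clk_dst) begin
    ff1 <= d_async;   // first stage may go metastable
    ff2 <= ff1;       // second stage resolves it
  end
  assign q = ff2;
endmodule
```

Writing and maintaining such protections by hand, per synchronizer, is where the over- and under-constraining risks above come from; deriving them automatically from the CDC analysis removes that burden.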
And how do advancements in VC SpyGlass technologies align with the growing demand for faster time to market and higher design quality in the semiconductor industry?
Sure, so this is in continuation of all the different innovations we discussed in the previous questions. With the growing demand for faster time to market, the scope and importance of RTL sign-off methodology have grown multi-fold. Today, companies want to find every kind of bug they can before synthesis.
Take bad coding practices at RTL: over time, people realize that certain coding practices at RTL cause issues when trying to meet power goals later. They identify these kinds of bad coding practices and learn that it makes sense to have an additional lint check at RTL. For us, in the VC SpyGlass Lint space, that has transformed into, let's say, power lint checks. In this case, the user has to ensure that they enable this power lint methodology, and that will help them fix these power lint issues earlier, at RTL. Effort-wise, it is very low, but the impact on the overall design cycle, in terms of meeting power goals or achieving a high-quality end product or final silicon, is pretty high.
So that's one example of the key innovations we have done in the past. Similarly, there have been groundbreaking innovations in terms of multi-core processing and multi-machine processing, and very recently we have introduced distributed CDC flows as well, which aim to deliver faster time to market so that designers can do their CDC analysis faster.
Great. Last question: how do you see AI and machine learning shaping the future of static verification, and what role will VC SpyGlass play in this evolution?
Sure, Daniel. I briefly mentioned agent-assisted lint sign-off earlier; let's delve into the details.
Sometime last year, we introduced Synopsys Lint Advisor, which is powered by GenAI. Leveraging Lint Advisor, we are working today with key partners to deliver automatic lint fixes. For example, the tool can deliver automatic RTL fixes for, let's say, thousands of violations in one go, which can be a very powerful application for designers. This essentially allows them to skip the part where they address each individual lint violation one by one. The power here lies in the fact that they can fix thousands of violations in one go, then just quickly review the fixes and get done with lint sign-off.
The next phase of Lint Advisor is an agentic AI workflow for lint sign-off. A proof of concept for this workflow, which illustrates Synopsys' vision for agent engineer technology, is a multi-agent flow from Synopsys combined with the Microsoft Discovery platform. This workflow was demonstrated recently at DAC US 2025. One of the agents within this multi-agent flow, corresponding to the design verification phase, was the lint agent engineer, in the grand scheme of the overall collaboration of the workflow we demonstrated at DAC.
This space is evolving by the day, and there are interesting times ahead. It has the potential of being a big disruptor for RTL designers and even other user personas. So for our audience today, I would say: stay tuned for more groundbreaking innovations in this space.
Great.
Thank you very much for your time, Rimpy.
It's a pleasure to meet you.
Same here.
Thank you, Daniel.
That concludes our podcast.
Thank you all for listening and have a great day.