SemiWiki.com - Video EP9: How Cycuity Enables Comprehensive Security Coverage with John Elliott
Episode Date: August 15, 2025

In this episode of the Semiconductor Insiders video series, Dan is joined by John Elliott, security applications engineer from Cycuity. With 35 years of EDA experience, John's current focus is on security assurance of hardware designs. John explains the importance of security coverage in the new global marketplace.
Transcript
Hello, my name is Daniel Nenni, the founder of SemiWiki, the Open Forum for Semiconductor
Professionals. Welcome to the Semiconductor Insiders video series, where we take 10 minutes to discuss
leading-edge semiconductor design challenges with industry experts.
My guest today is John Elliott, security applications engineer from Cycuity. With 35 years
of EDA experience, John's current focus is on security assurance of hardware designs. Welcome
to Semiconductor Insiders, John.
Glad to be here.
First question, what is security coverage and why is it important?
So with the rise of hardware vulnerabilities, there's an increased need to ensure that
hardware designs are resilient against potential threats. And without a systematic approach,
you risk vulnerabilities going unaddressed. And that can lead to significant
financial, reputational, or even operational loss once a system or intellectual property
or an IC reaches the market. So it's critical to be able to demonstrate evidence of security
validation to your external stakeholders, and that includes not only your customers, but
regulators and auditors. And the evidence that you provide demonstrates compliance and
accountability and it serves as concrete evidence of your organization's commitment to strong
security practices. For example, adherence to industry standards such as ISO 21434 for automotive
or the NIST frameworks becomes more credible when you can support your claims with
detailed, actionable reports that align with the
security guidelines. And accomplishing this requires rigorously verifying and
measuring the effectiveness of security functionality and protections across
the entire pre-silicon development cycle.

I agree completely. So how do you verify hardware security features? I guess that's a big question.

It is. And the verification of hardware security features is a critical step
in the hardware verification process.
But it's important to recognize that there are really two aspects to this process.
There's the functional security verification, and then there's the security protection
verification.
So traditional verification methods can be applied to many of these activities, and this
could include formal verification, directed tests, and SystemVerilog assertions.
And a test for the simple example that's shown on the slide here can validate a requirement such as: when the key is requested by the AES block, the encryption block, the key value in the security fuse appears at the boundary of the AES block within some time period, like 100 clock cycles.
So that's, you know, a basic question of functionality is, does the key get to the right place when it's requested?
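The bounded-latency check John describes can be sketched in Python over a recorded signal trace. This is purely an editor's illustration of the idea, not Cycuity's flow or actual SystemVerilog; the signal names (`key_req`, `key_at_aes`) and the cycle bound are assumptions.

```python
# Illustrative sketch of a functional security check: after every key
# request, the key must appear at the AES boundary within `bound` cycles.
# Signal names and the bound are hypothetical.

def key_arrives_in_time(trace, bound=100):
    """trace: list of (key_req, key_at_aes) samples, one per clock cycle."""
    for i, (req, _) in enumerate(trace):
        if req:  # a key request was issued this cycle
            window = trace[i + 1 : i + 1 + bound]
            if not any(key_present for _, key_present in window):
                return False  # request never satisfied in time
    return True

# Passing trace: request at cycle 0, key appears at cycle 3.
trace_ok = [(True, False), (False, False), (False, False), (False, True)] + [(False, False)] * 10
print(key_arrives_in_time(trace_ok, bound=5))   # True

# Failing trace: request at cycle 0, key never appears.
trace_bad = [(True, False)] + [(False, False)] * 10
print(key_arrives_in_time(trace_bad, bound=5))  # False
```

Note how the check is bounded in both time (a fixed cycle window) and space (two named signals), which is the "limited scope" property contrasted with protection verification later in the conversation.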
But security protection verification, that's a different class of security challenges.
So this is validating the adherence to security requirements, and this is more focused on unexpected or unintended
behavior. And the process involves identifying the assets in the design, such as keys and the
like, and the security requirements for those assets. So let's consider again the example on the left,
but security protection verification may pose a slightly different question, such as can the
key ever leave the chip boundary? So note the distinction between these two different verification
methods: the difference involves scope. So in general, functional security verification is more
limited in time and space, but security protection verification is asking more open-ended
questions. And some of the requirements could be influenced by larger portions of the design,
and usually over longer simulation periods, or emulation, for example.
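The open-ended question "can the key ever leave the chip boundary?" is an information-flow question, and a toy version of it can be modeled as reachability in a dataflow graph. This is an assumed illustration of the concept, not Cycuity's implementation; the signal names are invented.

```python
# Toy information-flow check: propagate "taint" from the key register
# through a dataflow graph and see whether it can reach the chip boundary.
from collections import deque

def tainted_nodes(edges, source):
    """edges: dict mapping each signal to the signals it drives.
    Returns the set of signals reachable from `source`."""
    seen, queue = {source}, deque([source])
    while queue:
        node = queue.popleft()
        for nxt in edges.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

# Hypothetical designs: in `safe`, the key stays inside the AES datapath;
# in `leaky`, a debug tap carries key-derived data to the chip output.
safe  = {"key_reg": ["aes_datapath"], "aes_datapath": []}
leaky = {"key_reg": ["aes_datapath", "debug_bus"], "debug_bus": ["chip_out"]}

print("chip_out" in tainted_nodes(safe, "key_reg"))   # False
print("chip_out" in tainted_nodes(leaky, "key_reg"))  # True: potential leak
```

Unlike the functional check, nothing here is bounded to a fixed window or a single wire pair: the answer depends on every path the key could take through the design, which is why these questions span larger portions of the design and longer runs.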
Right. So how is security verification applied to the design lifecycle?
So really, hardware security verification spans the entire life cycle. So it begins with the identification of the
assets in the design, as I mentioned on the previous slide, and the requirements for those particular assets.
And both of these activities commence at the start of the design process.
So for individual blocks or smaller portions of the design,
you might develop SystemVerilog assertions or maybe utilize formal techniques,
but when you have more complex requirements or you're verifying in the context of a large system,
then you would generate Radix rules.
So Radix is the tool from Cycuity, and it helps answer
those more open-ended security protection verification questions.
And these Radix rules,
they're complementary to the other verification approaches.
But anything that's developed at the block level,
you can then move that up to the system level
as you move through the verification flow.
And this then brings us to hardware security coverage metrics.

All right. So what is a hardware security coverage metric, John?

So the metrics for security
coverage, these are analogous to functional coverage metrics, in that they do provide a measurement
of how well verification activities have exercised your particular verification goals. So security
coverage metrics focus on how well the security requirements associated with each asset have
been scrutinized. So, for example, let's consider a very general security requirement that says
a particular secure asset should never be exposed outside a specific block in the design.
And if you think about this in terms of RTL, you could state this as: the contents of some
register, let's call it key, should never reach the output
of the module AES.
And when taking that requirement, if you consider what's in the RTL, the RTL will include
some kind of protection mechanism or mitigation that ensures that particular security rule
is never violated.
So this protection mechanism, it defines a protection boundary beyond which that secure asset
shouldn't propagate.
So what this means is the information flow from the source, so basically the information from that key, the secure asset, can flow up to the protection boundary, and that's expected.
And if the information flow does not reach the destination, then the security rule has not been violated, and that's your goal.
But you want to measure how well you do that.
And it's validated by applying tests that influence the blocks and not only low-level tests, but up to system-level tests.
And the security coverage gives you insight into how well the protection mechanism has been exercised.
So the protection boundary, you could think of it as a wall and its purpose is to stop unintended flow of information.
But you're trying to answer the question, you know, how much of that wall has been checked for potential weaknesses.
So for the individual tests that are applied to the design, you know, the blue region is what could be tested.
What you've actually tested is in green.
And then the security coverage metric is the ratio of those.
So in other words, this measures how well the regions where that key or the information could flow have actually been tested.
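The ratio John describes can be written down directly: the "wall" is the set of boundary points the tainted information could reach (the blue region), and coverage is the fraction of that set the tests actually exercised (the green region). A minimal sketch, with made-up point names:

```python
# Security coverage as a ratio: exercised boundary points / reachable
# boundary points. Point names are illustrative.

def security_coverage(reachable, exercised):
    """Fraction of reachable boundary points that tests have exercised."""
    covered = reachable & exercised
    return len(covered) / len(reachable)

reachable = {"p0", "p1", "p2", "p3"}   # where the key *could* flow (blue)
exercised = {"p0", "p2", "p5"}         # points the tests actually hit (green)
print(security_coverage(reachable, exercised))  # 0.5
```

Note the intersection: a test that hits points outside the reachable region (like `p5` here) contributes nothing to the metric, since only the wall itself needs checking for weaknesses.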
But how do you measure security coverage?
So with the security coverage, it's measured by running multiple tests that exercise the security
feature that you're focused on, and then you aggregate the information about how well
that security feature was exercised.
So you're probably not covering all of this in one test.
It's going to be multiple tests.
You merge that information together, and that gives you
an overall security coverage metric.
That's done in the simulator.
Then, in Radix, we aggregate that database
to produce a security coverage database,
and then that security coverage result
can be viewed inside the Radix GUI.
And then based on the thresholds you set,
whatever your goals are,
you can very quickly hone in on any problematic instances
or signals, and these, of course, can be cross-probed into a schematic view or an RTL view to help
you debug what's going on. And also, you can generate reports. So if you think about
evidence and the like, having the reports provides additional evidence of security validation.
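The merge step John describes, with each test recording which boundary points it exercised, can be sketched as a set union followed by the same ratio as before. This is an assumed illustration, not Radix's actual database format:

```python
# Aggregating security coverage across multiple test runs: union the
# per-test hit sets, then compare against the reachable boundary.
# Names and data are illustrative.

def merge_coverage(reachable, per_test_hits):
    merged = set().union(*per_test_hits)      # aggregate across all tests
    return len(merged & reachable) / len(reachable)

reachable = {"p0", "p1", "p2", "p3"}
test_runs = [{"p0"}, {"p1", "p2"}, {"p2"}]    # three separate test runs
print(merge_coverage(reachable, test_runs))   # 0.75

# A report could then flag uncovered points for additional tests:
merged = set().union(*test_runs)
print(sorted(reachable - merged))             # ['p3']
```

The uncovered set at the end is the kind of gap a threshold-based report would surface, pointing you at where additional tests are needed.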
Right. Final question, John. How are these security results interpreted?
Like functional coverage, the analysis of security coverage
is an iterative process.
So if you look at your results and you have low coverage,
there are a few possible causes.
One may be that you actually didn't specify that protection boundary,
that wall, correctly, in which case you go back and iterate on that.
Of course, there could be an error in the RTL code.
But if it turns out you've misinterpreted that protection boundary or mitigation code, you can quickly refine that, reanalyze these results.
And note that the goal here is to ensure that your information is flowing up to, but not past, that boundary.
You want to make sure that you really tried to break the protection boundary
with your tests, so that it's ultimately secure. And then, you know, another cause is, of course, just
like functional coverage, it could be insufficient testing. So if you look at the uncovered logic in the
Radix GUI, that can point you to places where additional tests need to be added to increase your
coverage, your security coverage number. And then of course the goal is high security coverage.
And once that's achieved, then you can evaluate whether you've reached your sign-off criteria.
And if that's approved, then you can collect the evidence, you know, capture and collate that evidence and provide that as part of your design process.
So security coverage, it's a key component of a systematic framework
for comprehensive, traceable security verification. So it demonstrates how well the security rules
and mechanisms have been exercised, and this also drives informed security sign-off
by offering some visibility into the testing of the security features and any coverage gaps.
And the requirements and reports, they demonstrate evidence of security to the external stakeholders,
including customers, regulators, and auditors, and it demonstrates compliance and accountability.
So it provides concrete evidence of an organization's commitment to strong security practices.
And by quantifying the coverage of security components, teams can identify and understand any security
gaps they have early in the design phase, which enables them to implement mitigations efficiently
and cost-effectively. So this approach not only ensures compliance with industry standards,
but also boosts customer confidence in the final product.
Great conversation, John. Thank you for your time.

You're welcome, and I appreciate you having me.

That concludes our video. Thank you for watching and have a nice
day.