SemiWiki.com - Video EP12: How Mach42 is Changing Analog Verification with Antun Domic
Episode Date: November 21, 2025

In this episode of the Semiconductor Insiders video series, Dan is joined by Antun Domic, who discusses Mach42's work on AI and analog verification. Antun covers many aspects of analog/AMS verification and how Mach42's unique AI-fueled approach provides significant benefits. He explains the balance of speed and accuracy.
Transcript
Hello, my name is Daniel Nenni, the founder of SemiWiki, the Open Forum for Semiconductor
Professionals. Welcome to the Semiconductor Insiders video series, where we take 10 minutes to discuss
leading-edge semiconductor design challenges with industry experts.
My guest today is Antun Domic, and we're talking about AI and analog verification.
Antun, what do you see as the biggest unresolved challenges in analog verification today?
Okay, well, thank you very much, Daniel, a pleasure to be here.
First, if you examine the numbers, you can see that we have a serious issue with analog mixed-signal verification becoming a bottleneck in the design process. Chips are failing because the analog/mixed-signal behavior cannot be properly verified. You have to rely extremely heavily on circuit simulation, which has made enormous progress, but everybody knows it is not very fast for large circuits. And we're seeing the issue of re-spins.
So you are looking at two problems: number one, the verification of the analog circuit itself, and second, verification in the context of the larger chip, meaning what we call mixed-signal verification. Run times, accuracy, and the difficulty of creating models are the big issues here.
Where do you think AI can meaningfully impact
analog verification?
There are a number of areas here.
I'll talk more about what we're trying to do at Mach42. First, if you look at circuit simulation, it's obviously one of the oldest areas in EDA. Many of you may remember the Berkeley SPICE distribution from some time ago. The issue of accelerating circuit simulation is something that is constantly being addressed: every time you look at the tools, you see better numerical methods, parallelization, and so on. But at the end of the day, nobody has done something dramatically different.
So what you can do with AI is not replace the numerical methods and acceleration techniques, but look at a different approach. AI is very good at modeling certain phenomena, and that's what we're trying to do. That's the core of the issue: instead of going the more traditional route of accelerating the circuit simulation and simplifying the generation of models, let's see whether we can create a different kind of model that complements circuit simulation but allows significant time savings.
And as you mentioned, you're an advisor on Mach42's Semiconductor Advisory Board. In your opinion, how does Mach42 tackle these challenges differently from traditional EDA methods?
Yes, very good question. As I said, I'll elaborate.
If you apply these new techniques, for example what we're doing with neural networks, you can create a neural network whose behavior approximates a different system very well. You take a circuit, you have the transistor-level netlist, and you run it in a circuit simulator. Using some of that data, we're able to automatically create a totally different model, a neural network with a number of layers with specific intelligence. Obviously, in circuit simulation you're not going to be able to avoid differential equations, so we have a layer that addresses solving differential equations. You have another layer that models specific components, a resistor, a capacitor, and so on. And you have different resolution layers as you go through. This executes on GPUs, actually, and from that perspective it is different. And as I say, we create a model that approximates your circuit, what is called a surrogate model, in an extremely accurate fashion.
So once you have that, you can do extra analysis of your circuit.
For example, you can vary some parameters and the results appear practically instantaneously.
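As a rough illustration of the surrogate idea (a sketch only; the toy "circuit," network size, and training setup here are my own hypothetical choices, not Mach42's actual architecture or layer structure), a small neural network can be fit to simulator input/output samples and then queried almost instantly for parameter sweeps:

```python
import numpy as np

# Toy stand-in for an expensive SPICE sweep: a smooth response in two
# circuit parameters. Everything here is a hypothetical illustration.
def spice_like_sim(p1, p2):
    return np.tanh(p1) * np.exp(-p2) + 0.1 * p1 * p2

rng = np.random.default_rng(0)
X = rng.uniform(0.0, 2.0, size=(400, 2))        # sampled parameter points
y = spice_like_sim(X[:, 0], X[:, 1]).reshape(-1, 1)

# One-hidden-layer MLP trained by plain full-batch gradient descent.
H = 32
W1 = rng.normal(0, 0.5, (2, H)); b1 = np.zeros(H)
W2 = rng.normal(0, 0.5, (H, 1)); b2 = np.zeros(1)

def forward(Xb):
    h = np.tanh(Xb @ W1 + b1)
    return h, h @ W2 + b2

losses, lr = [], 0.05
for _ in range(2000):
    h, pred = forward(X)
    err = pred - y
    losses.append(float(np.mean(err ** 2)))
    # backpropagation through the two layers
    gW2 = h.T @ err / len(X); gb2 = err.mean(0)
    dh = (err @ W2.T) * (1 - h ** 2)
    gW1 = X.T @ dh / len(X); gb1 = dh.mean(0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

# Once trained, a "what if" sweep costs a matrix multiply, not SPICE runs.
_, sweep = forward(np.array([[1.0, 0.5], [1.2, 0.5], [1.4, 0.5]]))
```

The point of the sketch is the division of labor: the simulator is paid for once to generate training data, and every subsequent parameter variation is answered by the cheap surrogate.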
So that's the part we say is very different: in some sense, a completely different approach. And as I said, we're not eliminating simulation; we are complementing it in a way that enables much more exhaustive analysis of your circuit than if all you had at your disposal was accelerating the simulation, using more computers, and so on.
Right. You know, I saw the software at the Design Automation Conference earlier this year.
You know, Mach42's Discovery platform claims minutes to explore broad design spaces and is now integrated with industry-standard tools like Cadence Spectre. In practice, how do you balance speed-ups without compromising accuracy?
You know, that's always the challenge with simulation, right?
That's a very good question.
We address it in several ways.
Number one, when we sample the data we need, deciding what we need to simulate, we do very smart sampling. We're not just rushing to run a large number of simulations; we can do very clever analysis and that way minimize the amount of circuit simulation needed to achieve a certain level of accuracy. Of course, if the accuracy you are requesting is extremely high, 99-point-something percent, we're going to have to run a good number of simulations to train the system. If the accuracy request is still high but more reasonable, meaning high 90s, we can do it in an intelligent way, creating models of that accuracy with much less simulation. That is the point.
And with that amount of training, we're able to follow your instructions: for example, vary this parameter, produce this output. That can be done practically, as I say, in no time, and the waveforms can be displayed for you.
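A toy sketch of the sampling idea, under the assumption (mine, not Mach42's disclosed method) that refinement is driven by where the response changes fastest, so the simulation budget is spent only where it buys accuracy:

```python
import numpy as np

# Hypothetical stand-in for one expensive SPICE run at a parameter point.
def simulate(x):
    return np.sin(3 * x) + 0.3 * x  # smooth analog response curve

# Adaptive sampling: start from the endpoints and repeatedly bisect the
# interval whose endpoint values differ the most. The neighbor-to-neighbor
# change |dy| is a crude error proxy (real tools use far smarter
# estimates), but it shows how simulations concentrate in fast-changing
# regions instead of being spread uniformly.
def smart_sample(lo, hi, tol=0.2, max_sims=100):
    xs = [lo, hi]
    ys = [simulate(lo), simulate(hi)]
    while len(xs) < max_sims:
        gaps = [abs(y1 - y0) for y0, y1 in zip(ys, ys[1:])]
        i = int(np.argmax(gaps))
        if gaps[i] < tol:          # every neighboring pair is close enough
            break
        mid = 0.5 * (xs[i] + xs[i + 1])
        xs.insert(i + 1, mid)      # insertion keeps xs sorted
        ys.insert(i + 1, simulate(mid))
    return np.array(xs), np.array(ys)

xs, ys = smart_sample(0.0, 2.0)
```

A uniform sweep at the same resolution would spend the same cost everywhere; the adaptive loop stops early in flat regions, which is the "much less simulation for the same accuracy" trade-off described above.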
And on integration: as you can tell from what we've said, even though we haven't been the most precise, we look at the inputs and outputs of the simulator; we don't get into the insides of the circuit simulator, so we are easy to integrate into current flows. From that perspective, we can work with literally any simulator in the industry. We have concentrated on Cadence, and they have been very accommodating in enabling this integration; we are part of their Connections program. They can see how the complementary nature of the Discovery platform and Spectre can bring a big benefit to their customers.
We can also take our model, which is now something very different, a neural network rather than a transistor-level netlist, and generate implementations of that model in different formats, Verilog-A being the most popular in these cases, though we can do others. That allows you to integrate this analog piece of your circuit into the larger context for verification of a mixed-signal chip design.
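As an illustration of that export step (the polynomial model, module name, and port names are hypothetical; a real surrogate would be far richer than a cubic fit), one can fit a behavioral I-V model to sampled data and emit it as Verilog-A text:

```python
import numpy as np

# Hypothetical export step: fit a cubic I(V) behavioral model to sampled
# data (standing in for what a trained surrogate learned) and emit it as
# a Verilog-A module for use in a mixed-signal simulation.
v = np.linspace(-1.0, 1.0, 21)
i = 1e-3 * v + 2e-4 * v ** 3                 # sampled "circuit" I-V data
coeffs = np.polyfit(v, i, 3)                 # highest power first

def to_verilog_a(coeffs, name="surrogate_iv"):
    # Build the polynomial in Horner form: ((c3*V + c2)*V + c1)*V + c0.
    expr = f"{coeffs[0]:.6e}"
    for c in coeffs[1:]:
        expr = f"({expr})*V(p,n) + {c:.6e}"
    return "\n".join([
        '`include "disciplines.vams"',
        f"module {name}(p, n);",
        "  inout p, n;",
        "  electrical p, n;",
        f"  analog I(p,n) <+ {expr};",
        "endmodule",
    ])

va_src = to_verilog_a(coeffs)
```

The emitted module can then be instantiated alongside digital blocks in a mixed-signal testbench, which is the integration path described above.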
Oh, that's interesting.
So how does Mach42's AI-driven approach blur the line between analog design and verification?
You know, it's always been a challenge.
So what makes that possible now?
I think it's a different technique. Instead of, and my apologies if I've been a bit repetitive, instead of doing a fairly large number of simulations, with a much smaller number we can emulate a large number of parameter variations and so on. What makes it different is that we're not solving the differential equations of every transistor in your circuit; we're looking at what we know of the circuit's behavior and creating a neural network that emulates that behavior.
So looking ahead, how is AI-driven verification validated, and how has Mach42 demonstrated real traction in the industry so far?
Well, we are in several evaluations with large semiconductor companies. Number one, everybody you talk to who does analog design and has to integrate analog pieces into a larger context, let's say a big digital chip, recognizes very clearly the difficulty of this: not just the run times; generating behavioral models that are efficient in simulation is not easy at all.
You need an expert who understands the circuit and can see the delicacy of some operations. So we have had no problem getting agreement when we explain the problem and the situation to customers; there is huge interest from that perspective. From there, you have to work with people on their specific circuits and so on, and that's the phase we are in at this moment.
As I said, the automation of analog design, as you're very aware, Daniel, lags significantly behind automation in the digital domain. If you look at static timing analysis, for example, it replaced a lot of simulation because it could search for critical paths in an infinitely more exhaustive way. In the analog case, coverage has relied on the cleverness of the analog designers, which is very difficult, and as circuits get more complicated, it gets even harder. From that perspective, we think different approaches can bring real benefits in such a difficult area.
So, Daniel, you were asking about our traction and our approach to validation. Number one, as I said, the industry feedback, the reception our approach to the problem has received, has been excellent, and integration with popular SPICE simulators has significantly reduced concerns regarding design flows and so on.
And there are a few numbers to look at. For example, everybody knows that automotive chips require a lot of validation across multiple corners. With training times of less than a day, we are able to create models with 90-plus percent accuracy, which in those cases is perfectly okay. In other areas we need more runs, but they can be smaller circuits; we are able to achieve 99.9 percent accuracy on some LDOs. Loads, for example: people would like to see behavior under different loads, meaning RLC loads on a board, and there we do very well; in a few hours we can create the models.
So I hope this gives you some view. The training of models is done on a few GPUs; we don't need a full server farm or a power plant to run the training. The requirements are very reasonable, and we have been careful to ensure that the amount of compute resources consumed is reasonable.
Thank you very much, Antun. It's great speaking with you again and I appreciate your time.
Thank you very much, Daniel. A pleasure to talk to you.
That concludes our video. Thank you for watching and have a nice day.
