SemiWiki.com - Video EP10: An Overview of Mach42’s AI Platform with Brett Larder

Episode Date: September 19, 2025

In this episode of the Semiconductor Insiders video series, Dan is joined by Brett Larder, co-founder and CTO at Mach42. Brett explains what Mach42’s AI technology can do and the benefits of using the platform to quickly analyze designs to find areas that may be out of spec and require more work. He describes the way Mach42…

Transcript
Starting point is 00:00:00 My guest today is Brett Larder, co-founder and CTO at Mach42. Brett, over the last four years, Mach42 has been engaged with many of the big players in analog design. What are the big challenges you've seen designers facing across the industry? So the short answer is simulation time, which is only increasing with the growing complexity of modern designs. A single simulation can potentially take hours to run. So you're facing either running a small number of simulations, or waiting a very long time and hoping that the results are what you expected. The problem might be that each individual simulation takes a long time to run, if it's a complicated design,
Starting point is 00:00:37 or it might just be that the size of the parameter space you need to explore is very large. You don't need very many PVT corners, design parameters or mode selectors before you end up needing thousands or hundreds of thousands of simulations to explore them all, which generally isn't feasible, at least not in a reasonable timeframe. We see this at all stages of the design process, whether it's during the initial design phase, during sign-off and system verification, or when evaluating existing IP for reuse. And this means that designers
Starting point is 00:01:10 are constantly having to make compromises, whether it's using human judgment to reduce the corners and parameters they look at, or simulating with simplified models. But each of these compromises comes with the risk of missing critical failure modes. Right. Yeah, and as designs get more complex,
Starting point is 00:01:28 simulations become more difficult. So how does Mach42 speed things up? So the approach we take is to train a machine learning model on a relatively low number of simulations across your parameter space, generally from a few hundred to a few thousand. We can then use this model to instantly predict the simulation results
Starting point is 00:01:45 for any combination of parameters. So it's trivial to sample hundreds of thousands of parameter combinations and see how the design behaves across the entire parameter space. And as a designer, you can get instant feedback on how varying different parameters will affect your design. Right.
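The surrogate-modeling idea Brett describes can be sketched in a few lines of Python. Everything below is an invented illustration: the toy `slow_simulation` function stands in for a SPICE run, and a simple polynomial fit stands in for Mach42's models, which are not public.

```python
import numpy as np

# Toy stand-in for an expensive SPICE run: one scalar metric (say, gain)
# as a function of one design parameter. In reality the simulator is a
# black box that may take minutes or hours per point.
def slow_simulation(w):
    return 20.0 * np.tanh(w) - 0.5 * w

rng = np.random.default_rng(0)

# Step 1: a few hundred training simulations spread over the range.
train_x = rng.uniform(0.1, 5.0, size=300)
train_y = slow_simulation(train_x)

# Step 2: fit a cheap surrogate (a polynomial here, purely for
# illustration; any fast regression model plays the same role).
surrogate = np.poly1d(np.polyfit(train_x, train_y, deg=9))

# Step 3: the surrogate answers 100,000 "what if" queries instantly.
query_x = np.linspace(0.1, 5.0, 100_000)
pred_y = surrogate(query_x)

max_err = np.max(np.abs(pred_y - slow_simulation(query_x)))
print(f"max surrogate error across the sweep: {max_err:.3f}")
```

The point of the pattern is the cost asymmetry: 300 expensive simulations buy a model that then evaluates 100,000 parameter combinations in milliseconds.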
Starting point is 00:02:03 And how can I trust the results of a machine learning model, especially for critical cases like verification? Right, so I think you're right to be skeptical. While we tend to see very high accuracy in our models, the machine learning model is ultimately still just an approximation to the true simulation result. And one way we approach this question is by having our models return an uncertainty in their prediction.
Starting point is 00:02:27 So you can see where the models are confident in their predictions, and where there's more uncertainty. And the uncertainty might be higher in some areas of parameter space, or in some regions of the waveform being modeled. And if we see uncertainty in certain areas of parameter space, that lets us intelligently gather more simulation data there to improve the model's confidence.
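One simple way to get the per-prediction uncertainty Brett mentions (Mach42's actual method is not public) is an ensemble of surrogates trained on bootstrap resamples: where the members disagree, the model is unsure, and that is where the next real simulations should go. A toy numpy sketch:

```python
import numpy as np

rng = np.random.default_rng(1)

def slow_simulation(w):  # toy stand-in for a SPICE run
    return np.sin(3.0 * w) + 0.3 * w

# Sparse training data that deliberately leaves a gap over [3, 4].
train_x = np.concatenate([rng.uniform(0.0, 3.0, 40),
                          rng.uniform(4.0, 5.0, 10)])
train_y = slow_simulation(train_x)

# Bootstrap ensemble: each member is fit on a resampled training set,
# so member disagreement approximates the model's uncertainty.
def fit_ensemble(x, y, n_models=20, deg=9):
    models = []
    for _ in range(n_models):
        idx = rng.integers(0, len(x), size=len(x))
        models.append(np.poly1d(np.polyfit(x[idx], y[idx], deg)))
    return models

models = fit_ensemble(train_x, train_y)
query = np.linspace(0.0, 5.0, 501)
preds = np.stack([m(query) for m in models])
mean, uncertainty = preds.mean(axis=0), preds.std(axis=0)

# Active-learning step: run the next real simulation where the
# ensemble disagrees most (typically inside the under-sampled gap).
next_i = np.argmax(uncertainty)
print(f"next simulation suggested at w = {query[next_i]:.2f} "
      f"(prediction {mean[next_i]:.2f} +/- {uncertainty[next_i]:.2f})")
```

The ensemble's spread is low where training simulations are dense and high in the gap, which is exactly the signal needed to decide where the next batch of expensive simulations is worth running.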
Starting point is 00:02:46 And if we're looking for particular behavior, for example, parameters where the design is out of spec, then we can run simulations to verify what the emulator is predicting. So we get all the benefits of exact simulations, but with the emulator allowing very rapid exploration of the entire space to guide the efficient use of a much smaller number of simulations. So how can a designer actually use these AI models in the platform?
Starting point is 00:03:12 So our discovery platform orchestrates the entire flow for you. You can upload your test benches and designs, and we'll extract the parameters that you can vary. You can then pick the parameter ranges, values or corners you want to explore, and then we'll start running simulations to gather suitable training data. You can connect our worker clients to your own infrastructure, so that your SPICE simulators can be run with access to all your libraries, and without having to share your IP with us. The platform then uses a small number of simulations to build a machine learning model of the entire parameter space. You can define some behavior to search
Starting point is 00:03:47 the parameter space for, for example, conditions where your device goes out of spec. And the platform can automatically search the parameter space in detail, using the machine learning model to find areas where the design looks like it's likely to violate the spec. We can then confirm this by automatically running some additional simulations in these areas to verify which parameters cause problems. So the discovery platform automatically uses our machine learning models to rapidly search a parameter space, while efficiently using a select number of simulations to verify results and gather more targeted training data. Altogether, this lets you examine a larger parameter space more comprehensively, but still with the confidence that an accurate simulation
Starting point is 00:04:26 provides. What are the use cases you've seen for this product thus far? So we've seen two concrete examples where the discovery platform can really help speed things up. The first is in IP reuse, where designers would like to use an existing piece of IP for a new use case, possibly with different requirements that have never been tested before. So by training a library of models for your existing IP, it becomes very quick to evaluate all your existing designs against your new requirements, and you can use the emulator to quickly evaluate suitability and then automatically run simulations to confirm the designs' behavior. The other use case is during the design process itself, where a designer will have a suite of tests and corners that they need to run each test against.
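This screen-then-confirm pattern — sweep the emulator across the whole space, flag suspect points, then spend real simulator time only there — can be sketched as follows (again with invented toy functions and an invented spec threshold, not Mach42's actual flow):

```python
import numpy as np

SPEC_MIN = 1.0  # hypothetical spec: the metric must stay >= 1.0

# Toy stand-in for the real simulator.
def slow_simulation(w):
    return np.cos(2.0 * w) + 1.2

# Train a cheap emulator on 200 "simulations".
rng = np.random.default_rng(2)
train_x = rng.uniform(0.0, 5.0, 200)
emulator = np.poly1d(np.polyfit(train_x, slow_simulation(train_x), 11))

# Emulator sweep: screen 100,000 parameter values instantly and flag
# every candidate that looks out of spec.
query = np.linspace(0.0, 5.0, 100_000)
candidates = query[emulator(query) < SPEC_MIN]

# Verification: run the expensive simulator only on the flagged points.
confirmed = candidates[slow_simulation(candidates) < SPEC_MIN]
print(f"{candidates.size} flagged by the emulator, "
      f"{confirmed.size} confirmed out of spec by real simulation")
```

Because the toy emulator is accurate, most flagged points are confirmed; the payoff is that the 100,000-point screen costs essentially nothing, while only the suspect fraction is re-checked with the expensive simulator.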
Starting point is 00:05:14 As a design evolves, these test suites are constantly run to check for changes of behavior and regressions. And these tests are time-consuming to run, even with the parameter space already cut down to a minimal set using human judgment. But by using the discovery platform's emulators, designers can quickly evaluate the design across the entire parameter space and check hundreds of thousands of values with only a few hundred or possibly a few thousand simulations. Great. So final question, Brett.
Starting point is 00:05:44 How do customers normally engage with you? I mean, your website is Mach42.ai. What does the customer engagement look like? So yeah, the process is essentially to get in touch with us via the channels on the website, and we can set up an evaluation. We'll talk to you and your designers and delve into some of the issues you're facing when it comes to verification time and simulation time, and we can find a good example case where we can run you through our product and see what the savings look like in terms of simulation time, and how accurate we can get an emulator for you.
Starting point is 00:06:20 Great. Excellent conversation, Brett. Thank you for your time. Thank you.
