SemiWiki.com - Podcast EP312: Approaches to Advance the Use of Non-Volatile Embedded Memory with Dave Eggleston

Episode Date: October 22, 2025

Daniel is joined by Dave Eggleston, senior business development manager at Microchip with a focus on licensing SST SuperFlash technology. Dave's extensive background in Flash, MRAM, RRAM, and storage is built on 30+ years of industry experience. This includes serving as VP of Embedded Memory at GLOBALFOUNDRIES, CEO…

Transcript
Starting point is 00:00:00 Hello, my name is Daniel Nenni, founder of SemiWiki, the open forum for semiconductor professionals. Welcome to the Semiconductor Insiders podcast series. My guest today is Dave Eggleston, senior business development manager at Microchip, with a focus on licensing SST SuperFlash technology. Dave's extensive background in Flash, MRAM, RRAM, and storage is built on 30-plus years of industry experience and more than 25 NVM-related patents. Welcome to the podcast, Dave.
Starting point is 00:00:38 Thanks, Daniel. Pleased to be here. So, Dave, how did you get started in non-volatile memory? Boy, that takes me back quite a while. Coming out of school, I interviewed with AMD and accepted a job in their product engineering team, which was working on EPROMs at the time. And that was back in the good old days when things were NMOS and we were just starting to experiment with CMOS, and densities for standalone chips, not even flash yet, I'm sorry, EPROMs, standalone chips were like 512 kilobits and one megabit.
Starting point is 00:01:14 So we've come a very long way, where now we talk about gigabits very easily, and much higher, in terms of non-volatile memory. So a long history in the non-volatile memory business, and I've really seen it grow from something which was just used for BIOS for PCs to now a very key component across the industry for everything from SSDs to embedded memory in things like microcontrollers and SoCs. So a long experience there with a number of different technologies, but good old floating-gate technology is solid, robust, and still around now in the type of flash that we use at SST. Right, right. I spent part of my career in embedded memories as well. Density certainly is a good measure. But what are the...
Starting point is 00:02:08 Big change. Yeah. What are the scaling path challenges for embedded non-volatile memory technologies in general? You know, and SST Superflash, your product in particular. Sure. So I think for embedded memory, we see embedded non-volta memory. One of the things we see is where is it used, how is it used, and then how do we keep up on the very fast path that logic has gone through for scaling?
Starting point is 00:02:36 You know, we think about logic and people are talking about taping out at five nanometers or developing technology at two nanometers. and we definitely do not see embedded nonvolta memory at those very advanced nodes. But we do see it back in the 28 nanometer or 22 nanometer are places where we have a lot of tapeouts for products. Now those products tend to be leading edge microcontrollers or other types of SOCs with advanced capabilities. So another area which has consumed a lot of embedded non-volta memory are things like smart cards, and those will tend to also be in the 2x nanometer at the most advanced node. When we look at how long it takes, I do have experience from working at Global Foundries for a number of years,
Starting point is 00:03:30 and it takes quite a few years for non-volatile memory to get qualified and production up and running at an advanced node. And that can be anywhere from, you know, three to four years to get something fully characterized qualified up and running. And then at best, we do see that nonvolta memory tends to lag digital logic, anywhere from five to six years in terms of introduction, maybe even longer. So I think what I'm saying is there's two different things. There's both technical challenges in getting embedded NVM in place. And there's a also what's the market? What's the application? And then what's going to drive that into more advanced nodes? So today, you know, the most advanced 32-bit microcontrollers are migrating
Starting point is 00:04:23 from 40 nanometer to 28 nanometer. And that's going to take quite a while before they make that transition. That's a driving application. And then as mentioned, just even the technology development takes considerable time. Now, there's two different types or two different approaches. The traditional approach, which SST follows, with our super flash technology, is a front end of line memory. So you're actually implementing a floating gate flash device down into the transistors, into the silicon itself of the wafer. There's another type, which I've also worked on, both R-R-R-R-R-RM and those are what are considered back-end-of-line embedded flash memories. A little bit different because you're putting the switching material up in the metal layers
Starting point is 00:05:14 back-end of line, but then you use a front-end-of-line standard front-end-of-line transistor to drive that switching material. So again, two different paths. So those are some of the challenges in what is the driving application, how long it takes to implement the technology, and then whether you're implementing front-end of line or back-end of line. Right, yeah. So what about chiplets?
Starting point is 00:05:40 How do chiplets enable an alternate scaling path for SOC designers? Yeah, and that's where we had the recent announcement, which we'll get into, of SST, partnering with a company called DECA technologies to help enable NVM chiplets. And because of some of the issues I described in both whether you're doing a front end of line like Super Flash or back end of line like MRAM or RRM, let's say today, and I have a customer like this, that has a product at 40 or 55 nanometer with embedded NVM, embedded Flash in particular, but their next product they want to do is at 5 nanometer digital logic. Well, that presents a problem for them because they could wait, wind up waiting five, six, ten years for embedded NVM to be available for them. Or they could go with a chiplet solution and basically split the problem and take the things which don't scale very well and do those in an older process node, like a 28 or 40 nanometer process node where those technologies are available. and then pair it as a chiplet with a five nanometer where their digital logic is in a chiplet. And that gives them much faster time to market than they would otherwise get,
Starting point is 00:07:06 waiting around for a technology that is going to take a long time before they'll have that in an advanced digital logic node. Okay. So just net it out, Dave. What are the key benefits of chiplets for SOCs and MCUs? requiring embedded non-volvement memory. Yeah, sure. So if you have that need, and a lot of systems do have that need for code storage or data storage
Starting point is 00:07:33 in embedded NBN, then how do you get that time to market? Chiplets, as I mentioned a minute ago, definitely gives that to you, faster time to market, maybe years sooner in time to market. Plus, it also gives you some other advantages. is one would be the ability to mix and match different process nodes. And I gave you that example of somebody doing their digital logic at 5 nanometer and then maybe doing their embedded NBM at 40 nanometer and bringing those together through chiplets.
Starting point is 00:08:04 Another advantage, though, is you can mix and match foundries if you're using a certain type of advanced assembly technique like we're talking about with DECA technologies and using there. In some cases, you're bound to your foundry on a chipplet solution. The biggest foundry, TSM, likes their customers to do everything through TSMC. But what if you don't want to do that? What if you want to mix and match from Foundry A and Foundry B and take that into an advanced assembly solution that gives you that ability to mix and match? So I have another customer to highlight, which they are in BCD technologies.
Starting point is 00:08:48 And in BCD technologies, you may have very specific needs of a foundry process that, again, might not have embedded NBM technology. So I can do my BCD chiplet at the Foundry of Choice and then do my NBM chiplet somewhere else and bring those together. So in addition to faster time to market, you also get this ability to mix and match foundries and for things like battery management, which turns out to be a very strong need for the BCD technologies, you can mix and match between different boundaries and different nodes. And then finally, you probably also can get, if you're careful about it, you can get lower costs. We tend to think that the lowest cost is always with a fully integrated solution. And if we'd had this chat 10 years ago, I probably would have told you that.
Starting point is 00:09:48 However, we do see with advanced assembly techniques that we're able to get lower costs by taping out those things that, like I said, don't scale very well in an older node. both the design costs, the masking cost, the wafer cost, die costs are lower. And as long as we have a low enough assembly cost that can handle die sizes that are dissimilar dye sizes and give us that flexibility. And that can be a really good choice. So to recap, faster time to market, mix and match between foundries. And if you're careful and thoughtful about it, lower cost.
Starting point is 00:10:31 Yeah, that's the whole thing about chiplets. That's great. So you mentioned the partnership with Deca. So why did SST decide to partner with Deca and what capabilities do they add? Yes, we looked around and SST is owned by Microchip. And Microchip is one of the leading microcontroller companies. And we're thinking about, well, is there a possibility by 2030 that microcontrollers and other similar SOCs, are they going to to be built with chiplets and what we call, you know, disaggregated MCU or disaggregated system.
Starting point is 00:11:08 And in order to enable that, I mentioned one of the complications already is you might have two different dye sizes and they're dissimilar. And if you had, let me back up for a minute, if you had one where you have the same die size between, let's say, your 5 nanometer wafer and your dye on your 40 nanometer wafer, well, wafer to wafer bonding is a pretty good technique, and you can put those two together. But that's pretty limiting, because if I have to match the dye sizes between the two, maybe I'm wasting area, I'm not optimizing. Or what if I had radically different dye sizes, one's big and one's small?
Starting point is 00:11:51 So we went and looked around at assembly techniques that could allow us. to have these two dissimilar dye sizes. Now several of them required an expensive substrate. And we wanted to come up with something, how can we eliminate that cost? And so what DECA has is the ability to connect two dye, and they can be dissimilar die sizes. Now this could be done in a 2D or a 3D type of arrangement,
Starting point is 00:12:19 either as possible, but to have maybe 1,000 interconnect. And in their case, they use an RDL redistribution layer to connect between the two dye. And we eliminate the substrate because the dye are actually placed side by side, and we use the molding compound itself to hold the dye in place, and then deposit through RDL, deposit the connections between the two dye, making up, you know, a chiplet, which now has two dye connected between them. And when we look at doing that, we're looking for something that the cost, the advanced assembly cost of this type of processing that Decker calls the advanced patterning
Starting point is 00:13:09 and with the redistribution layer, that's, you're not really talking about like single digit sense of cost. So very low cost as a way to, connect the two. And as I mentioned, the ability to handle both a big and a small die with, let's say, a thousand interconnect. So Dave, we've been talking about chiplets for a while. When do you expect embedded non-volvember memory chiplets to become productized? And which end markets are going to be first? Yeah, it's a good question. And I attended the chiplets I went, which will be coming up again early next year. And I attended a few years ago, and the guys from your old
Starting point is 00:13:50 research had some very interesting numbers to point to the different markets. We're pretty aware that things like server processors and even PC processors have been built by pioneered by AMD and now also Intel in a chiplet format. So this is really common in kind of those high margin areas. But things that markets which haven't embraced chiplets yet are things like IoT or automotive, which is where microcontrollers go. And what we're really trying to prepare for is be ready by 2030 that this disaggregation is going to become much more common. So I do see it's going to be a couple more years. We're here in 2025. So I think it's going to be, see initial products in probably two to three years. And the end markets, I do expect to be
Starting point is 00:14:46 these kind of iot and then also automotive markets very interesting both in japan and germany there is formation of consortium to look at doing triplets for automotive and i know it's been tried in the past and the automotive industry has largely stayed with fully integrated devices but the benefits are becoming so strong for automotive to look at mix and matching faster time to market lower cost, but they're seriously considering now using chiplets for automotive applications. And I think these consortia will be helpful in paving that way. Yeah, I agree completely. That sounds about right. So what is the overlap between shiplets and AI integration at the edge? I mean, AI is coming, right? So what do you see?
Starting point is 00:15:38 AI is obviously the buzzword. There's no doubt about that. So in the case of adding AI to, and again, think of it as microcontroller systems, we're adding capabilities often around sound processing or vision processing. Let's use vision because I think that's probably the best example. We do have customers that have started with the audio processing first, but rapidly move into video. And I'll talk very briefly about how you would use that at the edge. It might be in a camera for threat detection,
Starting point is 00:16:14 a security camera. It could be for navigation of a drone where your size, weight, or power constrain. It could be for other types of vehicles, even automobiles, in terms of having vision capability and doing navigation and avoiding collisions. So those are some of the applications.
Starting point is 00:16:35 Now, if you think about it, we do know that the model sizes in the data center, this is handling billions, tens of billions, hundred billion parameters. At the edge, when we're focusing on a specific application, like Ford Vision, you know, this might be about 100 million parameters that you have to store an inference on. So when you do that, you have a certain capability that you'd want to build in. But the model growth from year to year has been almost an order of magnitude every single year. So if you think about it, if I wanted to add this AI inferencing capability at the edge,
Starting point is 00:17:19 I'd have a certain product today. And that takes me quite a few years to design, develop, get to market, et cetera. But I would want the ability to add on more NVM and more capability to handle those parameters in an extensible way to continue my product line. Plus, I might have something where lower quality vision is okay for one application and higher quality vision is needed, AI vision is needed for another application. Well, if I could just add a chiplet, which enables that additional parameter handling, that becomes very powerful. And now I can have something which a single chiplet handles the lowest end needs of my application. And then I add additional NVM chiplets to it as time goes by, to, as I mentioned, add more parameters,
Starting point is 00:18:15 add more capability. So that's one way that these two definitely overlap. Again, driven by the model size growth, the complexity, and then also one of the great things about chiplets and AI. AI were definitely in experimentation stage at the edge. So I'd want to have that faster time to market, I'd want to have my lower cost for my tapeout is I really kind of find the key markets that are going to drive my products ahead. And I think chiplets really enable that experimentation. So I think all those factors together, we see that AI as an application and adding it to microcontroller types of systems, chiplets give that big benefit and flexibility.
Starting point is 00:19:05 So, Dave, I see you guys at all the conferences you make a lot of investment in the ecosystem. How do customers normally engage with you, folks? Yeah, good question. So for as SST, and that's our primary branding that we're out there with Super Flash, and we can be accessed, whether at a conference, as you mentioned, or also through our website, you can contact us directly, and we can look at what your customer needs are. As we've talked about chiplets, we've developed solutions
Starting point is 00:19:40 which can help customers with their simulation needs and then where they would find the advanced assembly that would bring these pieces together. So reach out to us directly. Again, contact information can be found through the website and we'd be happy to help on NVM needs or, of course, the chipplet needs that we're talking about here. Great. Well, Dave, thanks for your time, and I will see what the Chiplets Summit.
Starting point is 00:20:08 Sounds good. Thanks so much, Daniel. Appreciate the opportunity to talk. That concludes our podcast. Thank you all for listening, and have a great day.
