In The Arena by TechArena - The 6G Future with Muriel Medard

Episode Date: February 27, 2023

TechArena Host Allyson Klein discusses the imperatives for 6G as we head deep into standards definition and the need for collaboration between industry and academia on future standards with MIT's Muriel Medard.

Transcript
Starting point is 00:00:00 Welcome to the Tech Arena, featuring authentic discussions between tech's leading innovators and our host, Allyson Klein. Now, let's step into the arena. Welcome to the Tech Arena. My name is Allyson Klein, and we're coming to you from Mobile World Congress. I'm so delighted to have Muriel Medard on the show with us. She is a professor at MIT and a chief scientist and a number of other things. I'll let her introduce
Starting point is 00:00:41 herself. Welcome to the program, Muriel. Thank you, Allyson. Thanks for having me on. Muriel, why don't you go ahead and just provide some background for the audience on your work at MIT and how that extends into a role that transcends academia and industry. Thank you. Yes, so my title is the NEC Professor of Software Science and Engineering in the School of Engineering. That's the chair I hold. I'm a member of the Electrical Engineering and Computer Science Department at MIT, and my research takes place in the Research Laboratory of Electronics. I lead the Network Coding and Reliable Communications group. My work, as the name of my group indicates, has been around making networks reliable, efficient, timely, robust,
Starting point is 00:01:35 and secure. That work has generally always taken its inception on the mathematical side. I am, by training, a card-carrying information theorist. I'm actually currently the editor-in-chief of the IEEE Transactions on Information Theory, and a former president of the Information Theory Society. My work, which always starts on the theoretical side, I try to bring to the engineering, to the praxis of our field. And that's how I have found myself, as you said, transcending academia and really pushing into tech transfer, into trying to make products better. I'll give you two examples, which I think are the two most salient ones. You've mentioned that I have a chief scientist role. I serve as chief scientist of a company that I've co-founded, Steinwurf,
Starting point is 00:02:33 based out of Aalborg, Denmark, and that company is the one that is currently implementing network coding and really pushing it into real networks. Network coding is a technique for making up for losses or delays in networks. And it's really a key technology for ultra-reliable low latency, for the sort of very responsive and reliable systems that we are all trying to get to work right now. So we work with many companies, such as Barracuda, CreativePoint, and others, to make these reliable and timely networks much more efficient and much higher performing. I'll give you a second example, and then I'll stop and take questions. Recently, along with my collaborator Ken Duffy, who's a faculty member at Northeastern University, and with my collaborator Rabia Yazicigil, who's a faculty member at Boston University,
Starting point is 00:03:36 we have put out a chip, which will actually be presented the week before MWC, so it'll be just fresh off the presses, and which actually breaks the picojoules-per-bit, the energy-per-bit, record for decoding, this time not for erasure correction but rather for error correction, the types of errors that happen particularly in wireless systems because of noise, interference, and deleterious effects from the physics of channels.
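To make the erasure-versus-error distinction concrete, here is a toy sketch of ours (not from the episode): with an erasure the receiver knows which symbol is missing, so a single parity symbol can rebuild it, while with an error the corrupted position is unknown, so the same parity can only detect that something went wrong.

```python
# Toy illustration (ours): erasures vs. errors with one XOR parity byte.
from functools import reduce

def xor_all(chunks):
    return reduce(lambda a, b: a ^ b, chunks, 0)

def add_parity(data: bytes) -> bytes:
    """Append one XOR parity byte over the data bytes."""
    return data + bytes([xor_all(data)])

def recover_erasure(codeword: bytes, lost_index: int) -> int:
    """Rebuild the one byte whose position is known to be lost."""
    return xor_all(b for i, b in enumerate(codeword) if i != lost_index)

word = add_parity(b"6G!")                    # 3 data bytes + 1 parity byte
print(recover_erasure(word, 1) == word[1])   # True: the erasure is repaired

# An error flips a byte at an unknown position: the parity no longer
# checks out, so the corruption is detected but cannot be located.
corrupted = bytes([word[0] ^ 0xFF]) + word[1:]
print(xor_all(corrupted) == 0)               # False: detected, not located
```

Correcting errors at unknown positions takes more redundancy and far more decoding work, which is why the energy-per-bit record Medard mentions is a hardware story.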
Starting point is 00:04:09 We have a chip that not only breaks the record, but actually does so without being dedicated to a single algorithm, a single encoding system. So, you know, at MWC, people talk a lot about standards. There's a whole lot of discussion in standards about what error-correcting codes should be used. And I'm sure many of your listeners have followed the types of heated debates that occurred in 5G around the choice of codes. One of the codes that was chosen was low-density parity check codes, which were actually invented in the early 60s by my thesis advisor at MIT, Bob Gallager. The codes that were chosen for low latency, particularly in control channels, were CRC-aided polar codes, also called CA-polar codes. Polar codes were invented
Starting point is 00:04:58 by another student of Gallager, my friend and colleague Erdal Arıkan of Bilkent University. It turns out that actually, with the new approach that we have developed, which we call GRAND, which stands for Guessing Random Additive Noise Decoding, we can be noise-centric rather than code-centric, and we're able to decode any low-to-moderate-redundancy code. And it turns out that pretty much all codes perform about the same. So, you know, what may be of interest is that, for instance, maybe we don't need to standardize anymore and have those heated debates where everybody ends up upset, and we can all stay friends. That's amazing. And I am going to ask you some more questions about that chip,
Starting point is 00:05:37 because that's very interesting. But I want to start the questions with just a statement: I've been in the tech industry for over two decades, and I don't remember a moment which seemed so fresh with possibility and innovation. And I think that when you look at the advancement of AI and the amount of attention it's getting right now, when you look at the continued evolution of cloud architectures and what they're enabling us to do from a standpoint of automating workloads, and the advance of edge and what that means in terms of being able to deliver compute directly to where data is being created, we seem to be on the cusp of an amazing moment for technology, and networks seem to be at the crux of everything in terms of making all of that happen. Can you provide your view on where you think we are from a standpoint of technology, where we're going, and what makes you excited about the innovation that's being driven today?
Starting point is 00:06:47 Yes, that's a very good point, Allyson. So let me start by agreeing with you that this is truly a period of time which I think is unique, which is fantastic for our field if we make good use of it: as you said, full of promise and excitement and new possibilities, which were probably only hoped for, not even guessed at, a decade ago. I think that the aspect around computing and networking is crucial. I mean, we talk about a computer, but if our computer is not linked to anything, we wouldn't find it very useful. We talk about a phone, but of course, we're using it really as a
Starting point is 00:07:25 computer. So, you know, we talk a lot about computer science, but really, it should be more about networking. So the network and the computing aspects are no longer different. They're really just a single service, where we cannot well tease apart what's compute and what's transmit. So I think that that promise is huge. In the context of, you know, what can be done, for instance, around machine learning, I think that it's crucial that in the next generation, particularly in 6G, we have an approach which allows innovation. And what I mean by that is that having a monolithic, very rigid, designed-by-committee single architecture, with maybe just one or two options at the different layers, is not going to allow full innovation. And machine learning is about optimization. It's a way to do optimization. In order to optimize, you need to have enough choice, enough parameters to optimize. If I only
Starting point is 00:08:33 have choice A and choice B, I don't need machine learning. I don't have that many choices. There's not much I can do about anything. And I think that it's very important for us to have, of course, reliability. Going back to what I said at the beginning of the podcast, that's the whole nexus of my work and my research philosophy. But that reliability doesn't mean we need a single monolithic top-to-bottom stack. It means that we need modularity. We need well-defined, reliable APIs. I'll go back to what I mentioned about GRAND, for instance.
Starting point is 00:09:08 That's an example where we don't need to define the codes in standards anymore, because, you know, first of all, almost all codes are just about the same once they're optimally decoded. And if we can do that optimal decoding, rather than using, let's say, old-fashioned legacy technologies, then we can be more modular. We can do more things. We can take into account the characteristics of the channel and do more interesting things. We need that new modular thinking in order to actually capture the moment. Otherwise, we can end up with something which is very cumbersome, inefficient, and really misses all the opportunities. So I am, as you said, very excited, but I'm also a little trepidatious that we not miss this opportunity. And again, going back to the aspects of machine learning, you have to be able to optimize over something.
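Medard's description of GRAND is concrete enough to sketch in a few lines. The toy below is our own illustration, not the published chip design: it assumes a binary symmetric channel, so the most likely noise patterns are the lowest-weight ones, and it uses the small (7,4) Hamming code purely as an example. The code-agnostic point is that GRAND only ever asks one question of the code: is this word a codeword?

```python
# Toy sketch (ours) of GRAND: Guessing Random Additive Noise Decoding.
# Guess noise patterns from most to least likely, strip each guess off
# the received word, and stop at the first result that is a codeword.
from itertools import combinations

import numpy as np

# Parity-check matrix of the (7,4) Hamming code. GRAND needs only a
# codebook-membership test, so any other code could be dropped in here.
H = np.array([[1, 0, 1, 0, 1, 0, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])

def is_codeword(word: np.ndarray) -> bool:
    return not np.any(H @ word % 2)

def grand_decode(received: np.ndarray, max_weight: int = 3):
    """On a binary symmetric channel, lower-weight noise is likelier,
    so guessing in order of increasing Hamming weight is ML decoding."""
    n = len(received)
    for weight in range(max_weight + 1):
        for flips in combinations(range(n), weight):
            noise = np.zeros(n, dtype=int)
            noise[list(flips)] = 1
            candidate = (received + noise) % 2
            if is_codeword(candidate):
                return candidate, noise
    return None, None          # give up: report an erasure instead

codeword = np.array([1, 0, 1, 0, 1, 0, 1])      # a valid Hamming codeword
received = codeword.copy()
received[4] ^= 1                                # channel flips one bit
decoded, noise = grand_decode(received)
print(np.array_equal(decoded, codeword))        # True
```

Because the loop never looks inside the code's structure, swapping in an LDPC, polar, or CRC-defined code changes only `is_codeword`, which is the sense in which decoding no longer has to be standardized per code.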
Starting point is 00:10:08 And the second thing is not everything needs to be done by machine learning. There are problems which do extremely well when they're tackled that way. There are problems for which machine learning is not necessary and, if anything, can be overly onerous and energy inefficient. So using it, but using it with intelligence, with judgment, with discernment, is very important, as is providing that flexibility, that modularity, so that you can use it with that discernment, that intelligence. The opportunity to create more of a framework rather than a codebook, if I can speak as someone who is not writing the specs for 6G: is that because you want to give
Starting point is 00:10:53 those engineers that are deploying solutions more agility in their designs? Is it because you think there may be technology that's invented after the spec is created that could then be flowed into how 6G will work? What is so important about this need for you from a standpoint of modularity? So I think it's both. It's not just what will be created down the road. It's even things that have already been created and are currently not able to be deployed because of the rigid and fixed approach to standards. Standards, of course, are necessary,
Starting point is 00:11:33 but they're only necessary insofar as they define how one part of the system talks to another. So, you know, we're talking right now over the internet and there are some standards around how things are done and there's a variety of different technologies involved in the connection between you and me right now. And really all you need is to have effectively APIs to allow those different components to talk to each other. So it's both because we have right now a situation where we have massive inefficiencies already, which are leading to, you know, just to be concrete about what I mean by inefficiencies,
Starting point is 00:12:17 we've seen the cost of new 5G spectrum. How much of that cost is really being driven by these inefficiencies? A huge amount of it. Now, you've mentioned also future improvements. Of course, having something that is modular, that is future-proof, will allow new developments, but it will also allow customization. You mentioned ML. If I'm trying to do an optimization which is as relevant and as bespoke as possible, in order to do that, I need some level of modularity, because what am I optimizing over if I have a rigid, prescribed stack? Nothing, right? Or I'm optimizing over very little, in a really post hoc, clumsy, inefficient, and complicated way.
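One way to picture "a framework rather than a codebook" in software terms is a sketch like the following, which is entirely hypothetical, with names of our own invention rather than anything drawn from a 3GPP interface: the standard would pin down only the contract between layers, and any decoder, legacy or learned, could be slotted in behind it.

```python
# Hypothetical sketch (ours): standardize the interface, not the algorithm.
from typing import Protocol, Sequence

class Decoder(Protocol):
    """The only thing the 'standard' fixes: soft inputs in, bits out."""
    def decode(self, llrs: Sequence[float]) -> list[int]: ...

def receive_frame(llrs: Sequence[float], decoder: Decoder) -> list[int]:
    # The rest of the stack depends only on the contract above, so an
    # LDPC decoder, a GRAND-style universal decoder, or an ML-trained
    # one can be swapped in without touching anything else.
    return decoder.decode(llrs)

class HardDecision:
    """Trivial stand-in: a negative log-likelihood ratio means a 1."""
    def decode(self, llrs: Sequence[float]) -> list[int]:
        return [1 if llr < 0 else 0 for llr in llrs]

print(receive_frame([-2.1, 0.7, 3.0, -0.4], HardDecision()))  # [1, 0, 0, 1]
```

The modularity argument in the conversation is that this is also what makes machine-learning optimization meaningful: with a pluggable component, there is something to optimize over.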
Starting point is 00:13:16 When you look at the rollout of 5G and where we are, I know that your talk at Mobile World Congress is about network slicing, and the ability to allocate network capability based on workload requirements is one of the benefits of network slicing. Where do you think the industry is today with delivering the full vision of 5G? And what do we learn from the process of standard creation to solution deployment that can inform how we look at the challenge of defining and moving forward with 6G? Yeah, so I think that that's really the key question that you're asking, you know, what have we done? What have we learned? I think 5G has been
Starting point is 00:14:05 mixed. You know, there have been some really significant leaps forward. I would point, for instance, to using higher frequencies, really bringing massive MIMO online. It has also, in many cases, remained very similar to 4G. Sometimes you can even see it's not that different from 3G in certain cases. So there are many leaps forward. There are also many missed opportunities. The network slicing aspect is, I think, one way of trying to provide that optimization, that customization, in the current situation that I spoke about before. Going back to earlier in our conversation, I mentioned my work in network coding and the fact that network coding provides
Starting point is 00:14:51 low latency, efficiency, reliability. Really what it does, in effect, is it takes services which are individually suboptimal and unreliable and, in a synthetic fashion, provides a service that is reliable and timely. That's really what it does. Whether it's over a single link or over a heterogeneous set of links, it allows you to basically synthesize the service that you need in the most efficient fashion, not by just over-provisioning: a service that has the right, say, trade-off between delay and throughput. Often we think that one is the inverse of the other, but that's not the case.
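What that synthesis can look like in miniature: below is a toy random linear network coding example over GF(2), one flavor of the technique, written by us as a sketch rather than anything resembling Steinwurf's production libraries. The sender streams random combinations of k packets, and the receiver recovers all k originals from any k independent combinations, without ever asking which specific packets were lost or on which link.

```python
# Toy sketch (ours) of random linear network coding over GF(2).
import numpy as np

rng = np.random.default_rng(7)
k, packet_len = 4, 8
packets = rng.integers(0, 2, size=(k, packet_len))        # original data

def coded_packet():
    """One random GF(2) linear combination of the k source packets."""
    coeffs = rng.integers(0, 2, size=k)
    return coeffs, coeffs @ packets % 2

def gf2_rank(rows) -> int:
    """Rank of a 0/1 matrix over GF(2), by Gaussian elimination."""
    M, rank = np.array(rows), 0
    for col in range(M.shape[1]):
        pivots = [r for r in range(rank, M.shape[0]) if M[r, col]]
        if not pivots:
            continue
        M[[rank, pivots[0]]] = M[[pivots[0], rank]]
        for r in range(M.shape[0]):
            if r != rank and M[r, col]:
                M[r] = (M[r] + M[rank]) % 2
        rank += 1
    return rank

def decode(received):
    """Solve coeffs @ packets = payloads over GF(2) (Gauss-Jordan)."""
    M = np.concatenate([np.array([c for c, _ in received]),
                        np.array([p for _, p in received])], axis=1)
    for col in range(k):
        pivot = next(r for r in range(col, k) if M[r, col])
        M[[col, pivot]] = M[[pivot, col]]
        for r in range(k):
            if r != col and M[r, col]:
                M[r] = (M[r] + M[col]) % 2
    return M[:, k:]

# The receiver keeps whatever arrives until it has k independent
# combinations; losses just mean waiting for the next coded packet.
received = []
while len(received) < k:
    pkt = coded_packet()
    if gf2_rank([c for c, _ in received] + [pkt[0]]) > len(received):
        received.append(pkt)

print(np.array_equal(decode(received), packets))           # True
```

The delay-throughput knob Medard alludes to shows up here as how much redundancy the sender streams: more coded packets per source block costs throughput but shortens the wait for a decodable set.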
Starting point is 00:15:42 I can have something where I don't care much about delay, but I really want high throughput. I can have something where I may need moderate throughput, but I need it in a very timely fashion, with very tight latency. And so providing that flexibility, that customizability, is very, very important. I would say at this point, a lot of it is going to be done a little bit on top of 5G. That's to say that there are, as I said, some big leaps forward, but there are still some really missed opportunities, and that may mean that you have to do it a little bit post hoc. And so I'm hoping that in 6G, we remove that need to be post hoc and instead think from the get-go. When data is moving for an organization or for, you know, any different purpose, how can we ensure the security of that data as it moves across the network? And what technologies are being worked on today to create that trust, both for data in flight and data at rest being stored somewhere on the edge or somewhere
Starting point is 00:17:14 in a core network? I think that that's core. That's core to having anything work. And I like the fact that you've talked about both data at rest and data in flight. It connects back to what you mentioned about the edge before. You know, the edge is sort of data semi-at-rest, semi-in-flight: it's resting for a little bit, but it's not meant to stay there, and caches do the same. So I think that melding together of communications, storage, and networking means that even the differentiation between rest and flight itself becomes more and more fluid. I think that I would start out by saying the horse has bolted. That doesn't sound very optimistic, Muriel. It is actually optimistic, but let me start by saying, I mean, just for starters, even if you secured all your
Starting point is 00:18:06 comms, as you said, there's caching, there's storage. So maybe you secured your comms and then you put it somewhere in the cloud that wasn't so safe. So who cares if you secured your comms? Even if you think you secured your entire 5G network and you made sure that people were really secure with their cloud, most of 5G, already most of 4G, is Wi-Fi. My phone is on Wi-Fi more often than it's connected to a 5G station. So are we going to secure my local coffee shop's Wi-Fi router? I don't think so.
Starting point is 00:18:47 So I think the only approach that's viable is really to, I mean, it doesn't mean that you willy-nilly do risky things, but, you know, just to accept that we are where we are and to try instead to have trusted communications, trusted stores, trusted computing over untrusted systems. I spoke about coding. You know, one of the really important aspects of coding is that the sort of mixing that happens is actually very much akin to the kind of mixing that happens for cryptography,
Starting point is 00:19:21 and in particular for a very important portion of cryptography, which is post-quantum security. I mean, we have a bit of a perfect storm right now, Allyson. On the one hand, we have an explosion of heterogeneous, geo-dispersed computation, storage, and communication. My data is all over the place. Who knows where it ends up? At the same time, we have a steady and surprisingly accelerating erosion, if you will, of traditional crypto. I'll give you an example.
Starting point is 00:20:00 Peter Shor, one of my colleagues at MIT, showed a few years ago that if you have a quantum computer, one of the workhorses of cryptography, the RSA scheme, co-invented by Ron Rivest, another one of my colleagues at MIT, is, you know, liable to be broken. I hate to say broken, but, you know, let's say it's vulnerable to quantum computing. Anybody who reads anything around tech is aware of the rise of quantum computing. Again, there are other colleagues in my lab, in the Research Laboratory of Electronics, who are advancing quantum computing at a remarkable speed, making great strides. So you have, on the one hand, a much more messy, uncertain architecture and, on the other hand, an erosion of traditional crypto. You need something to fill it.
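The sense in which RSA is "liable to be broken" can be made concrete without any quantum hardware, since the only step Shor's algorithm needs a quantum computer for is order-finding; everything after that is classical arithmetic. In this toy sketch of ours, brute force stands in for the quantum step, which is only feasible because the modulus is absurdly small.

```python
# Toy sketch (ours): factoring via order-finding, the skeleton of
# Shor's algorithm. A quantum computer would supply order() efficiently;
# here brute force stands in, so only tiny moduli are feasible.
from math import gcd
from random import randrange

def order(a: int, N: int) -> int:
    """Smallest r > 0 with a**r == 1 (mod N) -- the 'quantum' step."""
    r, x = 1, a % N
    while x != 1:
        x = x * a % N
        r += 1
    return r

def factor(N: int):
    """Classical post-processing exactly as in Shor's algorithm."""
    while True:
        a = randrange(2, N)
        g = gcd(a, N)
        if g > 1:
            return g, N // g       # lucky draw: a shares a factor with N
        r = order(a, N)
        if r % 2:
            continue               # need an even order; redraw a
        y = pow(a, r // 2, N)      # a square root of 1 modulo N
        if y == N - 1:
            continue               # trivial root; redraw a
        p = gcd(y - 1, N)          # y*y = 1 (mod N) forces a factor out
        return p, N // p

print(factor(3233))                # (61, 53) or (53, 61): 3233 = 61 * 53
```

The cryptographic point in the conversation follows directly: once order-finding is cheap, so is factoring, and with it the hardness assumption under RSA.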
Starting point is 00:20:56 So I am very optimistic, actually. But again, I think this goes back to what I said before. We need to take this opportunity now. We have a window now to ensure that we create an architecture that allows these sorts of solutions to be implemented, to be instantiated, without being entirely post hoc. So, for instance, there are not very many algorithms which, if you will, have sort of stood the test of time on the security side in the post-quantum sense, for which there are no known attacks
Starting point is 00:21:34 with a quantum computer. They're actually almost all based on coding. And again, many of your listeners may have been following the activity of NIST, the National Institute of Standards and Technology, around providing some recommended post-quantum secure algorithms. But actually, the same coding that allows you to be, or let's say, flavors of the types of coding that allow you to be, more reliable, more efficient, more timely can also allow you, when appropriately
Starting point is 00:22:07 implemented, to be secure over untrusted systems. You've laid out so many things for us to think about. I have one final question for you. As we head into MWC and you look at the landscape of things that are going to be discussed there, you talk about this window of opportunity. What are the key things that you want to see in 2023 from the world of academia and from the industry that tell you that we're on the right path, that we are not going to blow this window and end up with a large mess? I think that what I'd like to see, and which I'm already seeing, I think, more and
Starting point is 00:22:45 more, is a willingness to engage on both sides. And by that I mean I'm hoping that there will be a rekindling of links between industry and academia, where industry, particularly if we go to a more modular approach, is interested in acquiring innovation because it can now be incorporated. There's a spot for it, right? There's no point in learning about innovation if you've got no place to put it. And I think also I'm hoping that academia will be interested in sort of reconnecting more with industry, because they will feel that there is a place for it. And I think that that will lead to more collaboration
Starting point is 00:23:33 and more interesting academic work, more bold and forward-thinking industrial work. Not that the academic work was not interesting, but it's easier to just get lost in things that aren't as relevant or as applicable as they could be. And it's not that industry wasn't innovative, but maybe it might have been less needed if a lot of things were fixed and therefore there was only so much room for innovation. So I'm hoping that there is on both sides a real desire to, as I said, sort of reconnect. Fantastic. I can't wait to ask you this question in a year to see how we've done. It's a very important time as we move forward
Starting point is 00:24:17 with 6G and defining what our future looks like. Thank you so much for being on the program today, Muriel. Where can folks connect with you if they want to continue the dialogue? Thank you for asking. Well, first of all, thank you very much for having me, for the invitation. The best way to connect is by email. My email is my last name, medard, at mit.edu. I'm also easy to find on LinkedIn. Thanks so much for being on, and thank you for the technology that you're working on, as it contributes greatly to how we're communicating today and every day. I can't wait to see your session at MWC. Thank you, Allyson. I look forward
Starting point is 00:24:59 to seeing you in Barcelona.
