ACM ByteCast - Xin Luna Dong - Episode 60

Episode Date: November 20, 2024

In this episode of ACM ByteCast, Bruke Kifle hosts ACM and IEEE Fellow Xin Luna Dong, Principal Scientist at Meta Reality Labs. She has significantly contributed to the development of knowledge graphs, a tool essential for organizing data into understandable relationships. Prior to joining Meta, Luna spent nearly a decade working on knowledge graphs at Amazon and Google. Before that, she spent another decade working on data integration and cleaning at AT&T Labs. She has been a leader in ML applications, working on intelligent personal assistants, search, recommendation, and personalization systems, including products such as Ray-Ban Meta. Her honors and recognitions include the VLDB Women in Database Research Award and the VLDB Early Career Research Contribution Award. Luna shares how early experiences growing up in China sparked her interest in computing, and how her PhD experience in data integration laid the groundwork for future work with knowledge graphs. Luna and Bruke dive into the relevance and structure of knowledge graphs, and her work on the Google Knowledge Graph and Amazon Product Knowledge Graph. She talks about the progression of data integration methodologies over the past two decades, how the rise of ML and AI has produced a new generation of methods, and how knowledge graphs can enhance LLMs. She also mentions promising emerging technologies for answer generation and recommender systems such as Retrieval-Augmented Generation (RAG), and her work on the Comprehensive RAG Benchmark (CRAG) and the KDD Cup competition. Luna also shares her passion for making information access effortless, especially for non-technical users such as small business owners, and suggests some solutions.

Transcript
Starting point is 00:00:01 This is ACM ByteCast, a podcast series from the Association for Computing Machinery, the world's largest educational and scientific computing society. We talk to researchers, practitioners, and innovators who are at the intersection of computing research and practice. They share their experiences, the lessons they've learned, and their own visions for the future of computing. I am your host, Bruke Kifle. The rapid evolution of artificial intelligence and data management has redefined how we access, interact with, and make sense of the vast amount of information available in the digital age.
Starting point is 00:00:40 At the heart of this transformation are knowledge graphs, an innovation that connects and organizes disparate data into structured, meaningful insights. These powerful systems enable machines to understand complex relationships between data points, opening new frontiers in search, personalization, question answering, and beyond. From early breakthroughs in data integration to pioneering the creation of knowledge graphs at scale, our next guest, Dr. Xin Luna Dong, has been at the forefront of this field for over two decades as a world-renowned expert in knowledge graphs and data integration. Luna is a principal scientist at Meta Reality Labs, leading the ML efforts in building an intelligent personal assistant, innovating and productionizing techniques on contextual AI, multimodal conversations, search, question answering, recommendation, and personalization. Prior to joining Meta, she spent nearly a decade working on knowledge graphs at Amazon and Google, and another decade on data integration and
Starting point is 00:01:36 cleaning at AT&T Labs and at University of Washington, where she received her PhD in computer science. She's the recipient of various awards, including the ACM Fellow and IEEE Fellow. Dr. Xinluna Dong, welcome to ByteCast. Thank you. Thank you very much for the intro. You know, I'm very excited for this conversation. I'd love to open it up with sort of an open question, just to understand what are some of the key inflection points in your personal and professional journey that have inspired you to pursue a career in computing and specifically in knowledge graphs and data integration? Nice. That's a very good question. So it's actually a whole bunch of things
Starting point is 00:02:20 happening naturally one after another that eventually lead me to do computing and eventually lead me to do knowledge graphs. So let's start with computing. So I was born in China and I grew up there. When I was eight years old, that's the time the country is still poor. And I remember we still need tickets to buy rice and we can buy like fish twice every month. And so there is like, there was one day, my mom, she worked for middle school. And she told me they got a computer. So that is, I think it's called COM35. And another one is Apple II. So they got two personal computers. And she told me, hey, you know what?
Starting point is 00:03:12 You can come to play video games. So that's my first interaction with computers. And then I was in my third grade in elementary school. And after playing the video games, very simple ones, I started learning coding programming. And again, that was very hard for me because at that time I haven't learned English. And I remember I look at the very simple code, like computing the sum from one to 100. And I look at the code and I have no idea what it means and why it works. And at that time, I remember when I see I-F, I don't know what is the meaning of that. But basically, I learned,
Starting point is 00:04:01 okay, this means there are two branches, and the T-H-E-N goes to one branch and the E-L-S-E goes to another branch. And it's similar for other commands. So that's how I got started. It's just a magic to me. And even though most of the time, as I recall, I just typed in the code letter by letter, number by number. But then seeing the results is fascinating. And then gradually, I started understanding why it works. And I coded and started participating in those coding competitions.
Starting point is 00:04:40 And I remember the turning point is high school. So when I was at high school, I was starting thinking about what I should do for college and for future career. And there were many like suggestions, proposals from my friends, from my parents, but nobody said computer science or doing coding is a good job.
Starting point is 00:05:07 And at that time, I was hesitating. And my computer teacher, coding teacher, handed me some book explaining the A plus algorithm. Sorry, A star algorithm. And then I realized, oh, there is some way to give computers some intelligence. And it is not just like a fast and traverse the whole tree, traverse all of the solution space. It can actually do some like very smart cutoff and do something smart to find the solution. It's so fast. It is so smart and it could do something that is really amazing. So I remember that A star algorithm so well. And that's the point when I started thinking, okay, maybe I will just do this for college. And then I got my bachelor degree on computer science,
Starting point is 00:06:14 then master's degree, and then PhD on computer science. That's how it naturally goes to computing as my career. Yes, that is about computing. Amazing life story. And I think it's quite exciting hearing you say that you learned programming before you learned English. And so your first coding language to actually help you learn the language as well is actually quite such a unique experience. You know, beyond your sort of entry or journey into the computing profession more broadly, what prompted some of your interest in knowledge graphs and your studies? Yeah. So this somehow also is related
Starting point is 00:06:53 to my childhood. So when I grew up, before I went to elementary school, again, the family is so poor that I don't remember I have many books, maybe a handful of books I could read, but I do not have books of my own. And I also remember when I went to elementary school at some point, I think that's my third grade. Finally, we got this library card and this was like a huge gift to me. Why do I mention this? Because we don't have books. It's so hard to get to information, to get new information. So whatever questions you have, you don't have much to read to understand it. And I remember we have newspapers. So I remember my mom oftentimes will like cut some of the newspaper articles and then paste it to some other old newspapers or like magazines or whatever. And that's how we collect information. So I would say in my childhood, even until like mid-high school, it is kind of this crave for information.
Starting point is 00:08:10 How to get to understand more information, get more information, and get some information to answer my questions. That's always this craving. How do I get that? And then suddenly, I think it is at the time when I went to graduate school, suddenly everything changed. And we found, okay, on the web, there is so much information and you can't easily find what you want. And that is actually pre-Google time. And with all of this, there is this idea of I want to get all of the information. I want to organize them in some way that I can easily find things I like. And those things are, I would say, subconsciously. And then other set of like coincidences, I got an offer from UW. My advisor is Alon Halebe. He worked on data integration
Starting point is 00:09:08 and I was assigned as his like a temporary student for my first year of PhD. And then gradually learned what he is doing and found it fascinating. And all of this come together that I started working on data integration. And after I work on data integration for almost a decade, so that is the time when there has been this knowledge card launched on Google search. And that is the time I started knowing knowledge graphs and knowledge integration, which I would say is a natural extension of what I have been working on data integration. And then I came to this field. Wow, that's such a beautiful story. your personal journey and your personal desire from a young age for knowledge, for information, and now being able to pioneer essentially a lot of the work that's enabling knowledge and information discovery for millions of users, billions of users at a global scale, I think is
Starting point is 00:10:19 quite a beautiful journey. But with that, I actually want to learn or dive a bit deeper on some of your work on the creation of knowledge graphs at scale. Obviously, you've had an impact at Google, at Amazon, at Meta. For our users or for our audience that may not be familiar, could you maybe describe what is a knowledge graph? Why is it relevant? What does it do? What does it help us accomplish? And maybe in the context of some of the everyday products or services that a lot of people are used to using, what are some of the most interesting or impactful use cases in products like Google and Amazon and Meta? is a graph. So it has nodes and edges. Each node represents an entity, a real-world entity, and each edge represents the relationships between the entities. And a knowledge graph is beautiful for two reasons. First, it is in the graph structure, and so it is structured and it is kind of mimic how people understand the world, how human beings understand the world, entity and relationships between them.
Starting point is 00:11:35 And it makes it easier to understand information and to query information. That's number one. Number two, knowledge graphs also have good reputation in terms of the quality, quality in terms of the richness of the knowledge and also the cleanness of the knowledge. It is highly accurate, high coverage. And so this basically is a good store and a giant store of high quality information. So how has it been changing our daily lives? So the first example, and that's also the first success for KnowledgeRest
Starting point is 00:12:21 is the knowledge panels in search engines for Google, for Bing. When you search something like Obama's wife, you will see a knowledge panel on the right give you the information about Barack Obama as an example. And nowadays, because the Google Knowledge Graph has really grown in the past decades, and for a lot of search queries, you will see this knowledge panel, which put all of the basic information there in a form that is very easy to understand. And the second example I could give is my work at Amazon. And this is also to build a knowledge graph, but for products. And there are two examples why it is useful. One is for digital products, because the knowledge graphs helps normalize information and find the relationships between the entities. We are able to, when we build the knowledge graphs, we are able to connect the low resolution and high resolution songs, for example, music tracks. And one use
Starting point is 00:13:35 case as an example is for the users of Amazon Music, and they could sign up to listen to the high resolution songs. And because we understand the relationships of the songs with a different quality, we can make sure we always serve the high quality ones when it is available. And if not, we then like serve the medium or lower quality songs. So that comes from the normalization part
Starting point is 00:14:07 and relation part of the knowledge graphs. Another usage is that for all of the products, it's very hard to figure out all of the information. And as we build the knowledge graph, again, we generate the attribute value pairs and show that at the Amazon detail pages. And finally, coming to our work at Meta. So here we are building smart assistants. And one assistant as an example is on the wearable devices. It's called Ribbon Met and it's some glasses you can wear, and you can ask
Starting point is 00:14:47 questions to the glasses. And when you ask questions, this is basically question answering, and it needs to pull information from different sources to answer the questions. And we found using the knowledge graphs, we can reduce the latency of QA, question answering, by one second. And we can also improve the quality of this answer generation when we use large language models. So here are the several examples. I think that's, you know, you really described the importance of this technology, but also just how widespread it is in the day-to-day products that we use, whether it be music streaming, product sort of shopping, or even things like search. I'm quite familiar with the search space having worked on the Microsoft Bing product. And so knowledge graphs were a very integral part
Starting point is 00:15:45 of the experience that you described with the knowledge panel. So I think it's quite exciting to see how much of a foundational sort of core technology this is for discovery and information access. You know, one thing that came to mind, you know, as you were describing, obviously a big part or a core foundation of knowledge models or knowledge graphs is clean, high quality, high fidelity data. And in this digital age, there's a lot of data, the ability to extract, label, and actually build these knowledge graphs, I presume is very challenging. And so how have some of the challenges associated with data extraction, cleaning, labeling and integration sort of evolved, especially with the age or the rise of machine learning and AI techniques? Have you found it to improve? Is it a process? Has it facilitated sort of the knowledge graph process? That's a very good question. So let's first see there have been different generations of methods in terms of extracting, integrating, and cleaning information. The first one, I would call it runtime data integration. So in a sense, web search is one
Starting point is 00:17:02 runtime. You ask a question, a query, search query, and then you see 10 blue links, and then you look at them and figure out your answer. And in parallel to that, the database community comes up with this data integration ideas, where you get one query, this is a database query, And that is translated into the queries that could be understood by the underlying data sources. And their answers are retrieved, sent back to the middle point, and answers are unioned and returned to the users. So that's two decades ago. And that is kind of this runtime data integration. And Knowledge Graph provides this offline data integration. When we build the Knowledge Graphs, in a sense, we are assembling, integrating all of the information, oftentimes in heterogeneous forms, putting them together, normalize it, and then serve it at runtime. This makes a lot of hard work done at
Starting point is 00:18:09 the offline time. So I would call that the second generation. So as we have all of the new AI technologies and machine learning methods, we kind of get the tool to improve each step. So in addition, we get one new generation of data integration. I would call that a data internalization or knowledge internalization into the large language models. And when we train those large language models, they get a lot of data from the web and try to internalize the popular knowledge which occur often on the web into the large language models. So this is kind of a different way of integrating the information. So that's kind of the third generation of data integration. So to recap, two ways that the machine learning models and the large language models are sort of evolving data extraction,
Starting point is 00:19:21 data cleaning. The first one is basically to give new tools to generate better extraction cleaning results. And the second one is to provide a whole new generation of methods for data integration. That's very exciting. I think it's quite exciting to see how, you know, the rise of machine learning and AI techniques are driving improvements in how we do extraction, cleaning, labeling,
Starting point is 00:19:51 and integration. But I think there's also the overarching question of, at present, we're seeing a lot of impressive results with large language models that are revolutionizing natural language processing, a lot of the core tests. And I think you touched on it with some of the work at Meta with the Ray-Ban glasses. So in your view, how do you see knowledge graphs fitting into the future of sort of the tech landscape? Do you see them as complementary to LLMs? Obviously, you described the use case of LLMs or tools to help generate data and cleaning and labeling, but as it pertains to the actual uses in some of the core technologies, whether it be search, whether it be personal assistance, do you see LLMs and knowledge graphs as complementary? Do they serve distinct roles? Where do you see these two coming together? Very good question. So let's say what people
Starting point is 00:20:53 are expecting early last year. So that's the time suddenly everyone is aware of Gen AI, aware of large language models and hoping large language models can do everything in and providing smooth conversations in QA. So at that time, the hope is whatever questions we ask, large language models will answer it. To achieve that goal, basically it requires large language models to have all of the knowledge, word knowledge. And of course, or even every year. For those questions, large language models have very low quality in terms of question answering. Large language models also are not good at answering questions regarding torso-to-tail entities. And surprisingly, I would say oftentimes less than 1% of the entities are head entities. In other words, for 99% of the entities,
Starting point is 00:22:18 they fall in the bucket of torso-to-tail, less popular or not popular at all. And large language models do not have rich knowledge about them and often hallucinate when answer questions about them. And the third thing is in some specific areas, for example, biology, medicine, and large language models do not necessarily have all of the information. And even for basic things like the taxonomy of the concepts, large language models are not good at them. So even though large language models are very good at generating the answers and understand the texts, it does not have all of this information. And so in future, I would guess, hypothesize, that first large language models will continue to be a very good interface to interact with the users, answer questions, understand the user's needs. And in addition, it will continue to have better and better reasoning
Starting point is 00:23:26 capabilities and so can answer complex questions. Third, it will have more and more knowledge, but it may not get all of the word knowledge, especially the factual information and even the taxonomy information. It may not get all of that internalized in the model itself. And it will then resort to knowledge graphs and maybe some other data sources for such information. I would use an analogy. So just like human beings, we have some knowledge in our head and we could reason. And we can, for example, when we write an article, we can do a pretty good job.
Starting point is 00:24:12 However, there are often information, numbers, dates that we cannot remember. And then we need to refer to some external data sources. A knowledge graph will serve as one of such external data sources. And knowledge graph will serve as one of such important data sources. ACM ByteCast is available on Apple Podcasts, Google Podcasts, Podbean, Spotify, Stitcher, and TuneIn. If you're enjoying this episode, please subscribe and leave us a review on your favorite platform. of entry towards general intelligence, but also identifying some of legitimate shortcomings as it pertains to having a knowledge of entities, having a knowledge built in that is fully comprehensive of the world. And so really calling the need or value for knowledge graphs, I think,
Starting point is 00:25:19 is a very core argument here. I'm curious, as you look to the future, what emerging, of course, alongside some of the developments with LLM performance, what emerging technologies or trends you're most excited about in terms of the value of knowledge graphs, but also improving knowledge graph creation, whether it be multimodal AI or progress that's being made in that direction. So what are some emerging technologies that you think will have some critical value in the application, but also the creation of knowledge graphs? Sure, sure. So I view as my mission to help people access information. And I call it provide the right information at the right time.
Starting point is 00:26:09 And this basically requires a few things. The first one is we really need to provide relevant and accurate information. So that's the first part. And the second part is we need to be able to provide information in various modalities. And we need to understand stuff in various modalities, like visual information and context information. And the third part is when we provide such information, we also want to provide it in a way that is personalized to address the user's needs. So related to this, I would say there are a few things that I find it very fascinating. I'm super interested, and I'm also hoping to contribute to it. And the first one, of course, this is RAG. It's basically retrieval augmented generation and how to have the large language models retrieve information from valuable data
Starting point is 00:27:16 sources, including knowledge graphs, and then generate answers, recommendations, et cetera, to the user to answer user questions. So I personally have been working in this field in the past, I would say, nearly 18 months. And it's a lot of work to do. It sounds very natural and simple at the first glance. But if we use the obvious methods, it just does not give us the best results. And this year, we came up with this benchmark called the CRAG, Comprehensive RAG Benchmark.
Starting point is 00:27:59 And we use it to host the KDD Cup competition. And we also used it to host the KDD Cup computation solutions and to state of the art results. But if we gave a score, the score is 0.5 out of one. So we are only halfway there. So there are still a lot to do to improve everything. So that's one area I'm very interested in. And I can see once we make solid progress on that, we can change people's experiences in terms of this getting information and addressing their information needs. So that's the first one. And the second one I'm super interested in is how to really build such information
Starting point is 00:29:13 to allow effortless access to proprietary data. So let's say I have some enterprise data or I'm a small business owner and I have a small catalog, or I'm for this field and I have information about this particular small field. And how can I easily serve the data to external people? And oftentimes those people are not technical savvy and it will take them tremendous amount of efforts to build their own QA system. But how will we be able to have something that is generalizable and can sort of access the information they have in their own storage and access it and use it to answer questions.
Starting point is 00:30:09 So this is the second topic I'm super interested in. And this is, again, related to RAG. The third thing I'm interested in is personalization. So in a sense, I don't know if you have heard of this like Memex vision. This was from 1945, as far as I remember. And the idea is someone wear a camera on their forehead and record whatever they see for their lifetime. And this digitalize their life and they can sort of ask any question about their past. And in a sense, we have made a lot of progress on this, but on the other hand, we are not able
Starting point is 00:30:54 to do this yet. But we are getting closer and closer to this. And will it happen that eventually we have an assistant that would view the world from our own perspectives, look at what has been happening to us, and then use it to answer our utility question as well as provide personalized answers? As an example, we recently launched this feature, RBM, smart glasses. Basically, you could, at the time you park, you could say, okay, remember this parking lot number. And then later on, when you come back to find your car, you can ask for, oh, where did I park? And there are many, many such cases where you could remember your past. And how can we make it even more intelligent? This whole personalized information system, management system, recommendation system to basically build the second brain of people, that will be fascinating as well. And finally, what I want to mention is this contextualization and how do we contextualize everything, QA, recommendation, understand the
Starting point is 00:32:16 user's context and provide contextualize the service, provide proactive services. So that will be very interesting as well. That's a long answer. No, no, I think that's quite exciting. As you're describing some of the work or your future vision for personalization with the ability to effectively have an assistant or a second brain, I think I was just really fascinated
Starting point is 00:32:44 by what a future like that might look like. But I think, you know, a lot of great, great visions for what the future might look like in this space, whether it be the personalization, whether it be how we build systems that are more contextualized or stateful to users and thinking about, you know,
Starting point is 00:33:00 RAG as sort of a key enabler for that and recognizing that certainly there's a lot of progress to be made with retrieval augmented generation, but also thinking about how we can bring that to proprietary data that may not be publicly available or on the web. And so I think those are all critical unlocks for improving the way humans interact with technology and providing additional value and then ensuring that this is accessible in multiple use cases as well. So I think you described a lot of interesting things. I'm curious, are there any upcoming projects or interesting areas of research at Meta Reality Labs?
Starting point is 00:33:40 Or I guess you touched on some of them here, but any developments within maybe the broader AI community that you're particularly excited about, perhaps within the knowledge graph space, data integration space, but also just more broadly, any interesting directions that keep you up at night and are quite exciting for you?
Starting point is 00:34:00 Sure. So I have been mentioning about all of this, like a factuality, personalization, contextualization, multimodality. All of these are interesting projects I have been talking about data integration. This is a problem that different communities have been working on for decades. And it's not a solved problem. And nowadays, when we have data
Starting point is 00:34:35 from different data sources with heterogeneity on the schema, on the form of the data, we still have difficulties to seamlessly integrate them. But I'm hoping eventually in the next decade, maybe not long,
Starting point is 00:34:53 in the next decade, we will be able to provide some seamless fusion and integration of data. And it's not just the data itself. It is the seamless fusion of data and models. And yes, and here models are JAI models, large language models. So some of the data will be internalized into the large language models, JAI models. Some of the data will stay at their original form
Starting point is 00:35:26 and we don't necessarily need to do a lot of data manipulation, data massaging. And some of the data will be put together into something like knowledge graphs. So I kind of feel this is a field that is so hard and we haven't found a solution yet. But with knowledge graph and large language models all coming in space, and we might be able to get there in the next decade to really provide this seamless, I call it dual neural knowledge. Basically, we have knowledge in symbolic forms, in knowledge graphs, and also in neural forms in large language models.
Starting point is 00:36:08 And then people can seamlessly access them through the large language models. I'm fascinated by that vision, and I hope that could happen soon. I think certainly with people like you driving the future of this area, I have no doubt that that will be possible. But I think you touch on a very exciting future for the role of data in have a lot of audience members or audiences around the world who are interested in taking inspiration from the even knowledge graphs or information systems, as they look to embark on a career or sort of a profession in this space? Sure. That's a great question. And I think I have two suggestions. So I started working on data integration in the year of 2002.
Starting point is 00:37:30 It's a little bit over 20 years. And the technologies have evolved so much, improved so much. And the tools we used at that time was extremely different from the tools we used 10 years ago and is very different from the tools we use now. to sort of always make progress and to kind of contribute to the renovations. It's very important to always learn. And there are always a lot of things to learn and how to manage that. I would say my method is to first go deep. So I find an area that is relevant,
Starting point is 00:38:27 and then I go quite deep. And after that, this go deep meaning I read a bunch of papers. It also means I do some of my own research. So I have fairly deep understanding of this small area or maybe a reasonably big area. And after that, broaden it and go deep again. So this kind of brought me from data integration to data quality, meaning integration plus cleaning, to knowledge integration, and then to knowledge graph construction, all of the cleaning integration extraction work, and then to all of this knowledge graph construction, knowledge graph application, and smart assistance. I feel like going deep, broaden, going deep, broaden. This allows me to learn a lot of new stuff to gradually
Starting point is 00:39:27 achieve the goal I had from the very beginning. So that's one thing about keep open-minded. Another thing about keep open-minded is as we enter a field, we often learn something and then form some hypothesis. And for example, I grew up from the database community and I started with thinking that structured data is the best way or is the way that people use to kind of store their data, to access their data. And with those hypotheses, it might limit what I could do. And related to this, keep open-minded, meaning oftentimes like jump out of the box and re-examine all of the hypotheses. So for example, honestly, last time when I changed my job,
Starting point is 00:40:29 when I moved from Amazon to Meta, I chose a field that is not necessarily directly related to knowledge graphs. Knowledge graphs are a part of it, but a small part. And I just wanted to see: to serve end users, are knowledge graphs absolutely needed? And in which way? And what are the other information sources or methods that are critical?
Starting point is 00:40:58 I don't want to just like limit myself thinking knowledge graphs are the only way to do it. So I think I really benefited from that trial. It is not always easy, but this allows me to broaden my scope to, it kind of opened a new door for me. So that's my first pieces of advice. And the second one is, interestingly, it's almost the opposite, focus, focus, focus, focus. And I personally have been active in multiple different research fields, like database, data mining, and recently NLP and adding multi-model as well. And also I have been working as a scientist in industry. So I do research, write papers,
Starting point is 00:41:57 and meanwhile I work on productionizing technologies, building features. And I did like go through the different steps like building up prototypes and then develop technologies and then pushing the last mile to get things out. And it's a big diversity of the stuff. But on the other hand, I feel it's both learning and the lessons. The learning is for all of the stuff I have been doing, there is one theme into it, how to help people access information easily. And because of that, although it could be things from different communities, from different industry versus research, but it all come under the same theme. So there is a focus there.
Starting point is 00:42:46 And it is still much easier for me to grasp information from neighboring communities, neighboring fields, and to enrich my tool set. And the second one, I would say, is learning. Sometimes I got ambitious and I want to do everything. And gradually, I realized, okay, here is my passion. Here is my strength. And I have limited time. Life is short and really, really drill down to what excited me and also what I'm good at. That's such an amazing set of pieces of advice accumulated over such a rich career as a
Starting point is 00:43:26 researcher, as a practitioner, as a developer of products used by millions. And I'll just quickly synthesize them. It's a balance of both having a focus, so in your particular case, the central theme of knowledge discovery and access to information, but within that, keeping an open mind, whether it be out-of-the-box thinking to examine the work that you're doing and the problem that you're solving, but also having a lifelong learning mentality, and so exploring depth but also equally exploring breadth. And I think that's a great set of advice for those looking to explore a career in computing
Starting point is 00:44:06 or more broadly, just discover their life passion and their life career as well. And so Dr. Luna, we just want to say thank you for joining us on ByteCast. This has been an amazing discussion and we certainly look forward to the future impact that you will continue to drive in your line of work. Thank you very much.
Starting point is 00:44:29 ACM ByteCast is a production of the Association for Computing Machinery's Practitioner Board. To learn more about ACM and its activities, visit acm.org. For more information about this and other episodes, please visit our website at learning.acm.org. That's learning.acm.org.
