Big Technology Podcast - Google Research Head Yossi Matias: AI For Cancer Research, Quantum's Progress, Researchers' Future

Episode Date: October 27, 2025

Yossi Matias is the head of Google Research. He joins Big Technology Podcast to discuss the company's research efforts in areas like cancer treatment and quantum, and to discuss the relationship between research and product. Tune in to hear how Google used LLMs to generate a cancer hypothesis validated in living cells, what a “13,000×” quantum result really means, and how the research-product loop turns papers into products. We also cover whether AI can automate a researcher's job. This conversation was recorded in front of a live audience at Google's Mountain View headquarters. --- Enjoying Big Technology Podcast? Please rate us five stars ⭐⭐⭐⭐⭐ in your podcast app of choice. Want a discount for Big Technology on Substack + Discord? Here’s 25% off for the first year: https://www.bigtechnology.com/subscribe?coupon=0843016b Questions? Feedback? Write to: bigtechnologypodcast@gmail.com

Transcript
Starting point is 00:00:00 The head of Google Research joins us to talk about AI for cancer research, quantum, and whether product and research are getting too close together. That conversation, in front of a live audience of researchers and media, at Google's Mountain View headquarters, is coming up right after this. Capital One's tech team isn't just talking about multi-agentic AI. They already deployed one. It's called Chat Concierge, and it's simplifying car shopping. Using self-reflection and layered reasoning with live API checks,
Starting point is 00:00:32 it doesn't just help buyers find a car they love. It helps schedule a test drive, get pre-approved for financing, and estimate trade-in value. Advanced, intuitive, and deployed. That's how they stack. That's technology at Capital One. Hey, everyone. I'm Alex Kantrowitz. I'm the host of Big Technology Podcast,
Starting point is 00:00:52 and I'm thrilled to be here for a conversation with the head of Google Research, Yossi Matthias about the future of research and how it intersects with product. Yossi, great to see you. Well, thanks for being here, Alex. So, there's been a lot of noise in the AI world recently, a lot of noise. But recently, Google has come up with a hypothesis about cancer cell behavior with an LLM that was then proven out in a living cell. So can you talk a little bit about the significance of this and how it came about?
Starting point is 00:01:23 Is this the beginning of generative AI being used to? potentially cure cancer or was it lucky? What should we think about it? Yeah. First, I think that obviously we see the progress on AI is transformative. And one of the areas that AI can probably do more impact than anything is a healthcare. Because healthcare is really about information-based kind of science. Now, when you bring together disciplines, then obviously you unlock new opportunities. And with AI models, generative AI, we now have better understanding to understand patterns. And by all means, is in a sequence of a lot of research work.
Starting point is 00:02:01 And a lot of magic happens with collaborations. So this one, for example, on the cell to sentence is a collaboration with Yale, researchers and researchers from Google Research and Google DeepMine looking into how to leverage foundation models in combination with the data that we have on cells. So I think that it's a step towards, obviously some of the biggest challenges
Starting point is 00:02:23 that we have on healthcare. There's a lot of more work to do. It's part of a journey. I mean, we're looking into how to use generative AI models, actually for a few years now, how to adapt them to models, how to help them with diagnostics, how to actually empower researchers with the like of AI co-scientists,
Starting point is 00:02:41 which I think about it is really using AI agents to help out sift through the information and do the kind of work that in the past, only, you know, very sophisticated people could do, and now we can actually unlock these opportunities and empower the researchers to ask even bigger questions. Right. I did the reading. It seems like what happened with this model is that it found a substance that hadn't been used to treat cancer cells. They basically get them to raise their hands
Starting point is 00:03:07 to the immune system, which is pretty amazing. Yeah, if you think about it, there's so much information there that we have yet to unlock. Actually, in many cases, we don't know what we don't know. That's why the scientific process of looking for hypothesis. And again, by the way, this is the basic for a eco-scientist, which is about how to help out with generate these hypothesis. But when you think about projects and effort such as the cell to sentence, it's really about how do we actually leverage an AI on the cell information in this case to actually identify the kind of patterns that may be hidden out there.
Starting point is 00:03:41 And again, under the assumption that, you know, there are hints all over the place. One of our effort on the scientific process is to uncover, identify these hints, test them, validate them. This all takes a lot of effort and time, and AI is really empowering that research and accelerates it. Okay. So let's talk about quantum briefly. Google this week had a quantum breakthrough
Starting point is 00:04:04 where the quantum chip was able to do complete an algorithm 13,000 times faster than a traditional supercomputer. It's one of those headlines that we see all the time about quantum. Maybe it's, you know, to the public, it seems more frequent than it does when you're actually doing the research. But we see these breakthrough headlines about quantum frequently.
Starting point is 00:04:23 And then when you ask, well, how far away are we from quantum computing? It's always five, ten years, maybe longer. So can you explain that disconnect and how real we should think quantum is today? So first, quantum computing is a very long-term quest, right? I mean, if you look into some of the basic research, a lot of that goes back to the 80s. In fact, we're very thrilled just recently to have our very own. Michel de Vore, recognized with his colleagues, John Clark and John Martinez,
Starting point is 00:04:56 and being a Nobel laureate for their work from the 80s. And Michelle and colleagues are actually working in our fabulous AI quantum lab in actually building on some of those early scientific breakthroughs and building what we believe is going to be a practical quantum computing. Now, of course, it's a long-term effort. Unlike many of the research efforts
Starting point is 00:05:21 that sometimes will take months or a few years, this one really goes back. But back in 2812, we actually decided, we actually decided that this is time to invest in that. And we have a very steady progress on very measurable timeline and very clear milestone. And of course, everything is validated. This announcement of yesterday is a paper nature
Starting point is 00:05:45 that actually shows the first very first very reliable practical application advantage of a quantum computer over classical computer. And if you think about it, this unlocks potential opportunities, future opportunities on better understanding of molecules and so many different applications. So we see a steady state. Obviously there's a lot of more work to be done. The important thing is actually to make sure that we're having these milestones. And I'm quite optimistic that we are going to see these real life applications in the framework
Starting point is 00:06:16 of about five years. Right. How does quantum change the world if it works? Well, the fact that we are going to be able to ask question and get answers on the kind of, you know, information that is practically out of reach today, that's going to be material change because it's better understanding of the materials of molecules. And it's also going to accelerate AI itself because suddenly we're actually going to have more, you know, if you think about it, the AI today is built on knowledge that we accumulate, and
Starting point is 00:06:48 and build with computation, and then we take it and build the models based on that. Now, just imagine that now you're going to have the capability to create new insights into the world that can then be fed and amplified with AI. So I think it's going to be material change, no pun intended. And exciting thing about research and about this domain as well
Starting point is 00:07:13 is that a lot of the important things which are going to happen we're not even aware of. Because once you uncover opportunity, suddenly it creates the kind of thing that perhaps you did not anticipate, right? I mean, think about AI and what we can do today that for many of us seem like science fiction just a few years ago, and it's just accelerating.
Starting point is 00:07:31 So quantum is going to open up more. And think about the world where we're going to have many more smart people actually working on that. That's going to open up new insights, new novelty, new innovation, and I'm sure new world impact. So you're of the belief that if you bring product and research closer together, you actually end up getting more research breakthroughs faster.
Starting point is 00:07:53 One thing that, so first, I'm kind of both excited about deep research and intellectual curiosity and scientific research as well. I'm a product guy in the Google, I was actually over a decade on search leadership, working, you know, actually leading auto-complete in search and sports experience and trends and so forth. So on the other hand, of course, today, especially today,
Starting point is 00:08:16 It was always the case that research is a driver for everything that we do. But today it's more than ever, because when you think about innovation, a lot of it is built on unlocking capabilities that we should actually solve the research problem and then it goes back. This goes to what I'm really excited about, which I call the magic cycle of research, something I always was excited about. In fact, even as early on my career when I was in Bellabs in their hey days, my most theoretical research was motivated by real world examples and then actually taking the results and applying
Starting point is 00:08:51 them back was to me the most fascinating aspect. Today that's what we do all the time because all of our research projects and efforts are motivated by problems in the real world that if we solve it it would actually unlock opportunities. Some of them longer term, some of them would take years. Many of them are actually within months. Now this magic cycle is about how to drive breakthrough research motivated by real-world problems, then solving the problem, the research problem, quite often publishing it. You know, that's why it's so important to actually have the validation, the peer-reviewed and everything.
Starting point is 00:09:26 That's good. And then taking it back to applying it back to real-world applications on products, on businesses, on science and society, and this generates the next questions. Now this cycle, one of the magical things about Google research is that we are actually working through the entire cycle. And the same team quite often that actually had the breakthrough research is the team that would actually then bring it together with product teams and others, partners, to actually reality and go back to the next big questions and accelerate that.
Starting point is 00:09:58 But let me ask you, isn't there a danger of bringing product and research too close? I mean, you could have the researchers motivated to get into the product cycle. And product oftentimes, it's evaluated by growth, quarter to quarter, and you really want a long-term focus on research. So how do you think about that? Well, first, it's true that in any development environment, one of the important things is to have this balance between what you need to do tomorrow
Starting point is 00:10:24 and how to invest in the future, the innovation cycle, right? I mean, innovation dilemma in product development and businesses, of course, is well known. Research is no different in the sense that we need to manage those priorities all the time. So it's a judgment call, when is the time to actually focus on the breakthrough And quite often it's for long term.
Starting point is 00:10:43 Quite often, actually, you don't exactly know how it's going to be applied. You actually know that this is an important thing, right? You know that, well, if I can make LLMs more efficient, I know it's going to be important. If I can actually have better prediction for floods, oh, there's going to be a way for me actually to bring it to reality. Or if I'm going to have better understanding of health care or genomes, there's a way to do that. Then when you work with the product teams, one important thing, of course, is to know how to do that in an effective way. And by the way, quite often, people are so excited about actually bringing it to reality that I need sometimes to say, hey, it's time actually to go back to the next question.
Starting point is 00:11:20 Right. Because both product and research are so exciting, and having the right timing and the right judgment is always one of the decisions we need to do. So we've talked before, and one of the things that you brought up to me was something kind of counterintuitive because we hear, or maybe not surprising to me, we hear these terms tossed out, invention, innovation, research, breakthrough. breakthrough, breakthrough. But you think there's a real difference between an actual breakthrough and what innovation is. So can you just describe a little bit about what the difference between
Starting point is 00:11:49 innovation and a breakthrough is? Well, first, innovation is something that we're doing all the time. We should do that on product development, on the next generation of what we're going to build. I think that innovation is actually accelerating around the world with new capabilities. When I think about research breakthroughs, this is about problems that currently we don't know how to solve in principle, and we need to somehow make this dent. Now, sometimes some of the applied research is actually to bring together things that are known.
Starting point is 00:12:23 Innovation is something that we apply both on product but also on the research itself, because asking the right questions is one of the most important thing in any research. But also I mentioned earlier the magic cycle. When you think about the magic cycle, it's not, you know, I don't like the term technology transfer because life is never, you build something
Starting point is 00:12:41 let's transfer it and make it in use. It's always this cycle. It's always this making the judgment call. How can I take what I've already built and see and test it and have a pilot or test it out? And then ask the next question. So I think this is part of the innovation applied to the magic cycle itself. And some of the innovation is really understanding that, oh, if this capability is unlocked with research, this opens up all these new opportunities.
Starting point is 00:13:08 I mean, think about conversational AI, right? Some of it is really about early on. It was asking, can I actually have a conversation? And then the next one, how can I actually use it? And then it brings back to the question of what is actually the capability that they need to drive here and building on that? And it's really a combination of both research and innovation in this case. So how important then is the long-term research that is detached from the need to innovate right now? First, no research is detached.
Starting point is 00:13:37 Research, again, as I mentioned, the best research, is research that is motivated by either a need that you already know or by exploring the art of the possible. And when you think about exploring the art of the possible, it's motivated by saying, well, now if I manage to solve it, that is going to unlock things that are actually going to be meaningful for my business, for my products, for capabilities. So it's always connected. To your question, the importance of long-term research is more than ever.
Starting point is 00:14:08 And here's why we are actually, when you think about our job is really to drive breakthrough research that is going to be transformative, that could enable actually products and capabilities and experience and science and all societal challenges to actually be solved in a way that is materially better than we can do today. Now some of it is something you can actually innovate and find the kind of the shorter term research. A lot of it is really to find entirely new paradigm. to think about, I mean, think about the transformers that, you know, developed by Google
Starting point is 00:14:43 research back in 2017, it was a new paradigm that once done, it actually created a lot of the industry, or thinking about some of the work we're doing on genomics or quantum. Quantum, of course, is very long term, as we know. So in many areas, actually, I can see this combination of things that are, we can do that very quickly because with breakthroughs and research and we can have a new algorithm and then apply it very quickly. Speculative decoding is a great example. You know, once we had the right insights, we could very quickly actually apply it,
Starting point is 00:15:16 and then it got its own kind of impact across the industry and industry standard as well, and many variations. And I think that you need to actually think through, new architectures, new capabilities, new ways in which to do generative AI or healthcare, or Earth AI, for example, that is built on years or actually research
Starting point is 00:15:34 when you think about it. Earth AI is about taking all are geospatial models that we developed over the years to tackle various problems and take those state-of-the-art problems with a lot of other models that we developed over the years, then leverage generative AI on top of that and enable anybody to ask any question about Earth and planet in plain language and suddenly get the result which actually is based on combination of all these models. Now if you think about it, this is a long-term research that is based on various components that each of them was a pretty long-term research itself, right?
Starting point is 00:16:08 I mean, I work on flood forecasting started in 2017, now we have a global model serving two billion people, two billion people in 150 countries. It took us years of, you know, magic cycle iterations to get there, and now this comes with other models such as storms, you know, weather now casting, population dynamics, etc., along with a gentic layer of AI to actually enable an unlock new opportunities. If you think about this dynamics of this to get to the point that now businesses, organizations can actually use it to solve their problems, it actually was a pretty long cycle, but there were many milestones in between. So I'm a great believer that in many
Starting point is 00:16:52 cases you take a very long-term vision on something that looks very audacious, but then you actually unpack it into tangible milestones. Some of them are research milestones, some of them are product milestones, that actually helps you get into that kind of, you know, what you're trying to get into this mountain that you try to climb. We'll be back with more from Google Research Head Yossi Matias right after this. Shape the future of Enterprise AI with Agency, A-G-N-T-C-Y. Now in open-source Linux Foundation project, agency is leading the way in establishing trusted identity and access management for the Internet of Agents, the collaboration
Starting point is 00:17:33 layer that ensures AI agents can securely discover, connect, and work across any framework. With agency, your organization gains open, standardized tools, and seamless integration, including robust identity management to be able to identify, authenticate, and interact across any platform. Empowering you to deploy multi-agent systems with confidence, join industry leaders like Cisco, Dell Technologies, Google Cloud, Oracle, Red Hat, and 75 plus supporting companies to set the standard for secure, scalable AI infrastructure. Is your enterprise ready for the future of agentic AI? Visit agency.org to explore use cases now.
Starting point is 00:18:15 That's agn-tc-y-dot-O-R-G. Capital One's tech team isn't just talking about multi-agentic AI. They already deployed one. It's called chat concierge and it's simplifying car shopping. Using self-reflection and layered reasoning with live, API checks. It doesn't just help buyers find a car they love. It helps schedule a test drive, get pre-approved for financing, and estimate trade and value. Advanced, intuitive, and deployed. That's how they stack. That's technology at Capital One.
Starting point is 00:18:50 All right, so let's take it on a practical level now. I mean, you're very close to what's happening in generative AI. You're looking at the latest breakthrough research. Where is the next breakthrough coming from? Beautiful thing about research is that it's really exploring, in many cases, exploring the unknown. And one thing that we all need to be very humbled about is that in any given moment we don't know what we don't know. And the exciting thing is to actually explore that terrain. Of course, it's not at random, we don't try to bump into opportunities, we try to be intentional
Starting point is 00:19:24 about it, we try to take some bets. So the most exciting thing are the things that we don't know yet. Now obviously, we want to look into new architectures, we want to do new insights, we want to be inspired by, you know, a lot of what we do is really inspired by the human brain and people and animals and how we see behavior, and we know there are gaps. We know that certain, you know, people or animals can do things much, much more efficient than we can do as humans. This is actually a proof of existence.
Starting point is 00:19:56 So in research, quite often what you do, you first want to understand that if something is possible, and I have yet to see something that is not, to be honest. And then if you know it's possible, the question is, how do I get there? And what are the steps? So I think there's a lot that we're going to uncover that we're not even aware of.
Starting point is 00:20:16 Briefly, do you think the majority of progress in generative AI is going to come from algorithms or just more compute? I think it's going to be combination. Obviously, a lot of the, you know, progress that we've done, we've seen actually. You know, even going back to the early days of, I mean, the new revolution of deep learning
Starting point is 00:20:35 was taking some ideas that were there before and suddenly when you put enough computing power and enough data, suddenly it has a phase transition in terms of utility and what it can do. So it's always a combination. I mean, think about, we discussed earlier about the Seltra's sentence. So a lot of the material and knowledge is there,
Starting point is 00:20:54 but then when you take a big model, You put a 27 billion parameter model out there, and you build on that, suddenly it unlocks new opportunities. When you take MedGEMA and you put some capabilities of medical information, and suddenly you can unlock new opportunities that you don't know. So some of it is about scale. But then there's a layer of reasoning that we have.
Starting point is 00:21:15 For AI-A co-scientists, for example, it's not only about doing the search out there. It's really about applying the kind of reasoning that typically you'd expect researchers to do. which is to form hypothesis, to actually then go through ways of testing them, and then measuring them. Or think about our work on empirical software to help model building.
Starting point is 00:21:38 You know, when a lot in the scientific process, some of the biggest hurdles is really, you have a problem, you want to build a model. You actually have a bunch of models, just testing and see what's the best and then trying to get in the answers. It's very tedious work. With this AI-based empirical software
Starting point is 00:21:54 that can actually build help you select the right model for that, it accelerates the entire. So obviously, this combination of not only stronger models, but more intelligent models with better reasoning and thinking, as well as the power to do that is one approach. On the other hand, algorithmic innovation, anybody who has been long enough in research knows that there are some problems that at some point somebody comes with this innovation that is an aha moment and, oh, I can actually solve it in a way that was previously impossible, think about the transformers.
Starting point is 00:22:29 There are going to be more algorithmic innovations that are going to make breakthroughs. Some of them are already in the work, and I'm really excited when I look into some of the work our teams are doing on algorithmic innovation. I'm excited about what they see from the ecosystem, the academic community, research communities, and other companies.
Starting point is 00:22:48 But I think the best is there to come. So you're the head of Google Research. How do you convince researchers to work something that's not generative AI related? You know, when you ask yourself, what drives researchers? I would say it's a combination of working on interesting problems that, you know, typically when you have a problem that nobody could solve,
Starting point is 00:23:10 that makes it interesting, right? It's a real. It's kind of a Matho-Lympiat kind of challenge. Problems that matter, that could make a difference. And the intersection of finding a problem that is going to be both interesting, exciting from a research point of view, then something that could be applied
Starting point is 00:23:30 and have a big impact is really the motivation. This is the research cycle that I was talking about. This is the motivation for the brightest researchers. The thing is that we have that across the board. I mean, think about just announcements today. Quantum, think about genomics, think about Earth AI. Now, each of them may have some, some of them may have some
Starting point is 00:23:53 strong General TVI component. And General TVI is an amazing technology that also brings up some exciting questions. I mentioned research on factuality, I mentioned inefficiency. But there are so many other disciplines that, and ultimately people are excited to work on things that matter and can actually apply their brilliance
Starting point is 00:24:14 and innovation and have breakthrough research. So we're at no lack of such important problems and opportunities. And again, I'd like to give a shout out to the amazing team at Google Research, brilliant researchers. And when we bring together talents, looking into the different disciplines, bring people who understand languages and health,
Starting point is 00:24:39 and climate, and quantum, and we bring them all together, then a lot of the magic happens. And it's quite amazing to see how people actually also quite often move between disciplines and bring their insights from one to another. So I think, again, the exciting part of being in research today is also the fact that we have the full stack of research. We have AI infrastructures, great models, world-class research products that we can actually be inspired by and then apply to. So this
Starting point is 00:25:14 altogether enables us to actually get really exciting research on many disciplines, anything from machine learning foundations and algorithms into systems, into quantum, into science, into, you know, applying to societal problems. Okay, I got one last one for you before we have to go. The cancer research, one of the cool things about that, if I get it right, was that the actual model went through all these different potential treatments that hadn't been tried yet and actually found one that would work better than the ones that humans had uncovered. Obviously, this technology, generative AI technology, is going to be applied in research all across the board.
Starting point is 00:25:54 Do you anticipate that it's going to lessen the need for researchers, or are we going to have more? Well, we're going to need many more researchers in all disciplines. I mean, think about what's the role of a researcher, it's really to build on what we can and ask the right questions and build for the next one. Now, the only situation where you need less researchers is if you assume that we, practically almost answered all the questions that we need to have. I don't think anybody here in the audience will think that we are only understanding
Starting point is 00:26:24 tiny bit of what we need to understand. In fact, the opportunity that we have with AI to empower researchers is going to give opportunity, not only for more researchers, but for each of them to ask bigger question, move faster on the research agenda, have better results. I mean, think about AlphaFold, which my colleagues were recognized
Starting point is 00:26:46 with Nobel Prize, Dimison John. I mean, we don't have less researchers working on proteins. They actually have many more, right? But now they don't need to work on the protein folding problem. They're actually using it for bigger questions. With AI co-scientist, again, think about the fact that every grad student, every postdoc, have now their own research lab, which
Starting point is 00:27:10 can help them with literature search, looking at a hypothesis. So now they are going to ask bigger questions. They are going to ask the kind of questions. that previously we expected only very senior scientists to do, and we can actually accelerate the kind of scientific process. Similarly in healthcare, similarly in climate, similarly in education. I mean, with AI, there's an opportunity for more teachers
Starting point is 00:27:31 to do more effective work with more students, and again, we're no lack of opportunity to actually have the next generation be educated in a better way. In fact, one of the things that are most important in my opinion opinion is how do we actually empower the next generation because the innovation is going to come from them to unlock many of the other problems. So the way I think about it, we're so early on in our ability to understand science, to understand healthcare, to understand the world.
Starting point is 00:28:04 In a way, for example, in crisis, our North Star is nobody should ever be surprised from a natural disaster coming their way. And by using AI and having the experts using that, we can actually get closer to the that. On healthcare, there's no reason why anybody should be surprised by a disease that is hitting them, right? So there's so much more work to do. And I think about it as AI as an amplifier of human ingenuity. It really empowers the scientists, the healthcare workers, the teachers, the business people in our everyday life. And the more we're making advancements with AI, then the more we can actually expect all these professionals,
Starting point is 00:28:45 to take on bigger missions, to do bigger progress for the benefit of humanity. Makes me really optimistic about our role at research and in technology in general to actually play a role in actually making this amplification of human ingenuity with AI. Yossi, thank you so much. Thank you very much.
