Big Technology Podcast - The Path Toward AGI, According to Google's DeepMind — With Colin Murdoch

Episode Date: August 30, 2023

Colin Murdoch is the chief business officer at Google DeepMind. He joins Big Technology Podcast for a conversation about artificial general intelligence, discussing why we want to get there at all, and what the path looks like. We also discuss DeepMind’s merger with Google Brain, how pursuing the AI business changes Google, and how DeepMind’s AlphaFold AI is revolutionizing the healthcare space. Tune in for a dynamic conversation with one of the world's leading AI executives. --- Enjoying Big Technology Podcast? Please rate us five stars ⭐⭐⭐⭐⭐ in your podcast app of choice. For weekly updates on the show, sign up for the pod newsletter on LinkedIn: https://www.linkedin.com/newsletters/6901970121829801984/ Questions? Feedback? Write to: bigtechnologypodcast@gmail.com

Transcript
Starting point is 00:00:00 A top executive at the heart of Google DeepMind's efforts to advance artificial intelligence joins us right after this. LinkedIn Presents. Welcome to Big Technology Podcast, a show for cool-headed, nuanced conversation about the tech world and beyond. We're joined by a very special guest today. Colin Murdoch is here. He's the chief business officer at Google DeepMind. We're going to talk a little bit about theory.
Starting point is 00:00:28 We're going to talk about practice and how Google is trying to take the cutting edge in artificial intelligence and productize it. Colin, welcome to the show. Alex, it's fantastic to be here. I'm really looking forward to this conversation. Why build artificial general intelligence? I mean, this is something that is a stated goal of DeepMind, to build AI on par with human intelligence or even something that surpasses it.
Starting point is 00:00:55 Why do it? That's right. Well, I mean, just stepping back for a moment, I think it's really important to think about what artificial general intelligence is or AGI is, because you'll be familiar with a lot of AI systems today. You build an AI system to solve a particular problem. And that works exceptionally well. We've seen huge kind of breakthroughs in fundamental AI research, which has driven really important impact in the world through this form of AI. What we hope, though, with artificial general intelligence is to build a system that can solve multiple different problems. So one AI system that can solve multiple different problems.
Starting point is 00:01:30 And much like us humans, Alex, that means we can take learnings, and the AGI system can take learnings, from one setting and apply it to a new setting. And we expect that, therefore, means that this AI system can create more creative and transformational solutions. And we know this is possible, actually, because humans are a form of artificial, or not artificial, in fact, real general intelligence. And look at the incredible things that we've been able to achieve. And that's why we at DeepMind think it's a really worthy pursuit to develop a form of artificial general intelligence that will hopefully help tackle some of society's biggest challenges, things like climate change and problems and questions in healthcare.
Starting point is 00:02:10 That's actually the core of what we're up to. And it's incredibly important. And I think incredibly interesting. So what do you, what type of research and advancements need to happen in order to get us closer to that point? Well, we are still in early days. But let me give you some examples of what's currently working, what we call generalization. And I can talk about some of the areas of active research we think are going to be needed for us to get there. So just stepping back for a moment, what I mean by the ability to generalize,
Starting point is 00:02:41 and you'll hear this a lot in this field, is the ability to take learnings from one setting and apply them in another setting. And we recently developed, for example, an algorithm called MuZero, which was originally developed actually to play the games of chess and Go. And then we realized we were able to take this algorithm and apply it to, I want to say, the game of YouTube video compression. Now, it's maybe funny I say that, but we were able to take an algorithm developed for games that was a master in chess and use that to dramatically reduce the bandwidth requirement to stream YouTube videos. We did that by understanding that actually a video is a series of individual pictures. If you imagine the transition between each of those pictures, like a step in a game, that gave us the insight that this algorithm would generalize from
Starting point is 00:03:25 playing chess and Go to YouTube video compression. Of course, generative AI is actually a great example as well. So these new tools where you're able to interact with them in a way that is kind of very conversational and get a really surprisingly powerful response, that is one way we're beginning to see this move towards more general intelligence. And maybe I could just jump in and tell you a little bit about how, for example, that recent work in generative AI is allowing us to move closer to AGI. Yeah, let's do that. That's fascinating. Fantastic. So what's maybe surprising to know,
Starting point is 00:04:08 if you've been following the field in generative AI, you've really seen it burst onto the scene in the last, you know, 18 to 24 months, is that some of the underlying breakthroughs were developed about, you know, five years ago. What's happened in the last, you know, 18, 24 months, though, is that these systems have been really, really scaled up. And by that, I mean, the size of the model, the number of parameters in the model, which give the model its power, has grown dramatically. And these systems have been trained on kind of larger and larger data sets. And what's happened is that by scaling up these models, we've seen these emergent capabilities
Starting point is 00:04:48 appear. And by that I mean, it's not always possible to predict exactly what capabilities will appear. But we've had these new capabilities appear that have demonstrated really powerful generalizability. So you can then take a system that's been trained in this way and you can ask it to summarize a document or write you an email. And it wasn't necessarily trained expressly on these tasks, but it's able to achieve these tasks because of the training process and the scaling up that has happened. And that's been, as I'm sure you and many others have been following, a massively important breakthrough in the last 18 to 24 months. But these systems are still not complete.
Starting point is 00:05:28 They still get things wrong. They maybe can't plan in the right way. They maybe can't remember what you did yesterday and help you today. So things like memory, the ability to remember between episodes, planning, the ability to imagine a whole range of different future scenarios and plan effectively in that setting. Those are two areas of active research that are really important. And at a kind of zoomed out level, another is what we call concepts and transfer learning. So as humans, we're able to build this kind of deep conceptual understanding.
Starting point is 00:06:00 And that actually forms a kind of really strong foundation for us to take knowledge that we generated in one setting and transfer that to another. So concepts and transfer learning, planning and memory, all really active areas of research, which I think will help us push the next frontier. And actually, by the way, we haven't necessarily reached the limit of making these models bigger either. No one's quite sure where that limit is. And so that's also a really important active area of research, just making these things bigger and bigger. Where will that go and where will that land? And what more can we get from there? Yeah, we've had Yann LeCun on the show.
Starting point is 00:06:35 And I've been speaking with Yann since 2015, 2016, so seven or eight years, on this point about what intelligence is from the eye of an artificial intelligence researcher, and he's always said that it's the ability to predict and to plan. And it is very telling right now that the research now is all about teaching these AIs to predict and to plan. And in fact, speaking about Gemini, the new Gemini model, I'm pretty sure people from DeepMind have talked about, I think I'm going to just cite this, that the algorithm should be better at planning and problem solving. So that seems to be where we're going. So first of all, I'm going to get, you know, I have a few questions for you about Gemini, but just let's talk about it on a broad level.
Starting point is 00:07:17 How do you teach an AI to plan and predict? So there's a whole range of different active research tasks here. And to be clear, there isn't an answer yet, which is why it's still active research. But one of the ways we motivate this research is by making sure we have tasks that require planning. So we spend a lot of time and investment in building a whole suite of different evaluations and tasks, which then provide the target, if you like, for our research and our research programs to focus on. And that's a really interesting definition of intelligence.
Starting point is 00:07:52 And one of the definitions of intelligence that we use at DeepMind, and was actually created by one of the founders of Google DeepMind, Shane Legg, is that intelligence is the ability to perform well across a range of different tasks. And I really like that definition because it's, I think it's very descriptive and it's very easy to operationalize into a research program. And so the sense of building multiple different evaluations and tasks that provide then a way for us to measure our performance and progress against, whether it's planning or adding memory, is really central to actually the way we conduct research. And then behind that, it's a creative process. So what you're trying to do is bring
Starting point is 00:08:35 together people from a whole range of different disciplines, from neuroscience, from different areas of our research, to come together and have ideas about how we could make progress, and then use the incredible engineering talent we have and the compute resources we have to experiment and take steps forward. That's how we do it at a kind of meta level. And DeepMind started largely with some breakthroughs in gaming. So how is that applicable here? Because I think about predicting and planning and it seems like if you're playing, you know, sophisticated games like Go, then that's basically going to take you in that direction. Yeah, it's a great point.
Starting point is 00:09:15 Because games are a fantastic proving ground for these algorithms. They're fantastic because they're actually hard for humans. They give us the ability to measure how good performance is. There's normally a score of some sort so we can benchmark the algorithm's performance versus the human's performance. And there's a whole raft of different existing games out there that can push and pull the AI's capability in different directions. And by the way, you can develop new games.
Starting point is 00:09:45 And I think maybe the third and final point is that games can run faster than clock time. So you can do many, many iterations in a kind of simulated game much, much faster than you can experiment in the real world, which is why they're such an incredible proving ground and development ground, you're absolutely right, for these algorithms. And we continue to invest deeply in kind of game-like environments for exactly those reasons. Maybe one other important point: there's also a really useful way of testing
Starting point is 00:10:14 algorithms out, to test kind of their limits, and we can check their kind of technical safety to make sure they're doing what we expect them to do. So they're a nice way of developing and testing an algorithm before they break out into the real world. And maybe a nice example here, actually. We often use this technique of first developing an algorithm in a game, to your point, to do planning. And there's an example here in robotics. If you try and train an algorithm directly on a real robot, it's going to take you a long time
Starting point is 00:10:46 because a robot can take quite some time to complete the task. And in the beginning, it may be all over the place. Maybe like a young child learning to walk. What we do is we create a simulation of that robotic environment, a robotic arm stacking blocks. And we train the algorithm in that simulated environment until it gets good in that simulated environment. And we take that algorithm and then we apply it to the real robot stacking blocks.
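The simulator-first recipe described here can be sketched in miniature. To be clear, this is not DeepMind's actual robotics stack: the one-dimensional "move the arm to the target" task, the drag parameter, and the search over gains below are all invented purely to illustrate the train-in-simulation, transfer-to-real pattern.

```python
# Toy sketch of sim-to-real transfer: tune a policy cheaply in a
# simulator, then run the same policy on the "real" system, whose
# dynamics differ slightly. The policy here is a single gain parameter.

def run_episode(gain, drag):
    """Drive an arm toward target position 1.0; return the final error."""
    position = 0.0
    for _ in range(20):                       # 20 control steps per episode
        position += gain * (1.0 - position) * drag
    return abs(1.0 - position)

def train_in_sim(sim_drag=1.0):
    """Cheap 'training': search over candidate gains in the simulator only."""
    candidates = [i / 20 for i in range(1, 20)]   # gains 0.05 .. 0.95
    return min(candidates, key=lambda g: run_episode(g, sim_drag))

gain = train_in_sim()
sim_error = run_episode(gain, drag=1.0)       # error in the simulator
real_error = run_episode(gain, drag=0.8)      # the "real" robot has more drag
```

The gain tuned entirely in the idealized simulator still lands close to the target when the dynamics change, which is the sense in which the transferred policy comes out pretty good on the real system.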
Starting point is 00:11:13 We discover it's actually then pretty good out of the blocks. And in the real world, the robot can then begin to build on the training there. So the way that I picture this happening is, like, it feels like most of the general public has started to get a chance to like start talking with AI via these large language models. So, you know, when I try to conceptualize like what this might look like down the road, I start thinking that like when I'm speaking with a ChatGPT or a Bard or a Bing, it starts to remember who I am. It starts to be able to accomplish tasks for me.
Starting point is 00:11:46 It starts to be able to help me plan. You know, is that sort of like the next step here? Is that where this research is building toward? That's right. So you're able to converse with these dialogue agents today as you've discovered. And you can have actually quite a meaningful and important conversation. But it might not have remembered what you did last week, for example.
Starting point is 00:12:11 They've got limited kind of context windows, as you may have heard it called. But what you really want it to remember, as you've noted: what did I do yesterday? What did I do last week? What's my preference when it comes to kind of looking at a given film, for example? Because next time I ask to watch a film, it wants to know what I watched before or maybe what my preferences were. So that ability to remember more about our previous interactions actually becomes really important. I mean, yeah, these things have like the memory of a goldfish. It's like you're talking with it and then five minutes later, it's like, hey, just remind me what you said. It totally forgets. So that's like one step. Yeah, yeah, absolutely right. It's a really important area for us to expand the memory of these systems.
Starting point is 00:12:49 Sometimes we refer to this as episodic memory. So they remember episodes, important episodes in the past. So they're able to bring to bear that important understanding. Then when it comes to planning about the future, these systems need to be able to stop and reason about the right sequence of steps to take. So, for example, I want to plan a holiday. You know, I want to go here, then I want to go here, and I want to go there. And the series of things I want to do may change over time, I may give up on a flight. If you can ask one of these systems today, they can come out with a pretty good response on some of these things, but they aren't able to plan based on what you've done in the past
Starting point is 00:13:34 and what would a reasonable kind of itinerie be that's changing over time. I'm actually going to hold on my family very soon. This is very live on my mind. I don't think the AI system just can really get to a level that I would really want them to at this point. Right. And so we think about where this is going.
Starting point is 00:13:49 Right. And so we think about where this is going. We talked a little bit about, you know, being able to predict and plan. And we talked about, you know, we sort of hinted at multimodality, right, like having a model that's generalized, so being able to do text, but also, like a human, we can talk, we can read, we can see, we can process. And most of these models have just been text or computer vision or computational. And it does seem like the next step is really going to be bringing them all together. That seems like a massive technological feat. But my understanding is that that is something that's being worked on inside Google with this new Gemini model. I mean, those two descriptions that I just read are both Gemini. So talk a little bit about what Gemini is and how it's going to take us on that road. Gemini is one of our latest research programs. And you're absolutely right. One of the really important areas it's touching on is what we call multi-modality. It's a bit like the human senses you just described. We can kind of use all our human senses together and
Starting point is 00:14:49 combined to achieve the goal we're setting out to achieve. So it will bring in things like text. We'll bring in things like images, so that we're able to input those things, but also output both those things. So you might have a question about something you can see. You can share that image and you can also ask a question about it. You may want to then adapt something in that image by saying, please edit this element of the image, and it can do that for you. So bringing together these different modalities is something that is a really cool and important part of that Gemini program, as well as the kind of memory and planning architectures that we discussed earlier. And maybe a final important component is, you know, we're hoping to develop models of different sizes and scales. So there'll be kind of different sizes of these Gemini models, which can then be applied to different use cases depending on what's important. I mean, has there been anything about training Gemini that's surprised you? Or is this kind of like where you think it's supposed to, where, yeah, it should have been
Starting point is 00:15:50 going the whole way? I'm not deep in the Gemini research program myself. But what I would generally share, and it's not Gemini-specific, is that when it comes to training these large models, I think people in general have been surprised that as you make these models bigger, they get more capable and they start to demonstrate these capabilities that you wouldn't necessarily have planned or expected. And in the field, these are generally referred to as emergent capabilities. And I'm not sure if we've fully got to the end of that process yet. So there's a kind of almost a constant state of surprise as these new capabilities emerge. Right. So I was speaking with some folks at Google and trying to figure out, like, what to ask you about.
Starting point is 00:16:34 And someone brought up talking about modes of training. So I'm curious, I want to ask you about the DeepMind approach versus this new approach that's, or maybe not new, but definitely is gaining share in people's minds, called constitutional AI. So I'm just going to read to you, and our listeners, what constitutional AI is from a recent New York Times article, and I want to get your take on whether that's the right way to train these models. So it says constitutional AI begins by giving an AI model a written list of principles, a constitution, and instructing it to follow those
Starting point is 00:17:08 principles as closely as possible. A second AI model is then used to evaluate how well the first model follows its constitution and corrects it when necessary. You know, I'm curious what you think about this approach and whether that's something that, you know, Google would consider employing. And if not, why not? I would generally think about this approach, and other approaches like this, as a way of ensuring these models are behaving in the way that we want them to behave. And we do definitely think about that very deeply. It's very important to everything we do.
Starting point is 00:17:43 And there are different ways to do that. One way is actually by having an AI system like the one you've described provide feedback to the model that you're training about whether it's behaving in the way that the designers would like that system to behave. And that's certainly something, it's all part of the overall approach. Another important way, actually, is that you have humans providing feedback to the model. This is a process called RLHF that folks might be familiar with, where human raters interact with these models, observing the constitution, and provide feedback to the model
Starting point is 00:18:24 And actually, at the moment, that's a really important part of, I think, the core research process, because humans are actually very good at this. There's a kind of secondary benefit of that is that we are beginning to understand how we can begin to embed more and more human feedback into the model process. So I think in general terms, yeah, this is a really important part of how we approach research to make sure the model is one aligned with the sort of, if you like, constitution that the designers and the society ultimately would like these models to be behaving in accordance with.
Starting point is 00:18:59 Colin Murdoch is here with us. He's the chief business officer at Google DeepMind. When we come back, we're going to talk a little bit about the business side of these models and especially how they're being applied within Google. Back right after this. Hey, everyone. Let me tell you about the Hustle Daily Show, a podcast filled with business, tech news, and original stories to keep you in the loop on what's trending.
Starting point is 00:19:20 More than 2 million professionals read The Hustle's daily email for its irreverent and informative takes on business and tech news. Now, they have a daily podcast called The Hustle Daily Show where their team of writers break down the biggest business headlines in 15 minutes or less and explain why you should care about them. So, search for The Hustle Daily Show in your favorite podcast app, like the one you're using right now. And we're back here with Colin Murdoch.
Starting point is 00:19:49 He's the chief business officer at Google DeepMind. Colin, what is the state of the merger between DeepMind and Google Brain, which just came together to be able to work together in a way that hasn't happened for years? That's right. So we've just formed Google DeepMind from the team that was at DeepMind and the team that was at Brain. And actually, these teams have been working together for quite some time in the background. I think the recent merger, if you like, to form this super unit has come at a really important time in the overall development of AI.
Starting point is 00:20:27 We're kind of in this super era or golden era of AI development. And we just thought it was the right time now, for that reason, to start to bring together the talent, but also the compute and the resources, so that we could make sure we were focused and organized in the right way for the next phase. And I've actually been at, well, DeepMind, now Google DeepMind, for about nine years. And the pace and the change and the kind of frontier that we're working at means we're constantly needing to refine the way we organize to make sure that it fits where we are in the kind of technology evolution cycle. And, you know, it's going great. I'm really enjoying kind of getting to know an entire new team and we're making good progress. Right. And so it's so interesting because, you know, DeepMind's areas have been gaming, working on protein folding, which we're about to talk about.
Starting point is 00:21:20 Google Brain, you know, maybe more search-related. So how much of your activities are now going to be focused on the core Google business versus some of this other type of research? I think it's interesting to know, even at DeepMind, and actually this is very close to my role, that we have for a long time been taking the technology that's been devised in our fundamental research programs and applying that to Google's products and services. So that's actually been a core part of both these groups, and actually is now a fundamental part of what we do at Google DeepMind. So we're both advancing the state of the art in the technology, applying that to really big problems in science, and then using those breakthroughs to drive value and impact across
Starting point is 00:22:04 these, you know, what were often billion-user products at Google. And that's absolutely right, Alex, it's fundamental to the new setup at Google DeepMind. So where do you see the bigger business opportunity? Is it going to be, I mean, you're the chief business officer. So is it going to be search? Or we talked a little bit about artificial general intelligence. I mean, you have AlphaFold right now that's out in market. Like, where is the future of the business on this front? So search is, of course, an incredibly important part of Google's portfolio. I expect it to continue to be a very important part of Google's portfolio. So we'll continue to do everything we can to drive value in search.
Starting point is 00:22:42 Let me tell you about how I think about it, because this is a technology transfer process, and that's not easy, going from research to real-world impact, by any means at all, even when you're operating at Google DeepMind working with Google. I think about it actually as a matching process. So on the one side, we have all this amazing research and these research breakthroughs, and there's a team of people that are developing those. On the other side, we maintain relationships with all the great businesses and business units across the whole of Google, and in fact, the Alphabet group as a whole.
Starting point is 00:23:14 So we can deeply understand what's important to them and moving their business forward. So we've got this set of solutions on one side and a set of problems on the other side. And then we try and match these two things together. I sometimes joke it's a bit like running a dating service, where you're trying to match problems and solutions. So we go ahead and do that. And then we try that out. And if it works, we go ahead and launch. So there's a process there.
Starting point is 00:23:37 That's what I want to share, and it's quite important to share: it's quite a systematic process. It's one of matching technology solution and product problem as defined by the business. Search is an important area. We've done a lot of work with YouTube as well. So, for example, as I talked about a bit earlier, we've worked with YouTube to help create better tooling for YouTube Shorts, so you can more easily find the videos you want.
Starting point is 00:24:02 We work with YouTube to reduce the bandwidth requirement to watch these videos. We've even worked with internal teams to create better coding tools so we can kind of really power up all the developers at Google. We've worked with other teams at Google to do things like predict the output from wind farms that Google's part of so we can make more efficient use of energy. So there's a whole range of different applications. And I think that's important to recognize. Yes, search, I think, will continue to be really important. And also, I expect us to kind of weave the technology into all parts of Google so we can really help lift the whole business.
Starting point is 00:24:44 I think that AlphaFold, and maybe we've touched on it earlier in this conversation, sort of gets talked about as sort of a thing that exists and, you know, okay, it happened and then people move on. But I actually want to hear a little bit about, like, what is going on with AlphaFold? Talk a little bit about the breakthrough itself and how it's being applied right now. Fantastic, yeah. So we did touch on it earlier on, but just as a brief recap, AlphaFold is a way of determining the structure, the 3D structure, of proteins, which ends up being really important
Starting point is 00:25:15 in a whole range of different fields. And this is something that folks have been trying to do for many years, but it took years to determine the structure of just one protein. But with AlphaFold, we've gone from years to minutes, to even seconds sometimes, to determine the structure of a protein. So what does that mean in practice? Well, we've used AlphaFold now to map all 200 million proteins known to science. We've made that available to everyone.
Starting point is 00:25:45 And actually, someone estimated recently that that's probably saved about a billion years of PhD time because, you know, on average you'd probably spend about a PhD to determine the structure of just one protein. That's available to everyone now. We've had hundreds of thousands of biologists and scientists from around the world that are now tapping that database to be able to advance their particular work in their particular domain. Let me tell you some stories about how people are using this. There's actually a team, I think, at the University of Colorado that are using AlphaFold predictions as part of their work.
Starting point is 00:26:24 So they're focused on the problem of antibiotic resistance. We kind of take antibiotics for granted in most parts of the world, and that's a great thing. But the bacteria are developing resistance, and there are increasing cases. I think in the U.S. alone, there are probably millions of cases a year of antibiotic-resistant diseases. And that's an important, also quite a scary, problem. So there's a team of scientists working on how to address antibiotic resistance. There's a particular bacteria involved in antibiotic resistance. And they've been trying to determine the proteins on this bacteria for a number of years, but hadn't yet made an advancement. With AlphaFold and the protein structures from AlphaFold, they were able to solve
Starting point is 00:27:10 that particular protein in minutes. They've gone from years to minutes, really unblocking and accelerating that research. I think that alone is an incredible, incredible example of how AlphaFold is impacting the world. There's another equally important example. There's a group working on developing malaria vaccines, a disease that devastates hundreds of thousands of lives every year. And they've been able to use these AlphaFold structure predictions, with different proteins, but still AlphaFold structure predictions, to speed up their research into malaria vaccines. So a couple of examples in healthcare. And there's other groups focused on neglected diseases where it may have been too expensive to do this the traditional way. But with AlphaFold
Starting point is 00:27:56 predictions, they're making advancements. A slightly different example, but I think equally important and quite cool, is there is a group at the Centre for Enzyme Innovation, which I think is at a university here in the UK, which is focused on developing enzymes that can eat the plastics that clog up our landfills and our oceans. And they've been able to use these protein structure predictions to speed up their work into producing plastic-eating enzymes. And those are just a kind of sampling of the different ways that AlphaFold is being used today. It's quite difficult to keep up. There's a new group almost every week coming up with a way of using these predictions in their work.
Starting point is 00:28:34 There's a kind of new, new grit almost every week coming up with a way of using these predictions in their work. Yeah, that's wild. So as AlphaFold applications expand, I mean, as Google comes up with more, you know, programs like this, does it change the nature of Google's business? I mean, AlphaFold and Google Search are very different. So talk a little bit about how that fits together.
Starting point is 00:28:59 Yeah, it's a good point. These two things are quite different. And AlphaFold is a good example here. I described some of the ways that it is having impact in the world. When I looked at AlphaFold, to your point, I was like, well, how does this work with Google Search? It's not obvious, right? How do these two things knit together? What's the match there?
Starting point is 00:29:19 So I took a step back with my team and thought about other ways we could employ and deploy this. We looked across a range of different areas and business opportunities, by the way, from agriculture to all sorts of areas. But in the end, we concluded that actually there is a great opportunity here in drug discovery. It takes 10-plus years to develop a drug. And then often when it goes into clinical trials, it fails.
Starting point is 00:29:49 There's a very, very high failure rate. So you've spent all that time and money and investment, and it doesn't actually make it through and solve the clinical need you're concerned about. So having understood the kind of scale and importance of the problem, the opportunity, that then gave us impetus to form a new company. So we formed a new company, which is a sister company now to Google DeepMind. It's part of the overall Alphabet group. It's called Isomorphic Labs. It's about two years old. And its mission is to use AI to reimagine drug discovery. And the team is making fantastic progress. I'm really excited to see how that work
Starting point is 00:30:26 will help reimagine that whole process. Now, there's definitely more research to do that. That's not just kind of alpha-fault and done. There are kind of alpha-fold scale problems along the way. That's a really good example of where we've been able to set up something new based on a breakthrough like alpha-fold. And I think there could be other advances that come in science that may trigger a similar sort of arrangement.
Starting point is 00:30:50 Let's end with this. We just had a year where people have been going bananas over large language models. I got a question today when I mentioned I was going to be interviewing you. People want to know: what is the next model breakthrough that's not an LLM, that people aren't paying attention to, but will be as impactful as what we've seen with these models? I'm really excited. I don't know exactly what the breakthrough will be,
Starting point is 00:31:16 but I'm really excited about the union of these LLMs plus reinforcement learning. I'm excited about that because I think there's a lot more to come from reinforcement learning, and I know at Google DeepMind we have a great deal of expertise in that. So I expect to see the fusion of those two things create some really powerful and important breakthroughs. Colin Murdoch, thanks so much for joining. You're welcome. Great to be here. All right. Thanks everybody for listening.
Starting point is 00:31:41 Thank you, Nate Gwattany for handling the audio, LinkedIn, for having me as part of your podcast network. And all of you, the listeners, great to have this conversation with Colin here for you. Hope you've enjoyed, and we'll see you next time on Big Technology Podcast. Thank you.
