a16z Podcast: Making Sense of Big Data, Machine Learning, and Deep Learning

Episode Date: May 1, 2015

"Machine learning is to big data as human learning is to life experience," says Christopher Nguyen, the co-founder and CEO of big data intelligence company Adatao. Sure, but then, what IS big data? (especially as it's become a buzzword that captures so many things)... On this episode of the a16z Podcast, Nguyen puts on his former computer science professor hat to describe 'big data' in relation to 'machine learning' -- as well as what comes next with 'deep learning'. Finally, the former Google exec shares how Hadoop and Spark evolved from the efforts of companies dealing with massive amounts of real-time information; what we need to make machine learning a property of every application (why would we even want to?); and how we can make all this intelligence accessible to everyone. ––– The views expressed here are those of the individual AH Capital Management, L.L.C. (“a16z”) personnel quoted and are not the views of a16z or its affiliates. Certain information contained in here has been obtained from third-party sources, including from portfolio companies of funds managed by a16z. While taken from sources believed to be reliable, a16z has not independently verified such information and makes no representations about the enduring accuracy of the information or its appropriateness for a given situation. This content is provided for informational purposes only, and should not be relied upon as legal, business, investment, or tax advice. You should consult your own advisers as to those matters. References to any securities or digital assets are for illustrative purposes only, and do not constitute an investment recommendation or offer to provide investment advisory services. Furthermore, this content is not directed at nor intended for use by any investors or prospective investors, and may not under any circumstances be relied upon when making a decision to invest in any fund managed by a16z.
(An offering to invest in an a16z fund will be made only by the private placement memorandum, subscription agreement, and other relevant documentation of any such fund and should be read in their entirety.) Any investments or portfolio companies mentioned, referred to, or described are not representative of all investments in vehicles managed by a16z, and there can be no assurance that the investments will be profitable or that other investments made in the future will have similar characteristics or results. A list of investments made by funds managed by Andreessen Horowitz (excluding investments and certain publicly traded cryptocurrencies/ digital assets for which the issuer has not provided permission for a16z to disclose publicly) is available at https://a16z.com/investments/. Charts and graphs provided within are for informational purposes solely and should not be relied upon when making any investment decision. Past performance is not indicative of future results. The content speaks only as of the date indicated. Any projections, estimates, forecasts, targets, prospects, and/or opinions expressed in these materials are subject to change without notice and may differ or be contrary to opinions expressed by others. Please see https://a16z.com/disclosures for additional important information.

Transcript
Starting point is 00:00:00 The content here is for informational purposes only, should not be taken as legal, business, tax, or investment advice, or be used to evaluate any investment or security and is not directed at any investors or potential investors in any a16z fund. For more details, please see a16z.com slash disclosures. Hi, everyone. Welcome to the a16z Podcast. This is Sonal, and I'm here today with Christopher Nguyen from Adatao, which is a big data company, and its mission is to democratize data intelligence and help people across the enterprise. And the best way to describe him is as an entrepreneurial scientist. He got his Ph.D. from Stanford in Device Physics. He's a former Google executive. And as a professor, he started the computer engineering program at the Hong Kong University of Science and Technology. He's basically an entrepreneurial scientist who's merged the worlds of academia and doing a lot of startups. So welcome, Christopher. Oh, thank you, Sonal. So actually, Christopher, the way we want to just kick this off is, I actually just want to talk to you starting with big data. I mean, that's a term
Starting point is 00:00:57 that people throw around all the time. It's completely overloaded. It's a buzzword. It means so many things to so many different people. Could you start by just sort of telling me what your definition and take on big data is? There are two ways you can think of the term big data. There's what I think most of the world thinks about when people talk about big data. They think of the Vs, starting out as three Vs, volume, variety, velocity, and so on. And then I think it's now up to seven or eight different Vs, veracity, variance, and so on. I actually don't like that definition. I think that definition is functionally correct, but it focuses on the problems of big data, right? These are the challenges that you have to deal with when you deal
Starting point is 00:01:39 with big data. But the definition skips or misses the part where you ask the question, why do you want to deal with these problems? Right. So it turns out the reason for big data is machine learning. So the reason for big data is machine learning, that's actually kind of counterintuitive, because I've actually heard it the other way around, that machine learning exists because of big data. So I like something that Peter Norvig, the director of research at Google, said when he referred to big data, he says big data is not just quantitatively different, but it's qualitatively different.
Starting point is 00:02:10 In other words, there's something that happens when you have enough data. It crosses a certain threshold. So, for example, if you want to learn whether it hurts to hit your head against a brick wall, about five samples is probably very big data. If you want to learn how to classify images on the Internet, maybe two million samples is not big enough. So it's not so much a matter of how much data you have, but how much is enough to learn from, right? And when companies like Google, you know, I would say sort of one of the original big data companies, when they started their life, the very first batch of data they dealt with was big data.
Starting point is 00:02:49 So the term big data does not exist in these companies. And they've always learned to take advantage of this data to make a lot of decisions. So, well, the way I've heard it is that machine learning is one of many uses for big data. Right. But you're basically arguing for something different. Can you describe what that is and why?
Starting point is 00:03:07 Sure. So if you think about the V's definition of big data, they're all problematic, right? And so we tend not to want to have problems unless there's a reason for, there's a greater benefit to pay that cost, right? And the benefit of big data is really, because we can unleash algorithms at them.
Starting point is 00:03:26 And these algorithms can automatically detect patterns and see these patterns. I want to sort of jump into that right away because a lot of us in machine learning say this all the time. What does it mean to detect patterns and so on and sort of people take that for granted, but then it's a little fuzzy.
Starting point is 00:03:42 And the way I think about big data is when machines learn from big data is very much like human beings learn from life experiences. That's actually interesting. I would actually want to hear more about why you make that analogy. So you're basically saying that machine learning is the way humans learn from life experiences. Do you mean like the way a kid learns to navigate the world for the first time?
Starting point is 00:04:03 That's exactly right. For example, let's flip that around and imagine, would you like to have a child develop without any experiences? And, you know, after 20 years, what would that child, that person, be like? And then why is it that we ascribe wisdom generally to older people rather than younger people? You know, our brain capacity essentially remains about constant after a certain age, 16, 18, 20, whatever research you read. And yet wisdom continues to grow and accumulate. And that's as the brain incorporates life experiences; it is taking in a lot of big data, just like what machine learning algorithms do with data. And the opposite of that is sort of rule-based computing, right, or rule-based expert systems. You can come up with 10,
Starting point is 00:04:50 20, 30 rules and so on. But you can never come up with enough rules to handle the exceptions. Yep, exactly. So is machine learning for the exception handling then, or for everything? How does that work when you're talking about computing? In a very real sense, you can think of it as for exception handling. But I like to think of it, in terms of analogy, as wisdom. You do have the rules, but then you know when the rules don't apply. And the reason you know when the rules don't apply is because you've seen three or four or five corner cases before. Somehow, quote, unquote, intuitively, you find that in this situation, that rule doesn't apply. Well, what we think of as intuition, you can think of as parameters inside a machine learning model. So that's
Starting point is 00:05:34 interesting, but just to be more concrete about that, I mean, that makes a lot of sense logically, but concretely for businesses, like when you think about the business intelligence space and where we've been and where we are now, what's different here? Like, what's sort of happening? What do we get out of it, basically? I guess I'm asking. Right.
Starting point is 00:05:52 That's a great question. Even the term business intelligence, right? Sometimes, you know, we're captured by what we meant in the past. And so what we said in the past was B.I. B.I. being business intelligence. Exactly. Business intelligence can, you know, can be self-limiting. In other words, what business intelligence was was limited by what was available.
Starting point is 00:06:13 So what was available was the ability to essentially look backward. You can ask a lot of questions, what we call aggregations. Aggregations, okay. You have a whole bunch of transactions that come in from all over the world, and you can say, well, how much revenue did we make yesterday from that particular region of the world? And these are sort of backward-looking information because that's all we were capable of doing.
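The backward-looking aggregation Nguyen describes can be sketched in a few lines of Python; the transactions, region names, and revenue figures here are invented purely for illustration:

```python
from collections import defaultdict

# Hypothetical transactions; field names and values are made up.
transactions = [
    {"region": "EMEA", "revenue": 120.0},
    {"region": "APAC", "revenue": 80.0},
    {"region": "EMEA", "revenue": 45.0},
]

# "How much revenue did we make yesterday from that region?" is just a
# grouped sum over past records.
revenue_by_region = defaultdict(float)
for tx in transactions:
    revenue_by_region[tx["region"]] += tx["revenue"]

print(dict(revenue_by_region))  # {'EMEA': 165.0, 'APAC': 80.0}
```

A SQL `GROUP BY` or a distributed engine does the same thing at scale; the point is that the question is answered entirely from records of the past.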
Starting point is 00:06:37 And because there was a particular lack of something, and that something was big data. With enough data from all of those experiences, what we can do is build a model out of that and sort of project into the future. Okay. And so you can think of business intelligence going forward as the ability to apply machine learning algorithms to big data and not just look at past questions, but also future questions. Or asking to predict the unknowns from the knowns.
Starting point is 00:07:08 So what's changed to make that possible? Because in the days of business intelligence, I think of stuff, you know, the products at SAP, and similar companies put out what's changed to make big data possible. I know the big obvious things are just more computing power, but more concretely, like, what's physically making this possible to be able to parse and get these insights out of this data? Right. Yeah, if you think about it, a lot of people have pointed out that big data has always existed, right?
Starting point is 00:07:35 It's always been there. We just didn't collect it. And then the second insight that I think about is that we don't necessarily get smarter over time. It's just that certain technologies get cheaper. They become more available. So machine learning algorithms have always been around. The data that exists that you could collect has always been around. But it wasn't until the advent of things like the Hadoop project, and the launch of companies like Cloudera and MapR back in 2009.
Starting point is 00:08:05 And it made it affordable for many, many more companies to begin acquiring and storing a lot of this data. I'm actually glad you brought up Hadoop, Christopher, because one of the things that I see a lot in reading about the big data space is a lot of myths and misconceptions around what Hadoop is, what Spark is, because now we talk a lot about Apache Spark, and we have a lot of, at a16z, full disclosure, we have investments in every level of BDAS, the "badass" Berkeley Data Analytics Stack coming out of the AMPLab. Can you talk to us a little bit more about what exactly Hadoop does and what Spark does and how they all live together and then how that actually fits into big data for people who don't
Starting point is 00:08:42 actually crunch those numbers behind the scenes? Sure. I think we can look at it from two perspectives. I think there's a top-down view and there's the bottom-up view. Let me start with the bottom-up view, because that's how technologies always develop. We always build things from the bottom up, and then we realize there's a pattern here, and then we look top-down again. From the bottom-up view, Hadoop is primarily a storage layer. There's HDFS, the Hadoop Distributed File System, and the distinction between that particular file system and other file systems in the past, I think the essential difference, is that it is highly parallel.
Starting point is 00:09:19 So it's a parallel in terms of parallel processing. Yeah, parallel with storage, replication, and so on, so that you can have a lot of resiliency. And then it also is capable of running on commodity hardware. So for the first time, people can afford to buy terabytes of storage, right, and store it reliably and still, you know, pay only a little amount for that. And so just sorry, just to take a step back for a moment, the reason Hadoop and its ilk were able to run on commodity hardware is because the hardware
Starting point is 00:09:49 has gotten cheap enough or because the way that it processes and the way it's architected, it's optimized for that? Like, what's sort of, where does it come from? I mean, it could be the same effect in the end, but I do think it's important to understand what the driver of that is. Yeah, I think it's both. It's sort of a supply and demand thing, right? Sometimes the demand creates supply or, you know, sometimes the supply creates the demand. So, you know, I think you can trace back, again, to companies like Google that started in the late 90s and early 2000s, and that started to use a lot of this commodity hardware. And then also with Moore's law making everything cheaper, you're essentially doubling the capacity that you can afford
Starting point is 00:10:29 every 18 months. So with that, and then with companies that have taken this down this path, proving that there is something valuable about accumulating all this data and making decisions from it. So all that sort of intuition as well as the actual economics of hardware prices going down and the availability of open source projects. I think all these elements come together to essentially create the big data movement. So where does Spark fit into that? So Spark, if you go, continuing with this bottom up view, if you start from the storage level,
Starting point is 00:11:04 And you know that storage is not enough, right? You can just store things. You're not going to get insights out of just collecting them. Exactly. Interestingly, in lots of database implementations in companies, people actually do put data in and never get anything out. Right. So in any computing stack, you need more than just storage.
Starting point is 00:11:23 So you need a compute layer. So the first layer that you're describing is a big data layer. That's how you're describing big data, as like storage. That's exactly right. Okay. So I think right now people think big data is actually about getting insights and analytics out of it, but you're actually saying big data is just getting the data, that many signals, and saving them in a certain place.
Starting point is 00:11:41 For the purpose of being precise, I'm going to slice this up into levels so that we can refer to them more accurately. So at the bottom layer, we've got this big data. And then above that, we need big compute in order to process all of this big data. So the storage layer, the processing layer, and what is big compute? So the first example of big compute you can think of is MapReduce. And MapReduce, I don't mean in terms of the algorithm, but I mean the actual implementation with the Hadoop project, the Hadoop MapReduce.
Starting point is 00:12:11 So that's a parallelized computing system that can take all this data, do some computation with it, and then put it back, and then maybe an aggregation. For example, asking the same question, the example that I gave earlier, how much money did we make off of this widget out of Europe yesterday,
Starting point is 00:12:27 is an aggregation question. And if you have a thousand such transactions, you can do it with one machine. But if you have somehow stored 100 billion of these rows and you want to ask the same question, maybe you have to parallelize it. And that's what MapReduce allows you to do. Unfortunately, MapReduce is actually not designed originally to handle queries. First of all, you only have two functions, map and reduce. So is that the reason or is it because...
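The parallelized aggregation described here follows the MapReduce pattern. Below is a toy, single-process sketch of the idea, not the Hadoop implementation: the data, its partitioning, and the region names are invented, and in a real cluster each partition would be mapped on a separate machine:

```python
from collections import Counter
from functools import reduce

def map_partition(rows):
    # Map phase: emit partial (region -> revenue) sums for one partition.
    out = Counter()
    for region, revenue in rows:
        out[region] += revenue
    return out

def reduce_partials(a, b):
    # Reduce phase: merge partial sums (Counter addition sums per key).
    return a + b

# Two partitions standing in for data spread across two machines.
partitions = [
    [("EMEA", 120.0), ("APAC", 80.0)],
    [("EMEA", 45.0), ("APAC", 10.0)],
]

partials = [map_partition(p) for p in partitions]      # map, in parallel
totals = reduce(reduce_partials, partials, Counter())  # reduce to one answer
print(dict(totals))  # {'EMEA': 165.0, 'APAC': 90.0}
```

Every map call is independent, which is exactly what lets the work spread over thousands of commodity machines.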
Starting point is 00:12:47 Actually, the reason is a little deeper and sort of more pragmatic than that. Interestingly, a lot of people may not realize that MapReduce was designed to be slow. Okay, that is interesting. I didn't know that. Let me unpack that a little bit. So MapReduce, as implemented at Google by Jeff Dean and Sanjay Ghemawat back in the early 2000s, and then they published the work in 2004, that MapReduce engine at Google was intended to do one particular job. And that job was to crawl and index the web.
Starting point is 00:13:18 And when that happens, you know, so Google's approach was to parallelize it over thousands of machines. And so when you have thousands of commodity machines doing a task that may last half a day, the probability of one of those machines going down is approaching one, right? In fact, it is about one. Any single machine could go down. And when a machine goes down, the question comes up, do we start the job over? And certainly, you don't want to have to do that because then it'll never finish. So it's designed in such a way that if any single machine goes down,
Starting point is 00:13:50 another machine can be brought up and sort of pick up where it left off. And hence, it's slow enough to be able to do that. Right. And the way you ensure that reliability is to write down everything, every step of the way. Right. If you do job A, and then you write down the result of job A, and then you do job B, write down the result of job B.
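That write-everything-down discipline can be sketched as a small checkpointing loop. This is a simplified illustration of the general idea, not Hadoop's actual mechanism; the two jobs and the file layout are invented:

```python
import json
import os
import tempfile

def run_with_checkpoints(jobs, checkpoint_dir, value):
    """Run a chain of jobs, persisting each step's result so a restarted
    run can pick up where it left off instead of starting over."""
    os.makedirs(checkpoint_dir, exist_ok=True)
    for i, job in enumerate(jobs):
        path = os.path.join(checkpoint_dir, f"step_{i}.json")
        if os.path.exists(path):
            # Step already completed by a previous (possibly failed) run:
            # reload its written-down result instead of recomputing.
            with open(path) as f:
                value = json.load(f)
            continue
        value = job(value)          # do the work for this step
        with open(path, "w") as f:
            json.dump(value, f)     # write the result down before moving on
    return value

jobs = [lambda x: x + 1, lambda x: x * 10]  # "job A", then "job B"
ckpt_dir = tempfile.mkdtemp()
first = run_with_checkpoints(jobs, ckpt_dir, 4)   # computes both steps
second = run_with_checkpoints(jobs, ckpt_dir, 4)  # replays from checkpoints
print(first, second)  # 50 50
```

The durability is what makes recovery cheap, and also what makes every step pay a disk write, which is the slowness being described.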
Starting point is 00:14:08 What does Spark do differently? Spark takes a different approach. And as I said earlier, it's not that we get smarter. It's just the constraints have changed. Spark's goal is to be able to do a lot of these queries very, very fast. And we've always known, right, independent of the economics of hardware and software, we know that the speed of access to RAM is a lot faster than accessing disk. In fact, from CPU to RAM, you're talking about 40 nanosecond delay.
Starting point is 00:14:40 Sorry, just to be clear, when you say the speed of accessing RAM is a lot faster than accessing disk, you're just talking about how fast the machine can get to the information in memory. Generally, the machinery handles that, but we'll feel the speed. Getting to information stored in RAM is about six orders of magnitude faster than getting to information stored on disk. So Spark's approach is to use memory. Now, Spark, if it was created five or six years before its time, would have completely failed because memory was so much more expensive.
Starting point is 00:15:13 Right, so the hardware constraints were lifted there. That's right. Exactly. And if it's a few years after, then, of course, something else would have come in before Spark. So the timing of Spark has a lot to do with its success. But what Spark does for you is give you very fast query processing that the MapReduce implementation of Hadoop doesn't give you. So that helps us kind of understand a little bit more of the difference between Hadoop and Spark.
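A rough sketch of the in-memory idea behind that speedup, in plain Python rather than Spark's actual RDD machinery: load a dataset once, keep it in RAM, and serve repeated queries from memory instead of going back to disk. The dataset and load function here are invented for illustration:

```python
class CachedDataset:
    """Keep a dataset in RAM after the first (expensive) load, so repeated
    queries run at memory speed instead of re-reading from disk."""

    def __init__(self, load_fn):
        self._load_fn = load_fn  # expensive source, e.g. parsing files on disk
        self._cache = None       # in-memory copy after first access

    def rows(self):
        if self._cache is None:
            self._cache = self._load_fn()  # pay the "disk" cost once
        return self._cache                 # later calls: served from memory

load_count = [0]  # track how often the expensive load actually runs

def load_from_disk():
    load_count[0] += 1
    return [1, 2, 3, 4]  # stand-in for data read off disk

ds = CachedDataset(load_from_disk)
total_a = sum(ds.rows())  # first query triggers the load
total_b = sum(ds.rows())  # second query is served from memory
print(total_a, total_b, load_count[0])  # 10 10 1
```

Spark generalizes this: intermediate results of a computation can be marked to persist in cluster memory, so iterative algorithms and repeated queries skip the disk round-trips that Hadoop MapReduce pays at every step.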
Starting point is 00:15:38 And you're basically talking about the in-memory aspect of it, being able to do things a lot faster. That's right. And what does that give us concretely for big data and machine learning? Right. And that takes me to the top-down view, right? Which is from the top-down, we know that we always want things fast, but we also want them cheap. Sorry, just to be cantankerous here for a second, why do we want things to be fast, actually? Why do we always want them to be fast? Like, what do we actually get out of it? Fast is competitiveness. If you can get your answer five minutes before I can, right? You make decisions and then you make that purchase, you make that buy, the supply, whatever it is, that'll happen before I get there and you win.
Starting point is 00:16:13 Sometimes it's implicitly obvious that we want everything faster, right? Because fast is competitive. But it turns out the difference between being fast and slow is very, very critical. When you can get something in real time or you can get something in five seconds as opposed to five minutes, you will actually change your workflow. You will actually do something. That's what I learned from the consumer perspective with things like Gmail and so on. We had a phrase we called the five-second barrier. And if the user can't get something done within five
Starting point is 00:16:39 seconds, they won't ever do it. It's not like they'll do it at twice the latency. So fast enables new, different use cases that may otherwise not happen. Okay. So that's actually helpful because I think we tend to take it for granted that fast is better. I mean, I know obviously we want that information faster, but you're basically talking about enabling entirely new workflows and use cases. So going back to what you were saying about the top-down approach and where this fits. From the top-down perspective, now we have this big compute layer and then we have this big data layer
Starting point is 00:17:15 so we have the capabilities of actually doing things very fast on massive amounts of data. We can apply algorithms, we can bring all these algorithms to bear on big data and get a lot of insight. But that's still not enough
Starting point is 00:17:28 because we haven't put the human in this picture yet, right? It's still machines. And that's a problem with the bottom-up approach. When you look at the top down, there's a human sitting at the command-and-control or the insight layer, and we've got to make decisions. So far, our industry has not built the bridge from all of this machinery to that human user.
Starting point is 00:17:48 So there's a layer missing. The learning, basically. The learning as well as the application layer, the interfaces, the user experiences. And so all of that put together can be thought of as the big apps layer. So I'd like to think of things in terms of big apps on top of big compute, on top of big data. And when you have these three working effectively harmoniously together, then you have a very, very good big data stack. So, you know, taking a step back for a moment, I mean, it seems obvious why the natural interfaces are pretty important for people to be able to interact with it. I mean, let's face it. Like, the reason we are able to do any kind of computing is because of the GUI, like having a graphical user interface that allows us to not have to see the plumbing behind the scenes. That seems pretty obvious that we need that. So what does that actually get you in the big data world? I mean, sure, you can more easily read your data and get some insights from it. But I just feel like we throw that term around too much that we need a better interface to our data. Like, what does it really get us?
Starting point is 00:18:41 So I think the way to, one way to understand it, is to look back into the past, right? We went from the typewriter to the computer keyboard and to the mouse and now to touchscreens and so on. You could ask the same question, what do touchscreens get us, right? And why didn't we do it before? The reason touchscreens and finger gestures and so on are valuable is because they're much more natural than using a keyboard. But the reason we didn't have that before is because the hardware and the software to make that happen was not available or too expensive to do so. So the same analogy applies with big data and machine learning.
Starting point is 00:19:16 We could imagine all those capabilities before, but they were too expensive. We didn't have the storage capabilities for all of the data, and we didn't have the big compute capacity to do all of this. But now that we do, when they're affordable, what you will see is that all of this machine learning will be a property of every application. What does that mean?
Starting point is 00:19:35 And I've heard Peter say that as well. He makes the argument as well that machine learning will be a property of every application as opposed to a standalone, isolated function? Like, what does that actually mean? Imagine a world where, let's say, you know, we work with a lot of people, and we expect our colleagues to remember what we say and learn from the interactions and so on and so forth. Can you imagine a world where your colleagues are just simple automatons and they don't understand
Starting point is 00:20:05 what you're saying, and, you know, you told them something, and they don't remember it the next day, and their actions don't change as a result of that? Well, I claim that there will be a day very soon when you will feel that about the machines you work with. In other words, you would expect that to be a property of all these machines. I mean, you're right. I think we already do expect it because we carry our mobile phones with us all around and it's frustrating when you have a certain experience there that you can't have with an application you're using at work on your desktop or anything else. So I definitely think you're right that we might already even be there in some ways or that we need to be expecting that.
Starting point is 00:20:42 But what does that really give us? I mean, because when I think about big data, I think about it in the abstract. So it's still not clear to me what machine learning being a property of every application means. So what does that do for us? Let me give you an example by way of a story. There's something called TGIF at Google, right? Which actually we do at our company, Adatao, today as well, which is every Friday, the execs basically come out and talk about almost every company secret possible to the whole company, and people can ask any kind of question that they want.
Starting point is 00:21:14 I remember there was one time when at Google we were dealing with the problem of latency. Google cares a lot about speed. So Larry was pushing everyone to make their services a lot faster. And there was a question people asked, saying, Hey, Larry, we went from a one-second search delay to 500 milliseconds to 300 milliseconds to 100 milliseconds. What do you want? I mean, what happens when we get to zero? And then what Larry said was, why stop at zero? Why can't it be negative latency? And essentially what he meant was, why can't our machines anticipate what we need, what we want to do? Right. I mean, that's actually
Starting point is 00:21:51 not as ridiculous as it may sound, right? Certainly as human beings, we do anticipate each other, right? Maybe if you see that I'm coughing or something, then you just go, you know, help me with a cup of water, right? Right now, I still have to tell the machine. Even if we had a robot today to do that, I'd have to ask that robot to do that. Right, you just specify it. So what we get with predictive algorithms, what we get with machine learning,
Starting point is 00:22:13 what we get with big data. Remember, big data is just life experiences, right? What we get with that is that our machines will be able to learn, right? They will be able to anticipate, they will be able to predict. They will have behaviors that we normally expect of humans, of intelligent beings.
Starting point is 00:22:28 And when we take that to every application, though, because why is it not okay to have it be an isolated, standalone thing, like what do we get out of it when it becomes a part of every application? I think when it becomes part of every application, then every component of the application will be receiving data all the time, right? Maybe the screen is receiving my gestures, maybe my calendar is receiving appointments that I'm making, maybe even the location where I am at, and then they will be able to learn from all of this and make intelligent decisions about what calendar events to insert, what gestures
Starting point is 00:23:04 to accept, and maybe I won't even have to say it. It'll just do that ahead of time for me. So in my view, in that world, things will happen a lot better for me, right? It'll become a lot easier for me to move around. It'll become a lot easier for me to make decisions. And maybe a lot of decisions will also be suggested to me before I even have to think about it too much. A lot of what you're talking about is machines inferring and really aiding, you know, learning like humans and helping augment human intelligence. What happens next? I think that's a great question. I think if you back up and think about human evolution,
Starting point is 00:23:38 there's one variable that's been inexorably increasing. We may get taller, shorter, we may go from one continent to the next and so on. But one thing that's been a single variable changing in one direction, that's human intelligence, right? In fact, the species' intelligence, there's absolutely no reason to think that we're at the end of that. I think we're just at the very beginning of that increasing intelligence. A lot of the things that we're learning about machine learning itself, I'm really excited about that, right?
Starting point is 00:24:12 If you look at the research in deep learning, what's happening there, really just in the last 12 months, 24 months, to me, the exciting thing is that we're learning so much about how our brains might work. It's not just what the machines can do, but what they teach us about ourselves. And so if you think about it from that perspective and think about how these algorithms are evolving, you actually see this very near future where human intelligence is going to be boosted
Starting point is 00:24:37 by all of this machine intelligence. That will actually change how we think about evolution. It's interesting, as people sometimes treat deep learning as discrete from machine learning, but you're basically putting it on the same continuum and saying it's just more machine learning? Yeah, and I think deep learning just happened to be one moniker of today, but it is a very important one
Starting point is 00:24:56 because it's showing some of us glimpses of the future. more so than at any time in the past. And I think that's the exciting thing. And what we're doing coming back is the software that we're building is essentially machine intelligence, aiding human intelligence. Today, I would say, in very primitive ways. I think it's very helpful to enterprise, but we're just at the beginning of it.
Starting point is 00:25:21 In the next set of products, we're going to be building in deep learning capabilities. We already have machines inside the company that can talk to each other. It's happening a lot faster than people are realizing. And I think I see it as our job to make sure that we as a human species continue to leverage that power as opposed to maybe one day being subjugated by it. No, totally. Well, I mean, just one last question then. Concretely, what do we get out of it? I mean, it's interesting academically.
Starting point is 00:25:49 And clearly it's interesting beyond academically because companies are investing in it left and right. In fact, more so in the corporate sphere than even in the university sphere. But what do we get out of that deep learning? Like what concretely comes out of that? I'd like to think of it in two ways. And I think they're both concrete, but perhaps one is more concrete than the other to some people's views. Certainly, companies are helped when they have more intelligence about their data. You know, people talk about in the past, you didn't even know what was going on in the company,
Starting point is 00:26:20 let alone make a decision based on it. We're coming to an age where you know what's going on, and the machines are also helping you make decisions, right? And so what you get out of it is competitiveness. Companies that invest in this and are good at this, that are data-intelligent, that are data-driven, will win. That's a competitive edge. That's inevitable.
Starting point is 00:26:39 But I think the larger picture also is that as a species, we're explorers, right? It's built into our genes, and you can count on that as being inevitable, right? Left alone, we'll figure out that these are exciting frontiers that we will explore. We will always want to build intelligence. We will always want to build images of ourselves, if you will. Maybe the intelligence that emerges would not be the same as human intelligence, but we will attempt all of this.
Starting point is 00:27:07 And it's just like space exploration, right? This is exploration of the mind. Using the computer. That's great. Well, thank you, Christopher from Adatao. And that's another episode of the a16z Podcast. Thanks, everyone.
