The Changelog: Software Development, Open Source - Celebrating Practical AI turning 100!! 🎉 (Interview)

Episode Date: August 21, 2020

We're so excited to see Chris and Daniel take this show to 100 episodes, and that's exactly why we're rebroadcasting Practical AI #100 here on The Changelog. They've had so many great guests and discussions about everything from AGI to GPUs to AI for good. In this episode, we circle back to the beginning, when Jerod and I joined the first episode to help kick off the podcast. We discuss how our perspectives have changed over time, what it has been like to host an AI podcast, and what the future of AI might look like. (GIVEAWAY!)

Transcript
Discussion (0)
Starting point is 00:00:00 Well, I think AGI is... a lot of people just think of some type of singularity and Terminator incident, where now we cease to exist because AI has crushed us into the ground and we're no more, which I think is, to me, inconceivable. But, you know, whatever. He's so bullish on it being inconceivable, though. Like, he's dead set. He's sure of it. Challenge him on it, Adam. Challenge him. I just don't know, man. Like, I just...
Starting point is 00:00:27 Yeah, I don't know if I agree with that. I'm on the inside, but this is just my own opinion, right? It's at least, um, I think, inconceivable that it would be sometime in our near future, or my lifetime. To me, it's inconceivable. But it depends so much, though, on how you're defining that, which I think is important when we define what AGI is, because you can define it a couple of different ways, and I'll give you completely different answers about what my opinion is. Bandwidth for Changelog is provided by Fastly. Learn more at fastly.com. We move fast and fix things here at Changelog because of Rollbar. Check them out at rollbar.com. And we're hosted on Linode cloud servers. Head to linode.com slash changelog. What up, friends? You might not be aware, but we've been partnering with Linode
Starting point is 00:01:13 since 2016. That's a long time ago. Way back when we first launched our open source platform that you now see at changelog.com, Linode was there to help us, and we are so grateful. Fast forward several years now, and Linode is still in our corner, behind the scenes helping us to ensure we're running on the very best cloud infrastructure out there. We trust Linode. They keep it fast, and they keep it simple. Check them out at linode.com slash changelog. What's up? Welcome back, everyone.
Starting point is 00:02:00 This is the Changelog, a podcast featuring the hackers, the leaders, and the innovators in the world of software. I'm Adam Stacoviak, Editor-in-Chief here at Changelog. Today, we're celebrating 100 episodes of Practical AI. We invited Daniel Whitenack and Chris Benson, the hosts of our show called Practical AI. If you haven't heard of it, you should listen to it. And we are so excited about 100 episodes that we invited them on this show. Technically, they invited us on their show. Well, technically, we crashed their party, so there's a lot of "technically" there, but we crashed their 100-episode party, and we're rebroadcasting that episode right here on The Changelog for you to listen to.
Starting point is 00:02:31 Oh, and by the way, we have a giveaway as part of this episode. It doesn't end until September 1st. You have time, but check for a link in the show notes for details, and good luck on winning. Well, welcome to another episode of Practical AI. This is Daniel Whitenack. I am a data scientist with SIL International, and I'm joined as always by my co-host, Chris Benson, who is a principal AI strategist at Lockheed Martin. How are you doing, Chris? I am doing very well. I think it is a lovely day to talk about AI. Oh, it is. It is a beautiful day to talk about AI. And I was thinking we've
Starting point is 00:03:11 kind of done a lot of these practical AI episodes, haven't we? I think we have. I think we're actually hitting a milestone possibly at some point here. Hey, what's up? Hey, who is that? Wait, did we not lock down our Zoom? Oh. Who is this? Who has the password? Surprise, guys. We're here.
Starting point is 00:03:29 Ah. What's going on? What's going on? Oh, my gosh. Busting in on this party. Yeah. Oh, it's Jared Santos and Adam Sokoviak of The Change Log, who are kind of our bosses, you know, Dan?
Starting point is 00:03:41 Oh, gosh. Please don't call us that. Are we in trouble? No, you gotta be careful of what we say. What's going on? Well, we're here to say happy 100th episode, guys. Yes. What a milestone.
Starting point is 00:03:53 It is. Can you believe we've done that many episodes? Yeah. A hundred episodes talking about AI. Who would have thought there'd be that much to talk about? True. Well, podcasting is hard. Doing a hundred episodes is harder. So seriously,
Starting point is 00:04:06 congratulations for sticking it out. And not just sticking it out, but like thriving. Congratulations. Thank you very much. Yeah, it's been a lot of fun. It's been a long time. So listeners who've been listening all the way since the beginning may recall that Adam and Jared actually interviewed Daniel and me in episode number one as we got going. And they finally come back. They've been here all along, but now they're back. Yeah. We're back, baby.
Starting point is 00:04:30 So the team is all in line together. We've gone full circle. We're silent behind the scenes. Yeah, yeah. You're always around. But what was the date of that first episode? 2018, July 2nd. So it's been a little over a couple years what's funny though is the
Starting point is 00:04:48 recorded date is april 20th 2018 so i guess we took a little while to ship that first one's the hardest is that accurate jared do you think yeah probably wow what would happen there maybe we just didn't know how to tag them correctly at that point in time that could be the case as well i don't know well there was a lot of stuff figure out, like how to do the intro, what music to use, etc., etc. Mm-hmm. Yeah. Plus, we were also launching
Starting point is 00:05:10 other things at the time. Yeah. True. Regardless. And we had to figure out if this whole AI thing was just a fad or not, you know? That's true.
Starting point is 00:05:19 Well, so what's the consensus? Is it a fad? Is it over? Oh, it's definitely a fad. Definitely a fad. Yeah. Yeah, it's it's definitely a fad. Definitely a fad. Yeah. Yeah. It's going away.
Starting point is 00:05:28 Yeah. I think it's sticking around. I think, you know, I don't know one episode that we talked about this and maybe I mentioned it a couple of times. I'm kind of thinking of it at this point, like another layer in the software stack. So just like, you know, developers don't necessarily have to be like into all the weeds of DevOps and like CICD and that sort of thing. But if you're going to be a developer these days, like you're going to sort of interface with DevOps and CICD at some point. So I'm kind of started viewing it that way, where if you're a developer these days, if you're not an ai practitioner you're gonna sort of interface with ai related something at some point what are like
Starting point is 00:06:10 the most bucketized or verticalized aspects of ai that are like in products today i know there's computer vision there's natural language processing sentiment analysis like what are the ones that people are using where it's like yeah this is kind of a little niche inside ai this little niche or a product that people are actually putting into play yeah well those those are certainly the biggest two right there and in terms of commercialization you know showing up all over the place there's definitely a lot of areas even of computer vision so i would say like certain things like facial recognition or object recognition are fairly commoditized in terms of like major, major categories of detection and that sort of thing. So my wife and I just bought a building for her business and there's no internet there
Starting point is 00:06:56 yet. So we bought these like little deer cam things like trail cams that like take snap a picture of something walks by. That's my security system right now. And they have an option. It's like AI integrated. Like if a deer walks through the like zone of the camera, it'll identify that it's a deer and like ping your phone
Starting point is 00:07:14 versus like a turkey or I don't know, other things that people shoot, I guess. They're great. I use those to catch lost dogs around here because, you know, we do all this little animal stuff. Ah, interesting. That's a better use maybe. Yeah. I hate to burst burst your bubble daniel but deer are not dangerous man you don't need to protect yourself from deer yeah i've even so far caught other things that i wish wish wouldn't
Starting point is 00:07:35 have been in our building but that's another story i'm just envisioning the deer coming in from the skylight in the tom cruise fashion, you know? Yeah. Done. Maybe so. Yeah. Oh, yeah. I should mention my side project then. Yeah, go for it. Please do. So I live in a cul-de-sac and I've got some cameras outside my house. So I've fed all my neighbors' faces into a little system
Starting point is 00:07:57 and then I can detect whenever there are people in my cameras that are not those people. Yeah, exactly. Interesting. I'm kidding. I want that. I can't make that, but. Yeah, exactly. Interesting. I'm kidding. I want that. I can't make that, but that would be cool. Interesting.
Starting point is 00:08:08 Oh, you could totally make that. Do your neighbors know you fed their faces into a system? Jared's like, dang, dude, you're awesome. I thought you were saying you had done it, and I was like, I'm not telling them. Keep it quiet. I'm on a cul-de-sac too, but I don't think any of them are listening to this.
Starting point is 00:08:21 Wouldn't that be so cool though? That would be pretty cool. That would be cool. Would you have to get your neighbors' permission? I mean, it's not that hard to build. Yeah, you should go for it. It doesn't take that many photos to do it well. So I'm just looking at, for example, like, I mean, Google Cloud, AWS, like all these platforms have different things. So they have like categories like in Google Cloud, they have site, language, conversation, structured data and and cloud AutoML.
Starting point is 00:08:46 So site, like these object detection sort of things. Language, of course, there's translation is, you know, very much used these days. But also like transcription, speech recognition. And then there's like conversation type stuff like, you know, chatbots and you're saying sentiment analysis right so those are all definitely pre-built solutions for all of those things out there for sure can you tell that daniel is a natural language processing expert you know he just whipped those things right off the top of his head like it wasn't even he didn't have to think about it let me flip this question on its head then what are some untapped areas or some emergent areas
Starting point is 00:09:27 where people aren't quite putting this into play yet, but it's going to be big once it reaches developer, the masses? Yeah, first thing that comes to me is adversarial networks, which I know we recently had an episode on, but there are so many uses for adversarial situations and GANs, generative adversarial situations and uh you know gans generative adversarial networks gans so what are some uses of gans that we could look forward to deep fakes deep yeah that's the most famous aren't those here though are they're just not that easy
Starting point is 00:09:55 to do i think what chris is meaning is good applications of gans probably i was going for good applications but you went you went right to it Jared. You went right to the nefarious thing there. And we're certainly seeing that. There's some cool things, though. People are using these generative networks to generate almost in a data augmentation sort of way. So particularly like, for example, in healthcare, if you need to generate a bunch of example, like medical imagery of tumors to help you train other models that discriminate between cancer and non-cancer, it's hard to get your hands on that data because it's all, you know, very private, there's simple laws, all those things. So if you have a small amount of it, and then use one of these networks to sort of augment your creation of other data,
Starting point is 00:10:43 that can really be a benefit. I'll also say, you know, just because I'm always the language related person, you know, that even though language and conversation is on this sort of list on the major platforms, you know, almost like 98% of languages of the world have no support. So I think also, you know, as the benefits of AI extend to emerging markets in the developing world, there's going to be like new applications or new challenges in language, but also there's going to be other new challenges that are probably hard to anticipate because most of AI development has been fairly Anglo centric. So when you're solving problems in other contexts, um, you know, I think there's cool stuff going on in like
Starting point is 00:11:31 agriculture. Um, like even though like we're a lot of times when you think of AI in the U S we might think of like driverless cars or some like cool tech thing like that. But like for a lot of the world, like agriculture is a big deal. So like cool new stuff in agriculture or other areas as AI kind of gets hopefully democratized and the benefits of it extend to more than just, you know, the US and Europe. Is there anything like a, like a Hippocratic oath for those in AI? Because I'm thinking like, again, can be both good and evil, right? So is there anything moral compass when you think, okay, I'm going to be a practitioner or an expert or, you know, somebody doing these things, using these for good. Like I can see that tumor detection, totally a good reason to manufacture, for lack of better terms,
Starting point is 00:12:19 data to support medical, you know, research and whatnot, given the privacy of medical records and individuals and things like that. But is there anything out there like a Hippocratic Oath for AI? There is a rapidly developing entire professional field that is addressing that, which is commonly called AI ethics or responsible AI is another term for it. There are several terms that lend themselves to it and they're tied into other terms like explainable AI. And so, yeah, there is definitely the recognition, you know, going back, Adam, to your determination to capture all your neighbor's images and enter them into the iPhone. Yeah, I will do that with their knowledge.
Starting point is 00:13:00 I will literally go to them with my iPhone and say, this is from my neural network to detect to detect you and not the in quotes bad guys or bad people that come into our cul-de-sac like so some realness behind that our neighbor down the street like we haven't we live in a decent neighborhood and our neighbor down the street got their tires stolen two days ago wow wow legit like he went out and his truck was on blocks. That's disturbing. Yeah. You know what I mean? Like we have these things happening and I think it's just a result of the economy. Honestly, I don't think that people are generally that bad. I think that people get desperate and these are things that happen in desperate times and the economy is definitely suffering. So you have people that are willing to do things that maybe they're not more willing to do otherwise. But I'm not saying thieves don't exist any of the day, but I think it might be a normal
Starting point is 00:13:48 occurrence or a more common occurrence. And I live in Houston, so it's a pretty well-populated city. Yeah, you know, if you kind of take that condition and you put it in the context of AI, we don't have safeguards. So we have this field that's developing called AI ethics, but across the world, we really have very little law or regulation that people are required to follow. So for the most part, it is other laws that are not specific to this that have applicability. You know, GDPR is one thing that comes to mind in Europe, and that is really the only, you know, national level law that I can think of off the top of my hand. Am I forgetting something, Daniel?
Starting point is 00:14:30 I know California has some laws. Yeah, there's some other regional things. And I think probably there's some others that exist now. But I think in general, people are still reliant on developing good, like, principle statements and how that trickles down to the actual development workflow like google and microsoft ibm all these companies have developed their sort of ai principles and that may or may not trickle down to the actual development work also i think there's still a perception that like any sort of like when developers hear governance or like you know that sort of thing
Starting point is 00:15:06 it's just an assumption that it's going to slow down all work until like we can't do anything and so there's still like that feeling i think exists there although people would acknowledge that it's like there's some important problems to address you know what's funny i was on a call where i was having a conversation with some folks from the World Economic Forum about a week ago. And we were talking about this a little bit. And there's so many companies putting out so many principles that it's becoming cluttered and stuff. of AI ethics and maybe this is a moment where before you go create your own, you do a little soul searching in your organization and go pick one of the 100 great examples that are already out there just to make it a little bit easier for people to keep track and stuff. So, I mean, it's definitely a growing field, but it has a long way to go to mature. It's funny, Daniel, I was listening to your guys' AI ethics episode, which was just recently, and I very much identified with you when you said that,
Starting point is 00:16:08 because Chris, you light up on this topic. I can tell you're into this. This is where you play, thinking at this level. And Daniel feels, as a listener and a producer of the show, I think more like boots on the ground,
Starting point is 00:16:20 slapping the code against the data kind of a person. And he's like, when I hear governance or I hear these things like he kind of resonated with what he just said. And I was with him. I'm like, my eyes rolled, they glaze over. I'm just like, oh, here we go again.
Starting point is 00:16:34 Yeah. And I think it's what people don't realize, though, is and I don't think it's immediately obvious that in O'Reilly, I forget. I forget who wrote the post, maybe I can find it and put it in the show notes. But they wrote a article about like, doing good data science, quote, unquote, like ethical data science actually, like allows you to do better data science. And I think that that like, is generally true in the sense that, you know, part of the whole like governance aspect is like understanding like what data produce what AI model that produce which predictions at what time.
Starting point is 00:17:15 Right. And if you actually know those things, your development workflow can be sort of supercharged because you're not duplicating your effort. You know what you did in the past. You can do some, you know, analysis to figure out like, you know, what parameter space you've explored and what like issue popped up when you did what. And so there is like a lot of benefit there. And I think that's why certain of these practical tools plug in. Like we had an episode with Allegro AI recently, and they have open source project called Trains, which helps you track a lot of these things.
Starting point is 00:17:47 And there's other projects like Packaderm, which is an open source project, helps track some of these things. So I think there is developing some tooling around it. It's still not like super streamlined. So there is like still when you when you're, you know, on the ground doing the development, like it's not just like there's no road bump when you try to integrate these things. Everyone has their own approach to it and their own solutions.
Starting point is 00:18:13 There's no standardization yet. We're way before that. It's in TensorFlow, but once again, that team has done, and I haven't used it, but presumably a really good job, but it's their thing that their community is for so it'll be interesting to see how this rolls out in the years to come. It's kind of like a code of conduct, similar concept
Starting point is 00:18:31 right like you need one, you should have one. If you don't enforce it, it's useless. If you don't have the tooling to help you enforce it, then it's difficult to enforce. They're hard to write so you usually start with somebody else's. But if you don't internalize that somehow,
Starting point is 00:18:46 then you just cargo-culted somebody else's values and they're not yours. And so there's lots of things there. I'm curious how the state of explainability has moved since you guys started the podcast, because that's one big aspect of this. The scary, hard thing about machine learning and whatnot is the fact that you're basically putting data into a black box,
Starting point is 00:19:04 training a model, and then out comes your response. Whereas there's no, they're not necessarily a policy that created that response. It was a bag of bias or not bias or whatever you put in. With algorithms, we can write an algorithm and you can go back and say, who wrote this? And you're like, I wrote that. Why'd you write that? Well, because my boss told me, why'd he do that? Oh, here's the policy. This is a bad policy, right? And it had these bad effects with these things. It's non-deterministic. You're like, I don't know how we got this result. It just happened. But as you talked about, there's progress on explainability, which is great, both as an end user. Why are you showing me this ad? Daniel, you interviewed Darwin AI recently. What was your takeaway?
Starting point is 00:19:43 Yeah, that's definitely a good episode to listen to. There's some things happening there in terms of explainability, but I think that there has been some progress. I am hesitant to say that I feel like there's been a huge amount of progress. There's still a lot of open challenges and open questions. There is more like organized information now, though, I think I'm thinking of there's a book which you can actually read online. It's called Interpretable Machine Learning by Christoph Molnar. And you can just read through the entire book. And he talks about like all sorts of things from, you know, counterfactual explanations and adversarial examples. And, you know, we just had a previous show on adversarial examples. And so there is like gradually more tooling and
Starting point is 00:20:33 organized information out there. But I don't think there's a consensus on this subject either, like how to approach it. But there's just like a series of tools in the toolbox that you can use and maybe not. You know, people are still developing those and still actively researching them. And it depends, too, a little bit on the like type of data and model and such. So when certain things happen, like, you know, there's like a model that enhances an image and you put in like, you know, Barack Obama's image and then the enhanced image turns out like he's a white guy. And there's like, this like blows up on Twitter and stuff that motivates
Starting point is 00:21:10 a lot of like, in computer vision, I think they've been struggling with this for quite some time. In other areas, like natural language processing, it's probably a little bit newer. When dealing with application performance, you ask questions like which endpoint is triggering memory increases? Are slow requests to a particular endpoint cascading to the app? Why is this endpoint slow for only a handful of users? Has this performance pattern occurred before? What changed since the most recent deploy? Those are just a few questions
Starting point is 00:21:55 easily answered by Scout APM. Scout is application monitoring that continually tracks down M plus one database queries, sources of memory bloat, performance abnormalities, and a ton more. Thousands of engineers trust Scout APM to help them uncover performance issues you can't see in the charts. Learn more and get started for free at scoutapm.com slash changelog. No credit cards required. Again, scoutapm.com slash changelog. So let's talk about what has changed since that 2018 April to July timeframe,
Starting point is 00:22:46 episode one through episode 100. I've been listening, I've been producing alongside. So I do know one thing, I'll cue you up, Chris, because I'd love for you to say this again, or say it to me. One thing you've seen and you've said, which I'm not quite sure I know what you mean by it, is that you think that we're moving into a post deep learning era, or not post, but beyond deep learning era of AI. Can you say what you mean by that and explain it to me? Yeah. And it's actually on my mind a lot these days. It's been on my mind today before we started talking about this, you know, before we did the show, because I'm having conversations with other people about the same topic and to set it up. If you look at these last couple of years and when and when we started the show, we were still in kind of the ramp up stage, we're deep learning. I mean, there was stuff just coming out every day,
Starting point is 00:23:31 because everybody was finally setting their focus to it, it was being funded. And we were getting some pretty amazing stuff coming out on a day to day or weekly basis. And in some of those early shows, you know, Daniel and I, we would struggle to figure out which news items to include in the show and which ones to just not mention because there just wasn't the time. So it was very exciting, very Wild West, you never know what's going to happen next kind of moment. We've matured since then. And so it's definitely, we kind of went through the kind of the rise of computer vision and all its various things.
Starting point is 00:24:05 And for a while, there was a new algorithm every week coming out in that space. And then we transitioned into the NLP period where we had tremendous progress there. And Daniel has been right in the middle of that. I've learned so much from him. But we're also, we've matured quite a long way in a fairly short amount of time. And one of the things I'm noticing is we're still having things like, I mean, GPT three just came out recently and that was a big deal,
Starting point is 00:24:29 but I don't see the, the like every week things happening. And so because we're seeing kind of an evolutionary progress, there's a lot of people in this space that are starting to say, have we mined this for all the, the big new things or new things, or maybe most of them? I don't think we're there yet. But are we getting to a point where we've had some maturity and we're having lots of kind of just evolutionary improvements in the current space of deep
Starting point is 00:24:56 learning? And because of that, a lot of people are turning back to kind of AGI, you know, which is artificial general intelligence, the idea of the AI that can kind of do multiple complex things instead of this simple AI that we have, the narrow AI that is very good at doing one particular task. And so there's talk of something that if you had said this to me two years ago, when we started, I would have laughed. I would have, you know, like, there's no way. And that is the idea of an AI winter at some point down the road. I don't think it's quite that because it may be that the research being in a more, in a slightly more mature area is focusing on lots of evolutionary things, but we probably have quite a ways to go before AGI. So I think we might have a lot of commercialization over the next few years about what's already out here, because there's tons of industries that can use what's already been created and discovered. But how far are we from the next major step up revolutionary wise, because I think what a lot of people mean by that doesn't like it doesn't mean what they think it means.
Starting point is 00:26:12 What does it mean and what do they think it means? of like some type of singularity and like terminator incident where like now we cease to exist because ai has like crushed us into the ground and we're no more sort of thing which i think is to me inconceivable but okay whatever he's so bullish on being inconceivable though like he's dead set he's he's sure of it challenge him on it adam challenge him i just don't know man like i just yeah I don't know if I agree with that. I'm on the inside, but this is my own opinion. Right. It's at least, I think, inconceivable that it would be sometime in our near future or my lifetime. But to me, it's inconceivable. But it depends so much, though, on how you're defining that, which I
Starting point is 00:27:02 think is important when we define what AGI is, because you can define it a couple of different ways. And I'll give you completely different answers about what my opinion is on those. And I think this comes back. We had a conversation about the NeurIPS, which is a big AI research conference. And in the keynote, they were talking about AGI. And they were talking about it in much more terms. I could grasp like certain things, like the architectures that we've been seeing in, in natural language processing that involve
Starting point is 00:27:30 attention and self attention, and models that actually go beyond sort of fitting parameters, but actually paying attention to certain pieces of data more than other pieces of data. And they tied that into saying, okay, well, that's a, like, that's an advancement to generalization. And I can like latch on to that, that seems like some interesting steps there. And they tied that into saying, okay, well, that's a, like, that's an advancement to generalization. And I can like latch on to that. That seems like some interesting steps there. And so, yeah, I think it does depend on how you define it. And to Chris's point as well, I think that, you know, people are going to start trying new sorts of things. And I just looked up Sasha Rush. We had him on the podcast from Hugging Face a couple of weeks ago. And he was one of the organizers at iClear, which is another one of the huge AI research conferences.
Starting point is 00:28:11 And he posted a graph like he graphed over the last two years. So while we've had the podcast, the keyword growth for certain topics in research over that 2018 to 2020. And I was actually somewhat shocked. So the top growth was a thing called graph neural networks, which is a new way of constructing neural networks to work on graph structured data. And so I think that that's an indication that people are like, they've pushed the sort of architectures that people are used to pretty far. And they're searching kind of for something new and exploring new areas and new like types
Starting point is 00:28:53 of structured data, new types of data, maybe multimodal types of data where multiple different types of data are linked. So that's really what I think of when I think of people like pushing the boundaries of it. Yeah, but it's deep learning specifically. If you think of AI as a broader collection of technologies that are advanced, you know, these are more deep learning things. And I think some of the other luminaries in this field talking more about what AGI means to them and, you know, what they think about that and stuff. And then you kind of have the whole commercialization thing going. So there's a little bit of divergence in some areas between the research community and the commercial interest, the commercial community. Because we're seeing deep learning models and architectures deployed in many different areas.
Starting point is 00:29:40 But that is separate from what leading researchers at Google Brain or Open AI or something like that would be focusing on. So you're seeing people kind of cash in on where we're at, which is good. And I think we'll see it everywhere. I mean, just pervasively, but that's different from where things are going in the future. And, you know, it also, going back to the AGI point, one of the things I think that I've learned in this two-year period is I think I was unclear on the idea that if you were to look into this somewhere down the road future about AGI, it comes about, and obviously we're a long way from that. But if at some point, I think people confuse whether or not consciousness is part of AGI or not. And so I think we'll get to tremendous
Starting point is 00:30:23 levels of intelligence that has no sense of self or consciousness many years before any breakthroughs on the ladder on consciousness. So we're already seeing models that in very specific areas if you're defining intelligence as a form of computation, being able to solve a problem, then we're much farther down that pure computational road than we are. You can have an amazing model that can outperform any human, but that doesn't mean that it's thinking, wow, I wish I had a cup of coffee right now. For sure. That's a big leap right there. Self-awareness is a key ingredient missing. I'm not sure that's clear in most people's minds, because if I have five different conversations on this topic with five different people, their idea of what they're starting from,
Starting point is 00:31:15 all five can be very, very different. Yeah, and maybe a good way to put it in one of the reasons why it's inconceivable to me is because like the way that we're approaching this path to advancement in AI like we're going to destroy the planet way before an AI singularity like happens there was a study it takes like we release into the atmosphere as much carbon as five cars during their entire lifetime when we train one of these state-of-the-art NLP models just once. So I think if we were to say, oh, let's just like keep rolling with this,
Starting point is 00:31:52 like we're gonna have to be living on Mars by the times that the AI singularity happens. So that's like a whole nother strain of things where it's like, can we actually solve some type of the computational issues that are happening with AI? And we're going to have to like, we can't just keep doing things status quo. We have to figure out like more efficient and like creative ways of doing what we're
Starting point is 00:32:13 wanting to do, I think. You know, to your point there, when we do get to that point where the singularity does come about and it's self-aware, it's going to start off depressed as hell. You know, we'll have destroyed the planet. It'll be like, why did I bother? It'll turn itself off. Why did I wake up to this? I'll just turn itself off. I'm way too, I'm using way too much energy here. And has anyone noticed we have a princess bride thing going here? We keep saying the word inconceivable. That's true. And we were saying, I don't think that thing means what you think
Starting point is 00:32:42 that thing means. I was hoping that someone would do some other reference after I said that, but I wasn't. There is a Princess Bride theme going in this episode. Anybody want a peanut? No, that was too non sequitur. Let's talk about GPT-3 for a minute just to completely, it's not actually a hard shift because I think it plays into this concept that you're referring to where it's like a plateauing and a question that I have because what happens with us who aren't deep in the field like you all are is every once in a while we just get impressed with the results of some sort of new technique or model and it has been a while
Starting point is 00:33:14 so i've definitely seen the slowing for me where it's like i feel like i've seen a lot of the things and it's like yep i've seen that before that's cool like different applications i think the color transformation stuff is really cool right with style transfer yeah super cool like the oldify yeah exactly so you're just seeing like well let's take that and apply it to x y and z and then every once in a while you get super impressed and gpt3 just made a splash this summer on kind of like the tech twitter and the vc twitter and like in our space by generating some pretty realistic and even tricky sentences and phrases and blog posts even.
Starting point is 00:33:51 And I don't know what GPT-1 or GPT-2 were. I don't know what GPT-3 is. I know it's open AI. I know it's the first time I've signed up for an AI-related beta. I'm like, I want to play with this. It's that. But is GPT-3 kind of like
Starting point is 00:34:07 the last or the next phase is it an evolution of what they've been doing is it a brand new thing tell us your guys's take on that thing Daniel's our NLP expert so I'm gonna throw this to him so I think it's a an amazing achievement and like incredible results and also like people just being really creative in their use of it i don't think it's like a fundamental paradigm shift in terms of like how we're doing modeling necessarily and one of the reasons that they listed in not releasing the model publicly was the fact that like it requires an extreme amount of like compute to make it run like efficiently and good, which people just don't have access to like normal people, maybe like us on the call. I don't know if I want to call us normal, but speak for yourself and open AI people. It's like other than everyday people.
Starting point is 00:34:58 Yeah, it is an amazing achievement, both computationally and scale wise and, you know, what they had to do to achieve that scale and all of that. But I think it is in a sort of paradigm shift in the way are training on huge amounts of data using sort of multitasks or arbitrary tasks to help the model learn about language, like filling in words or question answering or sentence completion or character completion or predicting next sentence and these sort of tasks that people don't really care about that much but are used to train these like large models that are then able to be fine-tuned for very many tasks that may be unrelated like you know uh translating code from one programming language to another or you know uh generating like uh understanding queries to
Starting point is 00:36:07 generate you know front-end components and that sort of thing i know those are a couple things that were shared in the changelog slack so yeah yeah could you maybe hypothesize about what's after this first magic trick so you mean for gpt3 exactly like you know so that i say that's the first magic trick to sort of like introduce this new you know generation of language prediction model and what it can do like it's very impressive that that that blog post was generated and then i think like on some ethical level like do i trust that blog post less because it wasn't actually generated by a human so why do i just trust humans so much more than let's say anything else that's an aside but i think like okay this is the initial magic trick
Starting point is 00:36:50 what are the actual applications of it what's what's beyond this blog post for gpt3 yeah for me it'll i'm mostly interested in how people go about like interacting with this model because the standard has kind of been in the past that you know people train a model and then at some point like a serialized version of that model is released and you can load it into your own code and do like nifty things with it that's you know not going to happen in this case for reasons that were on purpose that they are releasing this via API and they're running it internally. They can shut it off when they want to. There's interaction patterns that are governed by that API, which aren't just kind of you can do whatever you want. And so I think it'll be interesting to see like, you know, how that
Starting point is 00:37:41 influences how people are using it obviously as you've already seen there's a lot of creative uses already but i think that i know why you know these reasons why they did this and you know that's within open ai's set of ai principles right but it also somewhat constrains or puts some boundaries i think on, on how you're going to use it. And so I'm mostly actually interested not so much in the application because I think it'll be things similar to what we've seen in the past, maybe just leveled up a notch, but more how people figure out ways to use the API or develop workarounds or creative uses and that sort of thing. So the workflow-wise, it's very different. Back to the inconceivable part, though,
Starting point is 00:38:33 this shows some level of fear from humans to non-human in this case, like the machine, so to speak. There is some sort of fear. Maybe it's a fear of how other humans will use this. Yeah, that's it. But the governance and the API and the restrictions definitely shows either responsibility or fear. I'm not sure. Maybe it's both. Well, it's a tool, right? And any tool can be used for evil or good.
Starting point is 00:38:56 Yeah. And we've had this conversation a bunch of times on the show in that like every technology, people always take technologies and most people use them for good ends. And there's always a handful of actors that use it for bad ends. And the worry here, obviously, is especially if you combine this, you know, like we're talking about GPT-3. And first, it's been done. So, you know, people know, even if they don't have the details, they now have seen it. It will eventually get out there, whether it comes from its point of origination or whether it comes from some other group that says, oh, I think I could do that as well.
Starting point is 00:39:32 I've seen the results. I have a pretty good sense. That's possible down the road. And we have seen that with previous models coming out and then other people kind of piling on. But if you combine that with, obviously, other aspects like gendered adversarial networks, and we're talking about deep fakes and stuff like that, then there is room for bad actors to do stuff. And so that is a concern. So, you know, when you're detecting that, I think it's less
Starting point is 00:39:53 about, you know, it's not that we're worried that we're about to hit the singularity and, you know, what do we do about these machines? I think it's more a question about what do we do when the criminal mind out there or something decides that it's commoditized to the level they can use it effectively and they they decide to deploy it and we have to we have it's part of the process we got to sort that out i mean that fear element to me it comes down to a couple different pieces it's one that like this is trained on so much data in a sort of programmatic way that like when you scrape 80 million websites, like you don't always know like what's going to happen if you train a model on that and what your output is going to be.
Starting point is 00:40:32 So there is this element of like, how do you probe all of the unexpected things that could be output from this model? That's a hard piece. The other thing is just like, I think, you know, yeah, any tool can be used for good or bad. But like, if you think of different tools, like, like, we can all go get a hammer from the hardware store, right? And I can choose to like, you know, hang a picture on my wall with that hammer, I can choose to hit someone over the head with a hammer, but everyone can go get a hammer. Don't do that. Like that's, that's like democratized. I can go get a hammer. I can go get something else to hit you with. Right. Um, but with,
Starting point is 00:41:09 with these sort of AI technologies, if like, for example, China is using these sort of computer vision technologies to detect a person's ethnicity from a street camera based on their walking gait to determine if you know they're part of a minority population called Uyghurs and so that they can track them and put them in concentration camps and those Uyghurs they can't go out to some store and get that same technology like there's such an imbalance with this technology and who has access to it and who has the compute, who has the money to run the compute, who has the facility to get the data, that the imbalance of power is really emphasized for these technologies in particular, I think. And there's a good point there when you talk about scale is that, you know, to your point there, it's a powerful tool
Starting point is 00:42:03 that is powerful enough to where nation states are very, very interested in this. And they put a lot of money and effort into this. And that just amplifies how things can go off the rails. It's one thing if I get a mean streak in me and I go out and use a tool to go do something. But when people do it at that level, it's one of the challenges of our time. So, I mean, I think it's not unique to AI. It's any tooling, but it's going really fast in general.
Starting point is 00:42:33 If you look at the fact that over just a decade, we've made profound advancements in this area that are usable. So this is part of society. This is part of culture. We have to get solutions to this and it's not going to stop anytime soon. So rather than be afraid of it, we just need to be focused on answering it effectively. We need great minds to put their attention on it.
Starting point is 00:43:01 What up, nerds? Adam Stachowiak here, editor-in-chief of ChangeLog. We're beta testing a membership program around the ChangeLog and our other podcasts. And we think it would be really valuable to you and the whole community. We call it ChangeLog++. And it's the best way to directly support this show and all the podcasts we produce here. The videos, the tweets, and the other stuff we create here at ChangeLog. We have big ambitions for this, but we're experimenting for now to make sure there's interest.
Starting point is 00:43:26 So when you sign up today, you make the ads disappear, you get the ChangeLog and all the shows you love, just no ads. And I guess that means this part you're listening to right now, well, it'll be gone.
Starting point is 00:43:36 We also have some extended episodes planned, some bonus content, some merch store discounts, a lot of fun ideas. And since it's such early days, we're offering this membership at a 40% discount for early adopters. And that's you. That discount though expires at the end of August. So head to changelog.com slash plus plus to join today, lock in that discount, get closer
Starting point is 00:43:57 to the metal and make the ads disappear. Again, that's changelog.com slash plus plus. We'd love to have you supporting us as a member. so y'all have done a hundred episodes and i've been curious to ask you this for a while which is that when we set out to do this show we called it practical ai and that first word has really been kind of a primary focus of the show and maybe a guiding light to a certain degree. I remember hearing Daniel oftentimes saying, it is Practical AI, and he'll use that as a way of kind of turning the conversation
Starting point is 00:44:55 into the practical aspects of deploying and using and et cetera, these technologies. But I'm wondering if you felt like that's limited you or made the show go in directions that you haven't wanted to or if there's any sort of maybe an inkling of like a regret of being pigeonholed into the practical ai podcast no okay no regret dang it i don't know i guess you could interpret practical as being like practitioner, like, you know, in the sense of like tutorials and implementation tooling sort of things. But we have gone into things like ethics or like the use of AI for good or telling stories more so than just like highlighting methods, I guess. And to me,
Starting point is 00:45:46 that still fits within like the practicalities because it's, you know, the same reason why you want to see like case studies or whatever, when you're looking at a particular tool or product, you know, something like that, where like you want to get a sense of how people are using a thing. You want to get a sense of how people are thinking about a thing that maybe you haven't thought about as much. So to me, I don't necessarily feel pigeonholed. I think we have gone into certain of those things, but, you know, it's kept us away from to me.
Starting point is 00:46:18 It's like brought a bit of focus so that we're not always, you know, talking about Terminator and those things and i'll actually to be honest and since you asked i hadn't thought of it that way but i'm going to actually say occasionally just to give a different perspective i would imagine that anyone who's listened to us for a while has has heard me if between the two of us i tend to be the one that gets out there into the speculative realm uh more often. And I'm the downer that always is like, I don't know how to implement that from my terminal. So I'm out.
Starting point is 00:46:51 I'm ready to step by step directions. Daniel's like, if I can't deploy it to Kubernetes, it doesn't count. You know, I definitely have an interest in looking out there into what is essentially speculative, you know, even philosophical realms. Part of that is probably not because of my job, because that's not what we do at my company, but just as a person myself in the defense world. And if I go give talks, you know, Daniel and I have talked about all these talks we've given over the last few years, and I commonly am asked about these speculative things because people make the
Starting point is 00:47:25 assumption incorrectly that being in defense and being doing AI that I must be thinking about terminators all the time. So it's something I do think about that idea. It's not real life. It's not real life in the defense industry or the DoD. But I will confess that I can't help wondering about a future like that and putting some thought into it. So it occasionally creeps out in the podcast. And I think Daniel does the right thing. He quickly, he goes, well, it is practical AI. It's good to have the balance. Exactly.
Starting point is 00:47:54 And he pulls me back to reality on that. As you can tell, I probably tell from 100 episodes that I'm not, maybe not the speculative philosophical type. Right. He whips me into shape immediately, yes. I do struggle in those like planning strategy meetings at certain points in like organizations where it's like, well, how does this fit with our vision? How does this like, you know,
Starting point is 00:48:19 what are our transformational things that are gonna happen in the world based on the stuff and yeah so it's good for me to be pushed into those areas it's definitely good for me to put be pushed into areas where i'm looking like beyond like my my vim window but um you know i also always like to you know go back there i kind of see the name as more of a less of a restriction and more of a a north star rather than like this is the direction you're going you can go into the fringes to because it's always good to entertain possibility but maybe a better way to define it might be what you think practical actually means. Like I think practical means possible and useful.
Starting point is 00:49:05 I do too. What's the lure to the word practical for you both? I think that we keep it for the most part, and even me, despite my confession a moment ago, I think this is real life. So when we were starting, AI was still very young and cool and most organizations didn't have that as a capability in-house. And that's still developing, but right now it's no longer the thing where it's the
Starting point is 00:49:34 strictly aspirational intent. It's now something that a lot of organizations are incorporating, and they have really practical problems to deal with. Like, okay, well, we now know how to produce, to address an architecture and produce a model that can be deployed. And how do I get that into our deployment methodology? How do we get it out to our customers? How do we do that? And I think that's where the bulk, that's where 99% of AI is and should be, I think, and for most people, if you're not strictly a researcher. And so I think my sense is that we've given a platform for that, for people to talk about it and to come on the show and to help others. And I think one of the things that might
Starting point is 00:50:16 have set us apart is that we're always thinking about, is this going to be meaningful for listeners who are trying to do it themselves out there? And I think that's where Daniel's, is this going to be meaningful for listeners who are trying to do it themselves out there? And I think that's where Daniel's, but this is Practical AI drawing us back on the line is really important because it's fine to dream a little, it's fine to speculate a bit, but for the most part, people are trying to figure it out and get work done. And I think that we actually help them get there. Yeah. I think useful and meaningful are two words that have been mentioned in the past couple of minutes that are, to me, important. So there are useful and meaningful things that are not code or implemented on GitHub,
Starting point is 00:50:58 right? You know, there's things that influence our workflow and the problems that we solve that are you know still useful and meaningful but not like talking about a framework or specific open source project or something like that so yeah i agree with that so over the course of a hundred episodes surely there's been highlights and lowlights there's been successes and struggles we didn't prep you guys like bring your favorite episode or anything like that but if you could just think back and what have been some of the struggles of the podcast and what have been some of the successes for y'all you want to go first daniel
Starting point is 00:51:34 yeah sure put me on the spot i did let's see how i did that i think one of the successes for me is the sort of um and we're always like I think we can always do better in getting a sort of variety of guests. And we're always like trying to cover different things that we haven't covered. But I do think that we've had guests on sort of a range of like from students, just like getting into AI and giving their perspective all the way up to like, you know, people like like Stuart Russell and others that have been like luminaries in the field. So I think it's really cool that we've got that diversity of perspectives and, you know, a whole lot of different topics covered.
Starting point is 00:52:18 So looking back, I think that's one of the things that I'm I'm pleased with is that kind of variety of perspectives that we've brought, all kind of bringing their unique spin on whatever they're talking about. Also, I don't know if this was specifically a thing that we set out to do when we started. I don't think it was, but even from the very start, I forget, it was one of the first episodes where we talked about a TensorFlow project that was helping African farmers with this app where they took pictures of their cassava plants and, you know, it helped them. It was really early. Identified as Eve. That was like one of the first episodes I forget. Like ever since that point, it seems like we've always kind of had in the back of our
Starting point is 00:53:05 mind, like specifically highlighting AI for good episodes. So I think those are really the ones that stand out for me. There's a one with, you know, representing a project from Data Robot where they were working, trying to detect like waterline issues. Also in Africa, there were recent ones about like COVID related data and question answering systems related to that, or like getting scientific knowledge out there to scientific researchers about COVID. So those AI for good ones are really some of the ones that stand out to me. Yeah, I think I would agree. And I think going back to what you were saying before,
Starting point is 00:53:44 some of these folks that have come on are just luminaries in the field. You know, you mentioned Stuart Russell a minute ago. Wojtek Zaremba. Oh, yeah. Yeah, a lot of different people. Bill Dally from NVIDIA. You know, we had Anima and Anand Kumar.
Starting point is 00:53:58 I've probably just messed up her name and I apologize for that. On the show from NVIDIA. And just, I mean, just these people that have really wowed me and to some degree been heroes for us. But at the same time, then we bring on people that have never been out there at all, but they have great ideas and they have great insight and they're hungry and they're doing
Starting point is 00:54:16 really amazing things. And it's been a platform for them to come out and share that. And so I think it's been, from my standpoint, a really good balance that we've struck in terms of not just going with superstar people or just going, you know, whatever, but being able to get all those different perspectives into the show. I think that's good for the AI community. I wish that there were many channels within the AI community to do that. There's going back to actually one of the things that I admire from another changelog show, which is GoTime, for those who don't know, that's for the Go programming language, is the Go community has really rallied around that podcast. And I love that sense of community. And I think that's been one of our aspirations here is to give the AI community a place to rally, to be real, to have this chat like what we're doing today and what we've been doing all these episodes.
Starting point is 00:55:07 I think I really would like to see us be more successful in the next 100 episodes in trying to bring people on board and make it a conversation, and to recognize that AI is just now part of life, like so many other things, and that there's room for every voice involved. We've seen great success there, especially with JS Party and GoTime, in terms of community representation, diverse voices, diverse perspectives, polite disagreements. We love that. I mean, we think that's a welcome place for good dialogue like that and conversation.
Starting point is 00:55:38 And those two shows are great examples of us iterating towards that, in quotes, greatness. I don't want to call us great, Jerod, but we're doing a great job there. Those shows are very representative of two diverse spectrums in software, JavaScript and Go, of course, and all the ways those two languages and communities go about doing what they do. And I'm happy that both of you are inspired by that, impressed by that, and desire that too. Yeah, definitely. And I think one of the cool things that I've seen is there are people that are active on our Slack channel, on LinkedIn, on Twitter, and are interacting with
Starting point is 00:56:16 both Chris and me, you know, a good bit. And even today I was getting show suggestions, like, hey, I was looking for a show about this, but it doesn't look like you've covered that yet. And so all of that's really cool. I think there have been a lot of things throughout the episodes where people have suggested something and we've tried it, or we've covered a gap that people wanted to talk about. And so one of the things that I maybe didn't quite expect was that sort of ongoing, continual, throughout-the-week, outside-of-the-podcast conversation with a lot of people out there that aren't guests on the show, but are involved in the community. And that's really cool to see, people having discussions that are useful for them, even outside of just listening to the
Starting point is 00:57:05 podcast, on these different channels. The reach, I think, you know, Daniel and I came into this not having ever done this before. And, you know, Adam, you and Jerod have been doing this for years, and you're real pros at it. I think it was very surprising to me, and it wasn't a big deal kind of thing, but when I joined my current employer and I'd meet some people, they'd say, oh, I really enjoy your podcast, by the way, just wanted to let you know that as we're meeting. And it was a quiet thing, no big deal, but it was like, wow, the reach is really there.
Starting point is 00:57:38 And the conversation that it creates is really there. And I've ended up, away from the podcast, having a lot of conversations about topics that we address, because you'll meet people out there and they just want to talk. They're like, by the way, you did this episode on such and such a topic, and it just really struck a chord with me. And you'll end up spending 15 or 20 minutes just kind of readdressing it. It might've been an episode from many months ago. And that's had a fairly profound impact on my awareness that that sense of community, it's not always completely apparent, but we have developed it. And AI is getting there. It's following other technical communities in that way. And the need was recognized, and people are choosing to opt in.
Starting point is 00:58:19 And that's something that's truly gratifying from my standpoint. So my interview skills failed me once again. I asked two questions, allowing you to skip the second one. But my persistence pays off, so I'll ask that one again. Struggles. Doing podcasts is hard. Getting to 100 is hard.
Starting point is 00:58:37 Most people fade. And there's a lot of work involved. What have been some struggles along the way? We know you've had life problems, guest problems. You know, it's not all roses and rainbows. So what have been some of the struggles along the way? I felt like for quite a while, when we were starting out and reaching out to guests, I mean, I knew certain people in the community, and with some of them it was easy to make connections. But when you're starting out a podcast and you don't have a long history of things, I felt pretty much like a creeper reaching out to these people on Twitter or whatever. Like, it's a random message.
Starting point is 00:59:14 Like, you have no idea who I am. Thankfully, people are mostly gracious. But there was a lot of no response. There is that time period where you feel like, is this really going to pick up? I know some people are listening, but every time I reach out to people, I'm explaining this thing we're trying to do, and I'm kind of coming out of left field. So I think that was a bit of a hard period for me. And of course, I've learned over time that anytime I listen to myself, which I try to do, not, you know, 100% of
Starting point is 00:59:53 everything I say, but I try to listen to what I'm saying, there are various things that you do that make you cringe. And then you replace those things. You say, all right, I'm going to work on that, and then you just replace that thing with various other things that also make you cringe. So there's a little bit of that too. And, you know, life happens. Chris and I have both had times where I tell Chris, I need you to push this forward for a couple of weeks, because I'm not going to be able to do it. Thankfully, I don't think either one of us has had one of those times where we both needed it at the same time. But there have definitely been times like that where we flip-flop back and forth. Yeah, I would agree. And we rely on each other. You know, Daniel said that about me. But,
Starting point is 01:00:40 you know, without going into all the specifics, listeners who've listened know we've had some challenges this year that have certainly affected my family, and Daniel stepped right in and just took care of things, and that was good. And you really have to, you know, all of us that are doing this have day jobs, so we need the people that we work with to make allowances and accommodate to some degree. We have families who are supportive. We're all doing these podcasts out of our houses, and our families have to recognize that you need a little time, a little quiet space, to get some stuff done. And I know for us, with a house full of dogs and a young daughter, that can be pretty challenging.
Starting point is 01:01:23 And then frankly, there's a burden that I didn't appreciate back when I was listening to the Changelog and GoTime and JS Party before we started this. I don't think I understood the burden of trying to provide a great hour or 45 minutes of content, you know, week after week after week. I've learned to really appreciate, when I listen to other podcasts, that it is hard work to do that. And so, as still an amateur in this compared to you guys who are the real pros, I've really developed an appreciation for the amount of effort that goes into it. At the bottom, what it comes down to is serving your audience and trying to help people
Starting point is 01:02:00 get what they need out of it. And, you know, they're choosing to listen to you for a little while, and they should rightly expect something from that. And so I think that's what I would finish with. There's always the burden for Daniel and me to make sure that we have content that is good for people to listen to. And so for those of you out there listening, please keep coming at us on social media, all the places where we like to engage you. We are listening. We are welcoming your ideas.
Starting point is 01:02:30 You are giving us our ideas for our future shows, and we want to serve you. And finally, after like a year of doing the podcast, I got to meet Chris in real life, and he's a real person. So I sort of have to take your word for it. We hadn't actually met in person before we started the podcast, and that happened way later, which was kind of funny. It was about a year in, actually, and it was only in the beginning of January, right before the COVID stuff set in. So most of these hundred episodes were done before Daniel and I had ever been in the same place at the same time. We became friends through the podcast. We just started our COVID life early. Yeah, we did. But it's worth mentioning. It's remarkable that it is through these episodes
Starting point is 01:03:09 that everyone listens to that Daniel and I have become very, very dear friends. And we had never met in person until January of this year. It's amazing. Well, these struggles illustrate why we're here today, which is to celebrate. Yes.
Starting point is 01:03:24 One, Jerod and I are both proud of you guys. You guys have done a great job with this show. It takes, as you've said, a lot of work to produce on all sides. So one, congrats on 100 episodes. Two, we're proud of you guys. You guys are doing a great job. Keep doing it. Don't stop.
Starting point is 01:03:41 Yeah. Thanks for believing in us and keeping us on track. It's been awesome, truly. And you guys have been right there, even though, from the listener's perspective, you're invisible to them in most episodes. You're actually there every episode. And I cannot count the number of times that Daniel and I have thought, oh, at least we have a great post-production team that can clean up whatever mess we just created. And so that's really important.
Starting point is 01:04:09 Yeah, and do cool promotions everywhere. And yeah, keep the wheels moving. And it's awesome. Yeah, it's a team effort. Totally. Well, I'm not just a producer. I'm a happy listener. And I fall into the category of the AI curious. We know that Practical AI listeners kind of fall into a couple of categories: you have the practitioners, then you have the curious, and I'm very much in the curious camp. I've never used any of these things in a useful context.
Starting point is 01:04:36 What was the other word? Useful and meaningful. Meaningful, yeah. I've never used it in a useful or meaningful context. Or possible. Possible is a good one, too. That's what I said. Inconceivable.
Starting point is 01:04:45 Yeah, I've definitely used it in some inconceivable fashions. But I know more about this stuff than I ever thought I would, just by osmosis of listening along and producing. And I'm a conversational AI enthusiast at this point. I can talk to anybody about it and trick them into thinking that I know what I'm talking about. That's a valuable skill in life. Yeah. Whenever you need a large salary, you can go out and really score high on that interview, I think.
Starting point is 01:05:13 I probably could. So what's next for the podcast? What can people expect soon? Where's it headed? Yeah. You know, for me, I was thinking about this as we were coming into this. And one of the things that I really would like to hear more about is how people are incorporating these AI technologies into their lives and into their work. We hear that to some degree, and we know this stuff is just going to be pervasive in every industry. And I'm really curious to learn more. I think partly that may be because, in my own job, I spend so much time completely absorbed in trying to get my own work done. But people are using this stuff for really cool applications, and I think I would like to go out there and understand where it's being applied, and what are some of the things that I never would have thought about. So I'm pretty excited about that. Yeah. I mean, there are still a lot of gaps that we haven't covered. Even today, I was talking to a listener who was saying, hey, let's cover something about drug discovery and pharma with AI. There are just tons of areas that we
Starting point is 01:06:21 haven't covered yet. So I think, like, getting some people on from there. And I think also, you know, developing more good relationships with the leaders in the field and hearing their perspective over time is something that you can continue to expect, along with those up-and-comers. And there's really cool stuff happening all around the world outside of the U.S. and Europe with AI, in Southeast Asia and Africa and Latin America. And one of the things that I really have a passion for and would love to see on the show is us continuing to try to get those guests who are really innovating in those areas and bringing the benefits of AI to the whole world, really. So one of the best ways to circle back is to invite the listeners. We've mentioned community a lot.
Starting point is 01:07:11 So one easy way: if you listen to this, you may catch it in the outro occasionally, which is, hey, there's a community at changelog.com slash community. There's a Slack. There's lots of people in there. If you want to share your story, the ways you're tinkering, the ways you're practicing, etc. in AI, you can do that there. So the invitation is there. Changelog.com slash community, the Practical AI channel.
Starting point is 01:07:36 Hit that up. Daniel and Chris are always hanging out there. We're in there. All that good stuff. And there's lots of other people there too. Come and hang out. You know, another one of the things that I wanted to mention was, I think we're at a point now where it's been commoditized enough that it's not all about business. There's a lot, going back, in the spirit, Adam, of you talking about your cul-de-sac camera identification. I'm serious about that. I believe you. I believe you. There are so many places where we can bring kids into this and do weekend projects. We've talked about some of that at the beginnings of some of our episodes, where you can bring people in, and you can take it into your schools as schools open back up, as COVID gets under control. You don't have to be a top scientist. You don't have to be a data scientist
Starting point is 01:08:26 professionally. You can do it at your house. I've heard of so many little home projects that people have gotten into over the last year or two. Actually, I would like to highlight some of those. So if you're a listener and you're doing something cool over the weekend, especially if your kids are involved in it, please let us know about that. I'd really like to share those ideas. I have an eight-year-old daughter, and, you know, it's a perfect time for kids to get into this. You can get the equipment cheaply these days, so yeah, bring us your family endeavors. It's not always just about work. Like you said, though, it's real life. It's not going to go anywhere.
Starting point is 01:08:59 It's not so much, hey, let's teach our eight-year-olds because we have to, but more because it's interesting and it sparks curiosity. It's fun. It's fun. It's really fun. Who would have thunk it, to actually make AI practical for everyday users? I mean, that's the thing, right? I literally want to learn about AI to the point where I can do something to help defend my cul-de-sac from nefarious people who are trying to steal our wheels.
Starting point is 01:09:22 Like, come on. There's got to be an easier way than that, and there are tools out there, Raspberry Pi and stuff like that. So maybe I can arm myself with an RPi and a camera, of course, and some detection, and maybe my neighbors will let me take their picture and put them into my model. Who knows? That's shooting for the moon there, but it could be fun, and I have no idea where to start, really. If you're protecting everyone's tires, why not? Right. Exactly.
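For anyone in the AI curious camp wondering where a project like that could even begin: a minimal sketch might look something like the following. To be clear, this isn't anything from the show, just one hypothetical starting point, assuming a Raspberry Pi (or any machine) with Python and the opencv-python package installed, plus a camera that shows up as the default video device. It leans on OpenCV's stock HOG person detector rather than a custom model trained on the neighbors:

    import cv2

    # OpenCV ships with a pre-trained HOG + linear SVM people detector,
    # so a first version needs no custom training (or neighbor photos) at all.
    hog = cv2.HOGDescriptor()
    hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

    cap = cv2.VideoCapture(0)  # default camera; change the index if needed
    if not cap.isOpened():
        raise RuntimeError("Could not open the camera")

    try:
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            # Downscale for speed on a Pi; detection quality stays usable.
            frame = cv2.resize(frame, (640, 480))
            boxes, _weights = hog.detectMultiScale(frame, winStride=(8, 8))
            for (x, y, w, h) in boxes:
                cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
            if len(boxes) > 0:
                # A real setup might snapshot to disk or send a notification here.
                print(f"Spotted {len(boxes)} person(s) in frame")
            # Drop the two display lines below on a headless Pi with no screen.
            cv2.imshow("cul-de-sac watch", frame)
            if cv2.waitKey(1) & 0xFF == ord("q"):  # press q to quit
                break
    finally:
        cap.release()
        cv2.destroyAllWindows()

Swapping that stock detector for a small neural network, or a model trained on your neighbors' photos (with their permission), would be the natural next step, but the loop above is already the whole shape of the project.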
Starting point is 01:09:46 That's what I'm thinking. It's like Neighborhood Watch, only with no people. Exactly. I like it. Well, guys, thanks so much for sticking with it,
Starting point is 01:09:54 having an awesome podcast and thanks for letting us crash. Thanks for interrupting. Yeah. We're happy to join you for this one. Celebrate with you. Gosh.
Starting point is 01:10:01 It was a fun crash. I love that. If you hadn't interrupted, who knows what this episode might have been? Yeah. Wait and see, I guess. I'm curious.
Starting point is 01:10:09 That's right. All right. Give it up for Daniel and Chris rocking 100 episodes of Practical AI. If you're not a subscriber, check it out at changelog.com slash practical AI
Starting point is 01:10:23 or search Practical AI in your favorite podcast app. You'll find us. And as we mentioned, we launched Changelog++. Make the ads disappear, get closer to the metal, and support us directly as a member. Learn more, subscribe, join. Right now it's 40% off as an early adopter. That's you. Check it out: changelog.com slash plus plus. And of course, huge thanks to our partners who get it: Fastly, Linode, and Rollbar. Also, thanks to Breakmaster Cylinder, our beats master in residence. And last but not least, you are invited to join the community. It is totally free. Learn more and check it out at changelog.com slash community. Join us and everyone else in Slack and say hello. There are no imposters, and you are welcome.
Starting point is 01:11:09 That's it for this week. We'll see you next week. Did y'all hear my kids there for a second? I guess something fell out there and my son was like, no. I heard it. I was just chuckling because I'm like, I know you guys got to hear it because I can't stop it. I'm half blind and deaf anyway, so. I dropped the marker though.
Starting point is 01:11:43 I mean, you could totally see it in the timeline. Yeah, I heard it. That happens occasionally. It gets edited out. Thankfully, no one gets upset. But, hey, that's how it works. Changelog++.
