Software Huddle - AI for Developers with Rizel Scarlett, Leandro Margulis and Katherine Miller

Episode Date: December 19, 2023

On today's show, we have quite the lineup. We have Rizel Scarlett, Leandro Margulis and Katherine Miller all joining Sean to talk about AI for developers. This came together because the four of them had participated on a conference panel earlier this year discussing the topic. We discuss our impressions of AI for developers, what impact it may or may not have, privacy and security, ethics concerns, what the future might look like, and a whole lot more. Today's guests have a diverse set of roles spanning product, marketing, and developer relations, so we think we were able to bring a lot of different perspectives to the topic.

Timestamps:
02:25 Introduction
05:45 Will AI's net impact be positive?
11:10 Customer support chatbots
17:18 Speed of Innovation
26:15 Safeguarding Sensitive Data
28:47 Creating your own Models
31:55 Using LLMs responsibly
41:27 Everything GPT
45:08 Existential Risk
51:17 Psychological Safety

Links: https://www.youtube.com/watch?v=Wh02qPQasfk

Transcript
Starting point is 00:00:00 The maturity of the models is just not there yet to really think about, I'm asking a nuanced question and they really don't understand the question. And so that actually makes it easier and creates potentially more interesting and satisfying opportunities for people because they are doing really, really meaningful and positive work. I do think that sometimes the best technology is the one that is kind of invisible, but it's doing its job. So I could see AI doing a lot of stuff in the background to help us have a better life.
Starting point is 00:00:31 For example, using AI to filter all the spam that we get, you know, from text messages to email. I see a lot of different unique use cases where AI can run in the background and make our lives better. You know, people have been using Stack Overflow for years to do that. Even before that, there were other ways of essentially copying code out of, you know,
Starting point is 00:00:50 even a book 20 years ago that people didn't necessarily understand. Now, when you copy and paste code, you can actually ask questions to the AI to explain what's going on. So you can learn more, whereas you might not want to ask that question, you know, publicly on Stack Overflow, because you don't want to look like an idiot and get attacked or something. I think this is a common thing with technology. Everyone gets excited about like a particular tool and they're all using it. And then after that, we're like, all right, let's get serious. What's actually working, what's not. And then some people find
Starting point is 00:01:22 like, other trends to stick on to. Hey everyone, it's Sean Falconer. And on today's show, we have quite the lineup. We have Rizel Scarlett, Leandro Margulis, and Katie Miller all joining me to talk about AI for developers. This came together because the four of us had participated on a conference panel earlier this year discussing the topic. And we had a really fun conversation, but we barely scratched the surface. So I thought we should take this conversation to the long form medium of a podcast. And that's kind of how this started. We discuss our impressions of AI for developers, what impact it may or may not have, privacy and security, ethics concerns, what the
Starting point is 00:02:00 future might look like, and a whole lot more. We have a diverse set of roles spanning product, marketing, and developer relations. So I think we were able to bring a lot of different perspectives to the topic. One other thing to note, Katie hosted the original panel. So she actually takes over hosting duties on the podcast for me. A little role reversal. And you'll hear from me as a guest. All right, that's enough preamble. Let's take it over to my conversation about AI for developers with Rizel, Leandro, and Katie. Katie, Rizel, Leandro, welcome to Software Huddle. And for this special edition, I'm going to hand over my hosting duties to Katie Miller, and I'm going to just participate as, I guess, a nice role reversal for me. Maybe slightly more relaxing, or maybe it's more anxiety producing. I don't know. But Katie, over to you. Excellent. Now I have the anxiety of having the hosting duties, but real honor to be here as the
Starting point is 00:02:50 guest host in this edition of Software Huddle. For those that I haven't had a chance to meet yet, I'm Katie Miller, currently working as a developer marketing advisor for companies like Data Protocol, which is a platform that brings developer relations and marketing teams information and support through content and engaging videos. And what are we here to talk about today? So the four of us, and everyone will have a chance to do introductions in a second, we had the good fortune to speak on a panel at the DevRelx Summit barely a month ago. And we got into generative AI. The session, if I'm not mistaken, was what it means to be a developer in the everything GPT era. And it was a session that was really spurred on by this huge boom in generative
Starting point is 00:03:42 AI and all the acronyms and words around it and digging into what it means from a product perspective, a developer marketing perspective, a developer relations perspective. Is this a hype cycle? Is this a positive thing? Is this a concerning thing? We had about 30, 40 minutes and barely scratched the surface. So immediately upon hanging up from that, we all huddled in Slack and we said, how do we do more of this? How do we keep the conversation going? And so that's what we're really here to do today. We essentially had so much fun and so much to talk about that we wanted to keep talking. For those who didn't have a chance to tune into the DevRelx Summit, some of the things that we touched upon that we might revisit today are how this era of generative
Starting point is 00:04:34 where our companies are respectively on this spectrum of machine learning, artificial intelligence, generative AI, LLM, GPT, how, because we're developer relations practitioners, how we're balancing innovation and the inherent skepticism and reluctance of developers to adopt new technology, how we even define a developer in this era of generative AI, how does it create more opportunities? So that's kind of where we left off. And we had a lot more to dig into, particularly around ethics, privacy, safety of data, building models, and so forth. So we're going to spend a lot of time getting into that. But before we do that, who are these amazing humans who are on the call? So for today's introduction, if each of you can
Starting point is 00:05:25 share who you are, the company that you're at. And between the last time we talked and now, SlashData has actually released its 25th State of the Developer Nation survey, which dug into, globally, how folks are feeling about generative AI affecting work as a developer. So the fun introductory question today, one of the questions from the survey, was: agree or disagree, AI's net impact on the world will be positive. So Rizel, I'm going to go clockwise on my camera. So why don't you start with your introduction and let us know, agree or disagree? Will AI's net impact be positive? Oh, that's a good question. I'll think about it while I give my intro. My name is Rizel Scarlett and I'm a staff developer advocate at a company called TBD,
Starting point is 00:06:20 which, I found out after I joined, does not mean they're still trying to decide on the name; it stands for "to be decentralized." And prior to that, I was working at GitHub as a developer advocate, and that's really where I got interested in AI and LLMs, because of GitHub Copilot and what it can do for all developers' productivity and even psychological safety for junior developers. In terms of the net impact, I think I procrastinated enough on my intro. I think it'll be positive overall, but I do think that we need to encourage the creation of responsible AI and have people think about ethics so it doesn't go in the wrong direction. Great, thank you.
Starting point is 00:07:06 And I think, Leandro, you are next on the screen. Awesome. Well, thank you for having me. Very nice to be chatting with all of you again. I'm currently a VP of product at Prove. Prove is in identity verification and fraud prevention. Yeah, basically, I'm in charge of everything that has to do with how others experience Prove, including how developers implement our APIs, SDKs, and so on.
Starting point is 00:07:33 I am an optimist in terms of AI. I do think that the impact will be positive. I already see some of those impacts, and we can dig deeper into those. The other thing that happened since the last time we talked, in addition to the SlashData report, is everything that happened with OpenAI's governance, right? So I think that's also interesting to see the level of maturity of the industry and a lot of what we will potentially continue to see as things mature and develop. Absolutely.
Starting point is 00:08:03 I figured at some point we would dance around that without doing too, too much speculating. But I do think it'll come in, in terms of the ethics of it and the ownership of it: you have this really interesting tension and complexity of, are we going too fast? Are we not going fast enough? There's the tension between ethicists and researchers and businesses, and who pulls different strings, and, you know, both the friction and the balance between all of those things. So you teed up the conversation perfectly. And Sean, I think your listeners know you, but still would love to hear
Starting point is 00:09:31 those are important questions to be asking and things to think through. But at the same time, I remain very positive. I think that this could have a massively positive impact on the world the same way that other technology innovations have had, like the internet. Absolutely. I'd have to say that, just for my own part, I'm somewhere in the middle. And I've also flip-flopped at different points in time. I think when it initially came out, it was definitely a pause, partially from kind of that business strategy perspective where it felt like everyone was jumping on this hype train and it's like, whoa, are we quite there yet? Is there a there there to be developing? Have we thought about at least some world of possibilities in terms of impact before kind of releasing it to the wild? Is it mature enough, thinking about it from developers, again, who need
Starting point is 00:10:26 to put code into production? Is it stable enough? Is it documented enough? Are the guardrails there enough to be able to be developing responsibly and so forth? And so my initial reaction was really one of inherent skepticism. I think over time, I've really come to see that where the technology is right now really is an empowering tool just to bring efficiency. And ultimately, that's what we're all striving for within what we build, which is we want to simplify, automate, abstract away the mundane as much as possible so that we have time to do that meaningful, impactful work. And so I do see that it allows us to be more productive without necessarily replacing jobs. And please, no replacement of call centers
Starting point is 00:11:19 at this point. Chatbots are just not there yet. If they cannot understand me, like, swearing into the phone, I need to talk to a human being. We are not ready to replace call centers with chatbots. Apologies to the US Postal Service, because I did in fact, like, try them. I actually think that, you know, I have a different view on the call center and customer support, because staying on hold for 45 minutes or an hour and a half to finally reach somebody at United Airlines customer support is not a great experience for anybody either. I'd rather talk to an AI agent that actually might have better recall and better understanding of what you can do to rebook a flight on United than the actual human that I might actually end up talking to after waiting 90 minutes. I feel like I opened up Pandora's box here, Leandro.
Starting point is 00:12:17 Very quickly, I do think that AI can help in customer support chatbots, but it's all about empowerment, and it depends on, it reflects, the organization that puts these things in place, because I would gladly talk to a human and prefer to talk to a human if the human is empowered to help me. If the human is not empowered to help me, I think it's a waste of time for that person to be on the other side of the line.
Starting point is 00:12:44 And it would be good to have some filters that can help me, from an AI perspective, in some kind of a compensation process, right? I mean, I don't know, my luggage got damaged, my luggage got lost, right? I mean, it's okay. You get, like, you know, this amount of credit for a future flight, whatever that is. And I think I can easily solve that with an AI and then, you know, escalate as needed to a human that can curate and do a couple other things, as long as the human is empowered, which sometimes at the moment they're not. So I might as well just talk to a bot. Rizel, I feel like we just went into a really interesting topic. So go for it. Well, I was going to come in with a little bit more fire,
Starting point is 00:13:25 but then Leandro's point made me be like, okay, I kind of see your point. Katie, I agree with you. I do not want to talk to a bot. Anytime I'm calling a service, it's because it's like a very difficult problem. Like maybe, like the other day I was calling because for some reason I couldn't
Starting point is 00:13:48 log into, what's the thing for your stocks, something like Fidelity, one of those things. And it's like, oh, your username and password's not working, even though, like, that's my username and password. I worked with a bot for like an hour. It was so annoying. I'm like, I just want to talk to a person. But that's how I feel. If I'm calling, it's already an elevated problem. But I see Leandro's point. If it's something basic that I don't need to wait 45 minutes for a 10-minute problem, I'll talk to a bot.
Starting point is 00:14:19 But usually I'm calling because it's a real problem. So I don't really want that. And then I'm also like, I don't want to replace jobs. Like, a lot of customer support people. I don't know if, like, companies will be like, well, we have all these bots, let's get rid of the actual people doing these jobs. So I don't want to forget about those folks either. And that's where I really see it.
Starting point is 00:14:42 And I kind of see this as a potentially net positive thing, which is if we want people to be empowered in their jobs and to be able to be focused on the most interesting and complex problems, we absolutely want to simplify and automate away: my luggage got lost, what do I do? I lost a package in the mail, what are the steps that I need to take? I cannot log into an account. Those sorts of things can be handled with prompts. What it is, is when it gets to those more complex scenarios, that's where I've found the interactions right now, the maturity of the models, are just not there yet; I'm asking a nuanced question, and they really don't understand the question. And so that actually makes it easier and creates potentially more interesting and satisfying
Starting point is 00:15:36 opportunities for people, because they are doing really, really meaningful and positive work. So perhaps we're all a little bit more aligned on this than we think we are. I don't know, Sean, now that we've kind of come back around to that, are you aligned or do you still kind of disagree? Are you still kind of pro team customer service all the way? No, I mean, I think, to Leandro's point, that a lot of this, especially at the state that AI is in today, even from, and we'll get into this, like, you know, an engineering copilot standpoint,
Starting point is 00:16:15 it's really about empowering people to do their job better. So if you can offload a lot of the kind of, you know, problems that don't necessarily require a human's level of nuance and understanding through a well-put-together bot, then you can rely on people to handle
Starting point is 00:16:34 more interesting, complicated use cases. And they're actually creating more value as a person at that point. And then also, you can pair the person, like the customer support person, with an internal agent that they're relying on. That way, you're taking advantage of the AI system that may have really good recall, essentially perfect recall on the manual for whatever the business is. And then pair that with a person who can be the human in the loop to make sure that the response makes sense and that the person's getting good-quality service. And then they can, you know, 5x their scale beyond what they're able to do today. And then that cuts my wait time from, you know, 45 minutes down to five minutes.
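A minimal Python sketch of the human-in-the-loop support setup Sean describes here. The toy manual, `search_manual`, and `llm_complete` are hypothetical stand-ins for a real retrieval index and model call, not anything from the episode:

```python
# Sketch of a human-in-the-loop support assistant. `llm_complete` and the
# toy "manual" below are hypothetical placeholders for a real model call
# and a real retrieval system.

MANUAL = {
    "rebooking": "Agents may rebook passengers on the next available flight at no charge.",
    "damaged luggage": "Offer a repair voucher or up to $200 travel credit for damaged luggage.",
}

def search_manual(question: str) -> str:
    """Toy retrieval: return every manual section whose topic appears in the question."""
    hits = [text for topic, text in MANUAL.items() if topic in question.lower()]
    return "\n".join(hits) or "No matching policy found."

def llm_complete(prompt: str) -> str:
    """Placeholder for a real LLM call; here it just echoes the grounded prompt."""
    return f"[draft grounded in policy]\n{prompt}"

def draft_reply(question: str) -> str:
    policy = search_manual(question)
    prompt = f"Customer asked: {question}\nRelevant policy: {policy}\nDraft a short, polite reply."
    return llm_complete(prompt)

def handle_ticket(question: str) -> str:
    draft = draft_reply(question)
    # The human agent stays in the loop: nothing is sent until they approve or edit.
    print("Proposed reply:\n", draft)
    decision = input("Send as-is? [y/N] ")
    return draft if decision.lower() == "y" else input("Type your edited reply: ")

if __name__ == "__main__":
    handle_ticket("My damaged luggage came off the carousel, what are my options?")
```

The design point is that the model drafts from the manual, but a person approves every outgoing reply, which is what lets one agent scale without the bot answering customers unsupervised.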
Starting point is 00:17:03 Absolutely. And that actually brings us back to Leandro, what you had kind of teed up a little bit, which is there was a big news cycle and we still don't 100% know what happened. I don't think this is probably the right forum to speculate on that. But one of the things that did come out is that interesting tension between are we moving too fast or are we moving too slow? And as we are all practitioners that are really about enabling and empowering developers, it's kind of taking that lens of how fast to move. You know, when people are really, really worried about jobs, I think the question is: are the chatbots that are being built ready? And I think all of us kind of collectively agree, maybe the models aren't quite there yet. And so are companies trying to move too quickly?
Starting point is 00:18:18 Are developers, are the tools that developers have to build with, really ready to kind of go in that extreme direction? And how quickly or slowly should we be moving in this space? And I think that gets back to the, are we all kind of on the hype train or is there really a there there right now? So I'm curious who wants to start with that question. Leandro, you want to go? Go for it. I wanted to quickly react to that. And look, now that I'm a father, having an 11-month-old daughter, I'm kind of thinking about growth and innovation in terms of biology as well. Like, I mean, what I see in my daughter: nothing for a few days, and then suddenly, well, like, I mean, literally the next morning she's like taller and bigger and, you know, her software got upgraded and she has new skills.
Starting point is 00:19:11 So it's amazing. But it's the same in terms of biology research, right? I mean, I think innovation happens in jumps, leaps and bounds. And what we're seeing right now is certain jumps that happen, certain leaps, and then everything else moving around it, right? So I do see sometimes some false starts, like start, stop, right? And I think we're getting to what the right pace should be, but there's always going to be innovation happening in jumps, and we need to be okay with that and then react accordingly. And I do think that, similar to what we've seen in some other types of research, we see things happening in leaps and then regulation and governance getting built around it.
Starting point is 00:20:04 Historically, what we've seen is sometimes innovation being much faster and technology being much, much faster than regulation. I think that things are moving faster in both ways, but I think it's helping keep things in check. Great. So, Leandro, thank you for your reflection and perspective on that. And again, as someone who has children who are respectively 10 and 12 years older than yours, it's a really great parallel to look at biological development as well as technological development. Because I remember the expression when my kids were babies, which is they'll do something and then the next day, they'll make a liar out of you, because you're like, they slept through the night. Just kidding.
Starting point is 00:20:40 They like eating spinach. Just kidding. And while they definitely still go through those phases and growth and leaps, it definitely tempers out over time as well as they become more sentient, functional human beings and not little beautiful baby blobs. Would anybody else like to jump in and comment on that topic? Oh, so I'm not a parent, so I can't make any analogies. I still feel like I'm a kid. But the pace we're moving at technologically, I would agree with Leandro, like, the pace makes sense. But I think my concern sometimes is like, is our industry just moving at this pace for money? Like, what's the motivation behind why all these companies are adding AI and LLMs into their products? Is it just because VCs are like, hey, we're going to give you money, so, okay, let's hurry up and randomly add AI in here? That's the part I take issue with. But I think it's okay if it's just from an innovation standpoint and we're just experimenting and being like, how can we make things better?
Starting point is 00:21:54 And maybe sometimes things will break. And, like Leandro says, sometimes we'll be at like a plateau a little bit and then sometimes we'll skyrocket. But it's more the motivation behind why we're moving so fast that brings me a little concern. Yeah. And don't worry, we're all kids at heart. I literally just spent six days at amusement parks last week and laughed and screamed with the same joy as my children on roller coasters. So, what is life, if not joy? Sean, what about you? So, I mean, I think it's hard to temper the speed of technology innovation. But I also think that in some ways, I think the companies that are sort of leading the charge in the world of AI, OpenAI and also other people that are building foundation models and stuff, they are moving quite quickly. But I think actually the industry is not necessarily moving that fast in terms of
Starting point is 00:22:52 adoption of a lot of these tools. There's a lot of interest and there's a lot of hype and there's a lot of people tacking on AI to their homepage or to the name of their company. But I haven't seen that many big product launches outside of sort of the core AI companies. So I think, especially in the enterprise space, there's a lot of interest, but not necessarily that many projects
Starting point is 00:23:15 that are fully going live. And I think part of that is some of the challenges that we've talked about or touched on here, that we'll probably go into in more detail, around ethics and responsible AI and privacy and security. Like, there are some real big challenges. And also from a technology standpoint, like, we don't yet have really good ways of testing whether one model is
Starting point is 00:23:36 better than another model and, you know, how we can progress as a company or justify the investment. So I think a lot of this stuff that we're seeing is more like using these as tools to help speed up content creation, maybe using tools like GitHub Copilot or other code assistants to help speed up engineering. And those are fantastic, but it's more of this kind of human-in-the-loop, making us able to do more work efficiently, than necessarily, like, huge, impactful, this-is-displacing-everybody stuff. It's not like introducing autonomous-driving 18-wheelers right now that would remove jobs from truck drivers,
Starting point is 00:24:20 we haven't reached, I don't think we've reached that point. A lot of stuff hasn't been sort of fully adopted and fully realized in the enterprise, even though there's a ton of interest. Sean, I actually really appreciate the perspective that you shared, and even combining that with what Rizel shared: it seems like there's so much investment, financial investment, and focus being put into this, but when it actually comes to adoption, there is some judiciousness to it. Which really means, in terms of thinking about building meaningful customer experiences, the actual usage of it, like building products with AI in them, is actually trending in a very potentially responsible way. But a lot of the investment right now up front
Starting point is 00:25:13 is really in that research and development side. And I think from a data standpoint, just speaking from my sort of day-job lens around privacy, I think we're in a much healthier place when it comes to thinking about the challenges of data privacy and personal privacy than we were 20 years ago at the beginning of the internet, or even 10 to 15 years ago at the beginning of social networks. So I think we're asking a lot of the right questions. And even from a regulation standpoint, there's a lot going on, like the EU AI Act that is being pushed through right now. President Biden's executive order talked about a lot of things about AI and privacy. So there is a lot of stuff that we're seeing proactively being done right now at the government level, which is way better than we've done historically. So I think that's a good sign that we're at least asking the right questions and trying to make the right decisions.
Starting point is 00:26:15 Great. And that actually so beautifully leads into the next question, which is really exploring safeguarding sensitive data, which is related, but a little bit of a different topic, particularly around personally identifiable information, intellectual property, things like that. It's really, really exciting to be able to synthesize all of this information and to output things. I think each of us could
Starting point is 00:26:45 really point to a period of time in which these tools really helped us do things much more quickly, efficiently, meaningfully. And at the same time, a lot of the information that's out there is connected to individual human beings. And so what does that mean for the data that goes into these models? I'm curious if any of you want to jump in and tackle that question. Sure, I'm happy to jump in and get us started. So, I mean, I think the big challenge that people are trying to navigate when it comes to generative AI, or really any sort of AI model right now is that if we want to leverage customer data, employee data, you know, internal documents that might contain intellectual property, what does that mean in
Starting point is 00:27:33 terms of using that as training material? Like how do we, you know, in the world of GDPR and things like the right to be forgotten, how do we comply with someone coming and saying, hey, I want you to delete my information from your system? But if we use their information to train an LLM, there's not really a delete button for an LLM. The models are designed to learn, not unlearn. So that is a fundamental problem. There's some early research in the space of being able to delete or remove certain information from an LLM through a series of fine-tuning, which is something that we've talked about previously on the show, but none of that stuff is production-ready. So this is, I think, one of the big challenges that is limiting companies from moving beyond sort of prototype and demoware to actual production, because they don't really have a good solution to this challenge.
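To make the "no delete button" contrast concrete, here is a toy Python sketch, with all names hypothetical: a record feeding a retrieval index can honor a deletion request immediately, while anything already absorbed into model weights during training has no equivalent per-record operation:

```python
# Toy illustration of why retrieval data can honor a GDPR-style deletion
# request while trained model weights cannot. All names are hypothetical.

retrieval_store = {
    "user-123": "Jane Doe's support history ...",
    "user-456": "John Smith's support history ...",
}

def handle_right_to_be_forgotten(user_id: str) -> None:
    # For data used via retrieval, deletion is a simple store operation:
    retrieval_store.pop(user_id, None)
    # For data baked into model weights during training, there is no
    # per-record delete; the options today are retraining from scratch or
    # the experimental "unlearning" fine-tunes Sean mentions, neither of
    # which is a button you can press per user.

handle_right_to_be_forgotten("user-123")
assert "user-123" not in retrieval_store
```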
Starting point is 00:28:17 And when you start to think about, you know, developing this stuff globally, then you need to be also thinking about, like, how do I deal with data sovereignty or locality laws? How do I keep customer information bound within the regions of a particular country to comply with those regulations while running a global LLM? And these are hard problems to solve. And not everybody, in fact most people, don't have an answer to these types of questions.
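One common pattern for the locality concern Sean raises, sketched in Python with hypothetical region endpoints and a hypothetical residency map: pin each user's prompts and data to a model deployment in their own region, rather than sending everything to one global deployment:

```python
# Sketch of region-pinned routing for data-residency compliance.
# The endpoints and residency map below are hypothetical placeholders.

REGION_ENDPOINTS = {
    "eu": "https://llm.eu.example.internal",
    "us": "https://llm.us.example.internal",
}

USER_RESIDENCY = {"user-123": "eu", "user-456": "us"}

def endpoint_for(user_id: str) -> str:
    region = USER_RESIDENCY.get(user_id)
    if region not in REGION_ENDPOINTS:
        raise ValueError(f"No compliant regional deployment for {user_id}")
    # Prompts, retrieved context, and logs all stay inside this region.
    return REGION_ENDPOINTS[region]

print(endpoint_for("user-123"))  # https://llm.eu.example.internal
```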
Starting point is 00:28:52 Yeah. And I think one of the questions, and Leandro, I'll get to you in just a second, just as a kind of preview, I think one of the things we were going to talk about with different models is also just: do you create your own models? Do you use existing models, kind of pay for data, things like that? And also having the size of model that's going to be meaningful. And if you are really limited in what you can pull in to make sure that you're really protecting people's information, what does that mean for the efficacy and power of the model as well? Leandro, I'd love to throw it over to you now. Perfect. That was great. Thank you for teeing it up so nicely.
Starting point is 00:29:26 That's where I was going to go as well. The reality is that developing an LLM is very resource-intensive and expensive. So the question is, do you really need to develop an LLM? Or perhaps you can take an LLM built on public information and have it only read and interpret the internal documentation that you need, right? And you run that privately within the enterprise, right? That's one of the things that I've seen companies experimenting with. Right. I think we also need to be realistic, and people are going to leverage, like we touched on this a little bit in the previous conversation, right?
Starting point is 00:30:06 Public versus private. People are going to use public LLMs. And, you know, there should be some guardrails in terms of how you use it, how you prompt it, to avoid any IP issues or PII oversharing, right? That's one thing there. And those guidelines can be both from just a guideline perspective, for people to follow, but also they could be enforced or curated, or there can be software that can help redact stuff as well, to make sure that you get a high-quality prompt for what you need but without giving away too much.
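A minimal sketch of the redaction guardrail Leandro mentions, assuming a few illustrative regexes stand in for a real PII and secrets detector: scrub obvious identifiers from a prompt before it ever leaves the company for a public LLM:

```python
import re

# Minimal prompt-redaction guardrail. A real deployment would use a proper
# PII/secrets detector; these regexes are illustrative only.
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "phone": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def redact(prompt: str) -> str:
    """Replace anything matching a known pattern with a labeled placeholder."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[{label} redacted]", prompt)
    return prompt

raw = "Draft an apology to jane.doe@example.com, whose SSN 123-45-6789 was exposed."
print(redact(raw))
# Draft an apology to [email redacted], whose SSN [ssn redacted] was exposed.
```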
Starting point is 00:30:45 And then the other thing is leveraging other LLMs privately, where they can interpret your data, but they don't consume it for learning. They just look at it and interpret it. Yeah, that's a good point. I don't think... most people aren't going to be building foundation models. Most people are going to be augmenting existing foundation models
Starting point is 00:30:59 by using RAG or maybe some form of fine-tuning. But the notion of a private LLM, I think it does solve potentially some types of problems. But really, what it gives you is model isolation, which might be good enough for whatever your use case is, but it's a little bit like setting up a security perimeter around your infrastructure. Once somebody is sort of within that, they have full access. So I think the limitation around a private LLM is that it doesn't really solve the problem of who sees what, when, where. I can't essentially govern access to the responses of the model differently based on who you are. So a customer support person should probably see a different type of response, or have a different level of access, than somebody in marketing or maybe the CEO of the company. And I think that is the central challenge that companies training things on customer data or employee data don't really have a great solution for.
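A sketch of the "who sees what" gap Sean describes, with hypothetical roles and documents and no real model call: instead of relying on model isolation, enforce access control on the retrieved context before it ever reaches the prompt, so the model can only say what the requesting role is allowed to see:

```python
# Sketch of role-governed retrieval: access control is applied to the
# context fed to the model, not to the model itself. Roles, documents,
# and the prompt assembly are all hypothetical placeholders.

DOCS = [
    {"text": "Q3 revenue forecast ...", "roles": {"executive"}},
    {"text": "Refund policy: 30 days ...", "roles": {"support", "marketing", "executive"}},
    {"text": "Upcoming campaign plan ...", "roles": {"marketing", "executive"}},
]

def retrieve_for(role: str) -> list[str]:
    # A real system would also rank by relevance to the query; omitted here.
    return [d["text"] for d in DOCS if role in d["roles"]]

def build_prompt(role: str, query: str) -> str:
    context = "\n".join(retrieve_for(role))
    # A real implementation would send this to an LLM; we just return it.
    return f"Context:\n{context}\n\nQuestion: {query}"

print(build_prompt("support", "What is our refund policy?"))
```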
Starting point is 00:31:37 And I had just, out of curiosity, dug into the terms and conditions of even using the consumer-facing versions of some of these. And I think you both alluded to this: depending on how the model is set up and the tool is set up, it's generative, like, putting in those prompts feeds it. And so the question that I have for all of you, and maybe Rizel, I'll have you start with this one,
Starting point is 00:32:21 is what responsibility do we have as marketers, as advocates, as product owners, in terms of not just what apps we build, but how we educate our developers, how we educate our customers, to make sure that they're using it responsibly? Because it was not necessarily super obvious that if you put a name in, that name, even though it's like one in a gazillion (that is not an accurate number, but one in a very, very large number of data points), now becomes a thing that is part of that data. And so again, kind of putting back on our respective marketing, product, and advocacy hats, how are you all thinking about the resources and materials and documentation
Starting point is 00:33:14 and transparency and discoverability of that, to make sure that people are building the tools responsibly and using the tools responsibly? So, Rizel, curious if you have thoughts on that. Yeah, I actually think that's, like, top priority, just educating people how to build and how to use these responsibly. I think tools like GitHub Copilot, I think this year they launched the Copilot Trust Center. I don't work there anymore, but I thought that was helpful, because it's like videos, documentation, blog posts, et cetera, just different resources explaining how your code gets used.
Starting point is 00:34:02 Depending on what license you're using, like if you're using Copilot for Individuals or for Business. And I think that was put out as a reaction, like a hindsight type of thing, but I think going forward, any company, any product, any developer advocate, they need to put that as their foremost thing. As you're telling people, hey, here's the cool thing that ChatGPT or this LLM can do, also put a disclaimer in there: please don't add in your personal information, please don't add in things that your job is doing that are secretive. I think that's a responsibility of all developer advocates, because our job is not to sell the product. Our job is more, like,
Starting point is 00:34:54 I don't know about product managers, product marketers, but our job as developer advocates is to empower developers. And I don't think it's very empowering if we're like, oh yeah, just use this, but we don't tell them about like the pitfalls or any of the drawbacks. And I feel like I try to do that as much as possible with GitHub Copilot, where I saw people like questioning, how does this thing work? Like, what's really happening with my code? Like, I try to address those as much as possible. And I think other advocates should make that their top priority and other companies as well. Yeah, you actually reminded me of what it's like to learn how to drive a car,
Starting point is 00:35:31 where yes, you learn how to accelerate, you learn how to put on the brakes. If you're old school like me, you actually learn how to shift a car, you know, you learn those things, which are the mechanics of making something move, but you're not allowed to get your license purely because you understand the mechanics. You need to understand the rules of traffic. You need to understand safety. You need to understand speed limits and things like that. And so I think it's something similar here that it's not just learning how to drive the car, but it's learning how to drive the car safely, which I think is really good. And as a developer marketer, and I'll put it over to Sean in a second, I think if developer marketing is done correctly,
Starting point is 00:36:12 we're really just amplifying the work that advocates are doing and making sure that it's discoverable and that it's targeted and everything like that. But it's really led from that same place of empathy and empowerment. So I would actually agree a lot with that: to lead with a message, it's not just about the promise of what's possible, but the promise of what's possible and that you'll be able to do it in a really, really responsible way, ideally. And Sean is the other recovering marketer on the call. I'm curious how you see this. Yeah, I mean, I think that there's not that much difference between sort of modern approaches to marketing and what Rizel spoke about in terms of the responsibilities of a developer advocate to empower developers. Marketing is really about building relationships at scale. And the only way to build relationships with people is to
Starting point is 00:37:11 give and take, and to be essentially honest and transparent about, you know, what you can or cannot do or provide to somebody. Like, essentially, you could, I don't know, smokescreen-market people and convince them to buy your product, but if you sell them, you know, a bill of goods that's not the reality, they're going to churn as a customer anyway. So, you know, what value is that? So you might as well be up front about what you can and can't do. And I think that is very, very important when we're talking about these AI systems. People are going to figure it out if you're lying to them.
Starting point is 00:37:53 And that's not going to be a great way to build a relationship. And it's probably not going to lead to great business outcomes in the long run. I'd like to quickly add to that. Katie, I really like your analogy about driving a car. And one of the phrases that I live by is making the implicit explicit. I think that you were all touching on this, but the reality is that context matters, right? Putting the AI tool in context, putting driving in context, and understanding the consequences of what you're doing, in terms of your actions while driving or what you're building; understanding that context matters is going to help people use their common
Starting point is 00:38:30 sense to decide how they want to use things. There's also the issue, like, Rizel, as you were talking about with Copilot, of using AI for code: you know, the code that is built with the aid of an AI, who owns that, right? I mean, there's some interesting IP challenges there and, you know, things can be structured in slightly different ways, right? So I think that's another field for opportunities there. Yeah, and I think just in terms of the context, I think that's really important to consider in terms of how careful you need to be
Starting point is 00:39:04 using these systems as well. Like, if your AI system is, I don't know, to assist somebody to generate, I don't know, a better, more compelling tweet, the consequences of an error happening with that, or, you know, a hallucination happening, are fairly, you know, inconsequential. But if you're using an LLM to diagnose somebody with a disease, or you're using AI to autonomously drive a vehicle, the bar for precision and error rate is different. It's much, much higher; it's higher than what we would expect of a human, I think, as well. So the context really matters in terms of what you're actually producing and how much sort of due diligence you need to do around the quality of the output.
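One way to encode the due-diligence point Sean is making, sketched in Python with hypothetical risk tiers and tasks: let low-stakes generations ship directly, and hold anything high-stakes for human review:

```python
# Sketch of risk-tiered handling of model output. The tiers and tasks are
# hypothetical; the idea is that review requirements scale with the
# consequences of a hallucination.

RISK_TIERS = {
    "draft_tweet": "low",       # an error is embarrassing at worst
    "medical_summary": "high",  # an error can cause real harm
}

def deliver(task: str, output: str) -> str:
    tier = RISK_TIERS.get(task, "high")  # unknown tasks default to caution
    if tier == "low":
        return output  # ship directly
    return f"HELD FOR HUMAN REVIEW: {output}"  # an expert signs off first

print(deliver("draft_tweet", "Excited to share our new release!"))
print(deliver("medical_summary", "Patient presents with ..."))
```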
Starting point is 00:39:59 So, what kind of started this conversation? And I recognize our time is going to come to a close soon, so I'm going to give everyone kind of time for any parting reflections and thoughts. But what really spurred on this topic was that very Portlandia-esque feeling that, you know, starting in kind of February, March last year, it really felt like everybody was just putting a GPT on it. And it was that sense of, is there something there? And I think what we've really kind of all alluded to is we're very much in that fast-moving innovation phase. And while there are actually some regulations and guardrails being discussed along with it, which is moving much faster than it did for technological innovation in the past, it is in many ways,
Starting point is 00:40:45 kind of in that research and development phase, and that, especially at the enterprise level, what it actually means, especially when you are getting into private data, personally identifiable information, IP, and so forth, isn't necessarily moving quite as quickly as the research and development that companies are doing. But, you know, we're coming up to the close of this particular calendar year. It's amazing to
Starting point is 00:41:14 think that we only have, oh my gosh, I have to do the math now, is it like 33, 34 days left in this calendar year? Just curious, I want you all to look into your crystal balls and tea leaves. Do you think that kind of everything-GPT momentum, in terms of what people are talking about at events, how they're presenting demos, how they're naming things, how they're naming their companies and products, do you see that momentum continuing? Or do you think that we're, Leandro, to your point, like, are we kind of getting to a leveling-out phase in terms of that biological or technological development, where it will still be moving fast, but maybe not as fast as it was over the last 12 months? Or are we still going to see, like, everything GPT for the next, you know, foreseeable future?
Starting point is 00:42:04 I can quickly react to that. And I do think that it's like the second one that you said, right? I mean, I think it will still move fast, but it will not be everything with GPT on it. Got it. Rizel, what are you thinking? Yeah, I just wanted to second that. I think this is a common thing with technology. Everyone gets excited about a particular tool and they're all using it. And then after that, we're like, all right, let's get serious. What's actually working, what's not? And then some people find, like, other trends to stick on to. So I do think it'll start to level out. We'll still move fast, but we'll start to make more fortified models and tools. Great. And Sean, you want to bring us home on that question? So I think that we're still a little bit in the hype train of generative AI.
Starting point is 00:42:54 We're still in the pets.com bubble a little bit. And I think we still have some time before we get to the next stage, which is the trough of disillusionment. And then finally, the real work starts. So I think we're probably, you know, maybe a year, although things are moving so fast that maybe I'm overestimating the timeline here. But I think we're still, you know, six to 12 months away from getting to a point where the real work starts. And we go beyond just people thinking of sort of the baseline LLM use case of, hey, we're going to add a chatbot into this thing. I think we're going to see a lot more real work that comes out of this in the near future, where people are really thinking
Starting point is 00:43:38 from an AI-first mindset of how do we, like, reinvent this type of product. So in terms of closing thoughts, again, we've covered, between the DevRelx session, which, you know, we can hopefully link to in this post when it goes out, so if anybody missed that, they can definitely go take a listen, it's on YouTube, and then this one, we covered a huge amount of ground: really what it means, what this era of everything GPT means, in terms of how developers approach their work as developers, doing their day-to-day job, as well as what they're really being compelled to build from a business strategy perspective. We've talked about ethics, we've talked about privacy, we've looked at it through the lens of each of our particular roles, and been able to have both breadth and depth in that conversation. And so as we conclude, this is actually a question that I love to ask job candidates, which is: is there anything that I didn't ask that you're like, that's the thing that I definitely wanted
Starting point is 00:44:45 to make sure to share and hit home with? So that's the concluding question that I'm going to leave you all with today. And last time I went Rizel, Leandro, Sean. So this time we can go Sean, Leandro, Rizel. So Sean, is there anything I didn't ask that you're like, this is the point that I definitely want to make sure hits home in this everything GPT conversation? I would love to get people's take on, I guess, whether we feel there's a real existential risk to the future of AI. I talked to Bob Muglia recently about his book; he's the former CEO of Snowflake, a long
Starting point is 00:45:28 time executive at Microsoft. And he's predicting essentially artificial general intelligence by 2030, which is coming up fast. And we could be wrong about these types of things, but at some point, potentially in our lifetime, we could reach
Starting point is 00:45:44 superintelligence. And what does that essentially mean for humanity? And I think these are kind of important questions. Like I said at the beginning, I'm an optimist, so I'm not a doomsdayer. But I'm curious to hear what people's thoughts are on the potential existential risk of AI and potentially robotics. That's a big one. I'll bring that one home, but on to Leandro. Yeah, that one is a loaded one. Well, I hope we see more of a positive future versus Skynet, right? And that's what I see on that one. But we can expand. We can have a whole other episode on that.
Starting point is 00:46:18 But going back to Katie's question, and putting a product hat on: in terms of use cases, I do think that sometimes the best technology is the one that is kind of invisible but is doing its job. So I could see AI doing a lot of stuff in the background to help us have a better life. For example, using AI to filter all the spam that we get, you know, from text messages to email. I see a lot of different unique use cases where AI can run in the background and make our lives better. And Rizel? Yeah, go for it. I like that idea, Leandro.
Starting point is 00:46:57 Okay. I am a big fan of AI, but the idea of artificial general intelligence scares me just a little bit. I do like using AI for small things, like what Leandro said. And I really, I know maybe people are not happy with the stage that we're at, but I like the stage that we're at, where AI is just helping people to code, or helping them to think of, like, oh, how can I reframe this blog post, or give me ideas, or something like that. I really feel like I like the stage that AI is at, where it's like an ideas tool, a brainstorming tool, something that gets you from one point to the next.
Starting point is 00:47:38 But I don't really want to leverage AI to the point that it's doing too much. I don't know how else to pour my thoughts out here. And I think that what you may feel is wishy-washiness is kind of how I feel as well. And that's how I was going to bring it home. And I think the question that I had asked in the last one is, are we heading more towards Vonnegut's Player Piano? Are we heading more towards the Mandalorian's Plazir-15 planet? Which is, is it going to be something where humans essentially become non-essential? Or is it going to be that it actually allows us to live our best lives, because we actually can live these robust, healthy lives of leisure?
Starting point is 00:48:26 And I think as I listened back last time, we were all kind of somewhere in the middle. But the thing that really struck me, and Rizel, this is perhaps what you were talking about, is the benevolent uses of it, or the uses that may sound really, really positive but can very quickly become stereotypical and biased. And this was that idea that it's really nice if you're trying to create a tweet, but if you're really trying to do something much larger, it can become really damaging if there isn't that human, like actual human, overlay and investment in it. And especially in this era of misinformation and so forth, like, how may governments and politicians and so forth actually manipulate information? Which is something that does give me a little bit of pause: how do we really cut through the noise and understand what is real and what is not real? At the same time, though, you know, it's really interesting. I just
Starting point is 00:49:39 got back from a week in Disney World and went to Epcot Center for the first time since 1995. And when I was there in 1995, it still had the different exhibits, you know, that were made in like the 1960s, that predicted what the future would be like in the 1980s. And, you know, it was very hopeful and optimistic. It was like, we would be able to live underwater, and we would be able to talk to each other through screens and things like that. And some of those things have come to pass, some of them are possible, but we haven't necessarily prioritized them. So I also like to think, if Epcot were to redo it and say, where are we now and what do we predict that we'll be able to do in the next 20 to 30 years? And how does technology allow us to do those things?
Starting point is 00:50:33 There's also a really idealistic and hopeful lens that we can take, that we can look to as well, in the role that all of these things can play. So I think that if there weren't the guardrails and the ethical conversations and the regulations happening, I would have more heebie-jeebies than I do right now. But again, I'm gonna go with a little bit more hopefulness and idealism. And so that's kind of what we'll leave with. Any closing thoughts, folks? This was a very fun conversation, but want to make sure that we all got our ideas out there. So anything else folks want to end with? Or are we good? I want to end with something very quickly. I do think that a lot of these AI tools do provide, and I say this all the time, they provide psychological safety, especially for folks who are more junior in their roles. And I think when people hear me say this, they assume automatically that I'm junior.
Starting point is 00:51:36 It's not that I'm junior, but I do often think about that experience I had as a junior developer. Especially since I was, like, the only Black girl on my team, it was super scary to ask for help, or super scary to just discuss certain ideas I had. And I think using tools like ChatGPT, GitHub Copilot, all the other LLMs or LLM-powered tools out there, it provides a superpower for folks so that they can be like, all right, cool, I'm going in the right direction. I've been able to just verify and validate that this idea makes sense. And now I can feel more comfortable going towards a coworker or something and being like, hey, here's how I want to write this test, here's how I want to structure this database, or something like that. Because you were able to talk back and
Starting point is 00:52:29 forth with something that's not necessarily going to judge you. So I will hope to see that part of the use of AI just continue to expand. And I know a lot of people are like, oh, it's not great for juniors because that reduces their intelligence, not intelligence, but their experience or the knowledge that they'll build up with code. But I strongly disagree with that, because I think if they're intentional and they learn how to use these tools well, like if someone teaches them and they have the fundamental knowledge of coding, they can use that to just continue to build up on their coding knowledge. That's all I have. Great. Thank you. That actually brings things together with a really pretty bow, because again, the spirit of this was how we're all thinking about it through our developer program lens, marketing, advocacy, and product, and how we're really thinking about our end users' experience, our customers' experience, our developers' experience.
Starting point is 00:53:36 And that really brings it home with a really thoughtful, empathetic, and hopeful lens. So thank you so, so much. Sean, I'm actually going to throw it back over to you to bring us home with this episode of the podcast. I have to say thank you so much for the opportunity to guest host. I hope I held the reins well. But if there's anything that you want to leave your listeners with, I'll let you wrap it up. So back over to you. Yeah, thanks so much. Well, thank you, Rizel, Leandro, Katie for doing this. And also, Katie, you did a fantastic job. You know, I think I was kind of hoping that you would, you know, maybe make it look harder, so that it would make me look better. But I might have to just give up my hosting duties to you full-time at this point. But yeah, thanks so much. And I think,
Starting point is 00:54:30 Rizel, I really, thanks for chiming in on that point at the end. I think that's a great, great example of the value of some of these systems early on, because copying and pasting code from various sources is not something new. People have been using Stack Overflow for years to do that. Even before that, there were other ways of essentially copying code out of even a book 20 years ago that people didn't necessarily understand. Now, when you copy and paste code, you can actually ask questions to the AI to explain what's going on. So you can learn more, whereas you might not want to ask that question publicly on Stack Overflow because you don't want to look like an idiot and get attacked or something. So I think the psychological safety angle is a really, really great point to make. And I think it's a wonderful
Starting point is 00:55:16 use case for AI, even beyond coding: having essentially a place where you can ask any type of question that you want without having to feel judged. Yes, agreed. Thank you so much, everybody, for joining, and cheers. Awesome. Thank you. Thank you for having us.
