The Changelog: Software Development, Open Source - Microsoft is all-in on AI: Part 1 (Interview)

Episode Date: May 30, 2024

Scott Guthrie joins the show this week from Microsoft Build 2024 to discuss Microsoft being all-in on AI. From Copilot, to Azure AI and Prompty, to their developer-first focus, leading GitHub, VS Code... being the long bet that paid off, to the future of a doctor's bedside manner assisted with AI. Microsoft is all-in on AI, and Build 2024's discussions and announcements prove it.

Transcript
Starting point is 00:00:00 What's up, welcome back. This is the Changelog. We feature the hackers, the leaders, and the innovators of software and, of course, AI. Today, Jared and I are on location at Microsoft Build 2024 in Seattle, Washington. And we're talking to Scott Guthrie, Executive Vice President of Microsoft. And Microsoft is all in on AI. The entire Build conference was all about how artificial intelligence, Copilot, and everything was being reinvented by AI. But what a treat it was to talk to Scott Guthrie,
Starting point is 00:01:02 such a cool, iconic guy in our industry. And he even stuck around for about 20 minutes just chatting with me and Jared after the mics were off. A massive thank you to our friends and our partners at Fly.io. That's the home of changelog.com. Launch your apps, launch your databases, and launch your AI on fly.io. All right, Homelab friends out there. I know that you run cron jobs constantly inside your home lab, doing different things. Who the heck knows what you do with them?
Starting point is 00:01:46 I know what I do with mine. And I love to use Cronitor to monitor my crons. And it's just amazing. Cronitor began with a simple mission to build the monitoring tools that we all need as developers. 10 years later, they've never forgotten the magic of building and they honor the true hacker spirit with a simple flat price for the essential monitoring you need at home.
Starting point is 00:02:09 So I've been working closely with Chronitor. Shane over there is amazing. They have an amazing team. They love software developers. And I was like, you know what? I would love it if you can do a Homelab price because I don't want to pay a lot of money for monitoring my chron jobs. I just don't want to pay $20 or $ for monitoring my cron jobs. I just don't want to pay 20 bucks or 30 bucks or some crazy number for my home lab. It's just my home lab,
Starting point is 00:02:30 right? But what I can tell you is they have a free hacker version of chronitor, five monitors, email, slack alerts, basic status page, anything you need on that front. And then you can bump it up if you have bigger needs. So if you have a lot of cron jobs behind the scenes inside your home lab, you can bump up to the home lab plan. 10 bucks a month, you get 30 cron jobs and website monitors, 5 alert integrations, 12 months of data retention, and just so much. So much if you really want it. I love Cronitor. I use it every single day to monitor my cron jobs. Everything I do inside my Homelab has some sort
Starting point is 00:03:05 of cron monitoring, managing, updating, and I use Cronitor to monitor it all. And it's amazing. Go to cronitor.io slash Homelab and learn more. Again, they have a free plan that you can ride or die or the Homelab plan if you want to bump it up and you have more needs for 10 bucks a month. Once again, cronitor.io slash homelab. Well, we're here with the, I think, legendary Scott Guthrie in his legendary red shirt. Fresh off a keynote. Day two at Build. You just finished your keynote. Very good.
Starting point is 00:03:59 Curious. I mean, you're so cool, calm, and collected up there. I'm sure that's from years of experience. Do you still get any nerves at all when you do these keynotes? Because there's thousands, is it thousands of people? I think it's probably a few thousand. So that's a lot of people listening to every word. Any nerves at all or is it just like old hat now?
Starting point is 00:04:13 I still get, I don't necessarily get nerves about being on stage. I think the biggest thing that, you know, one of the things that I like to try to do is live demos in my keynotes. And so with live demos, there's always a little bit of, you know, hope the network works. I hope nothing goes wrong. Nothing goes wrong. And, you know, it is live. And so sometimes people do click the wrong things or accidentally close a window. And so, you know, that's the, that's the only part in the keynote where I sometimes hold my breath a little bit. Right. And, um, you know, thankfully today that both the live demos were super
Starting point is 00:04:46 smooth and all the speakers were awesome. It's fun. A little bit of adrenaline makes it fun. If it was no stress, then it might be a little boring. You don't want to be too comfortable up there. One thing I noticed about these keynotes is that it's all
Starting point is 00:05:01 very orchestrated. I would be nervous not having a clicker to just advance to the next slide. Like, you know, someone's in charge of that. So it's a teamwork thing, but then you're relying on somebody else to like transition at the right time and you don't want to talk to. So those are the things that would make me nervous up there. I'm sure it's always a concern for everybody. Yeah. It's only in the last year that I've given up my clicker. It's always been comforting to have in my hand. But when you have animated slides that show architecture, getting your talk track with the clicks synchronized,
Starting point is 00:05:32 sometimes I get so excited I'll talk and then I'll forget to have clicked. And so in the last, early the last year, I think, I've kind of done the trust fall and I now have someone backstage that clicks next on the slides for me. And that does make it a little easier for me, but it means I can kind of focus a little bit more on what I'm talking about. It looks more polished, too. Like, the execution looks a lot smoother, like as if it's, like Jared said, orchestrated. A well-oiled machine.
Starting point is 00:05:58 Yeah, very polished. The question I think, though, having been through this is day two, is the word AI being mentioned pretty much every slide. Are you excited about keeping to say AI? Are you kind of tired of saying AI? What is your personal perspective on AI at this moment of having to say it so much? I like saying it. Yeah? Okay.
Starting point is 00:06:24 I think both because for a both because, you know, for a lot of developers, it's still somewhat new. And I think also for the world and the industry at large, you know, we're still fully comprehending all the use cases for how we can use AI. And I think, you know, it's a bit like when the internet first came out or, you know, mobile phones, you know, suddenly became connected and we had the iPhone moment. People are just kind of still in that, could we do this phase? And I think that's sort of exciting because it is very much a platform shift. And we're still in the early innings of kind of even understanding the art of the possible.
Starting point is 00:07:02 And then everyone's trying to figure out how to make the possible happen. Well, even in all the, thus far at least, it's been more of skill stacking where Azure has matured. And you're obviously still advancing the platform. It is the supercomputer for AI for the world, as you say in your keynote. But all the discussion has been around integrations of AI,
Starting point is 00:07:21 how to essentially give this superpower to all of Azure, all of Copilot, all of the Copilot agents that you can build yourself, which I think is different than the last builds we've been to. We've been to one build before this, right here in this very city. And I think even then it was around loving GitHub. I think the acquisition was still fresh. The last time I was here was it 2018, 2019? Something like that. Ancient history. Yeah, almost ancient history basically. But now it's about the integration of AI.
Starting point is 00:07:48 So you have to be excited because that is what the next layer of functionality is on top of the supercomputer, is the AI that you can not only leverage yourself as a platform, but give those feature sets to others to leverage as well. Yeah, I know. It's been a great ride with GitHub.
Starting point is 00:08:04 And one of the things we talked about when we acquired GitHub was recognizing the responsibility we had, which is GitHub is a very loved developer tool. It is the home of open source. Wearing the shirt, too. The hoodie is your GitHub hoodie.
Starting point is 00:08:19 I got my sweatshirt on. And we recognize when we acquired GitHub that there's a lot of responsibility. It is open source. It is open platform. It needs to support multiple clouds. And I think we've tried to be good stewards,
Starting point is 00:08:38 similar to what we've done with VS Code and similar to what we've done with.NET, of recognizing that we're going to maintain the openness, we're going to maintain the open source, and we're going to also do good integrations, both in terms of across our products, but have it be an open platform. And so even today, or this week,
Starting point is 00:08:58 we talked about the open ecosystem around plugins to get a co-pilot. And so we announced a whole bunch of integrations with I think 20 other platform companies and tools vendors to make sure that we also just keep honoring that spirit of making sure that GitHub is truly open and truly developer first in terms of the methodology.
Starting point is 00:09:23 For AI, one of the things that we showed today that I'm also excited about is, how do you use GitHub with AI? And obviously, some of that is for the developer with GitHub Copilot. But I think also the new support for what we call Prompti, which is an open source library that we're embracing with our Azure AI platform,
Starting point is 00:09:42 so that you can basically check in artifacts into your source code, which basically describes your prompt that integrates with the model that can be versioned, that can be source controlled, that can be unit tested and is very developer friendly. And, you know, that's a change even from our last build a year ago, where we showed very much using our Azure AI tools to do all the development, which were incredibly powerful. But for a developer, it's like, well, how do I write code?
Starting point is 00:10:11 How do I version control? How do I do CICD? And part of what we showed this year is, how do you still use those great Azure AI capabilities, but also how do you have a very developer-first, developer-focused experience, which ultimately has text files, source code. Right.
Starting point is 00:10:27 You know, again, source control integration and CICD native. Raw materials. Promptly, it's a file format, right? Yeah, it's a YAML-based file format. And, you know, basically it allows you to kind of instantiate a connection to AI and to a
Starting point is 00:10:46 language model. And, you know, if you use an LLM, typically you're, you're providing prompts to the LLM. I mean, you certainly can concatenate that as a giant string as a developer,
Starting point is 00:10:58 but the nice thing about the prompty file is it is a readable, it's readable. And again, it's, you could check it into source control and you can write unit tests against it. And again, you can check it into source control. Yeah, exactly. And you can write unit tests against it. And so what we're trying to do is kind of provide... Primitives. Primitives, exactly, in an open source way.
Starting point is 00:11:13 And so it is an open source library. It's nothing exclusive to us. But you're seeing us sort of integrate GitHub with PromptD, with Visual Studio Code, and then making it work first class with our Azure AI platform. And I think that the combination is very powerful for developers and very natural for developers.
Starting point is 00:11:30 Yeah. I mean, you guys are trailblazing for sure. There's so much to figure out here. And certainly, if Azure can figure it out really, really well, it makes Azure such an attractive platform for people to build on. You showed a lot of people that are using it,
Starting point is 00:11:45 30,000 companies I think was referenced that were starting to do these things to build their own co-pilots and stuff. And then I was out in the lunch area during lunch talking to a couple people like, hey, are you using this stuff? And everybody says no. Because I just feel like there's almost one percenters
Starting point is 00:12:01 at this point of the trailblazers and the really, really bleeding edge people. And then the rest of us common devs are just like, no, I'm not doing anything. What do you say to the 99%, just making up that number, the people who haven't adopted anything? There's so many questions,
Starting point is 00:12:16 so many potential pitfalls, don't know where to get started, et cetera. What do you say to those folks about how they can bridge that gap? I think one of the things that's interesting about AI, and I think this is similar to kind of, again, going back to whether it's the internet or whether it's the iPhone revolution, is it has entered the zeitgeist of people. And so even though there's lots of developers that haven't built an AI application, I think it'd be hard to find a developer on the planet
Starting point is 00:12:45 who hasn't tried ChatGPT, who hasn't tried some generative AI app. And increasingly, we're seeing, you know, just millions of developers that are using GitHub Copilot as part of their day-to-day activity. So I think generative AI is something that we're all, most developers now are using in some way, shape, or form, or certainly at least tried.
Starting point is 00:13:04 And to your point, I think where we are is still in the early innings. It's sort of 12 to 18 months, or 18 months since ChatGPT was unveiled. And so, you know, people are still trying to figure out like, okay, how do I actually pivot to be an AI first app? Or how do I incorporate AI into my app? And part of what we try to do with Build is not just talk about the science and the art of the possible, but actually, you know, from a practitioner perspective, here's how you do it. And, you know, the first demo or one of the first demos in my keynote was, you know, showing how you build a customer facing chat experience into an existing website, or it could be a mobile app. And, you know, things like our Copilot Studio are really designed so that you can actually safely and securely build your first app,
Starting point is 00:13:51 integrate it into an existing app, and not have to be a data scientist, not have to understand groundedness. You know, Copilot Studio isn't going to be the tool for every scenario. Right. But if you're looking to kind of build 80% of solutions or 90% of solutions, it's probably good enough out of the box. And then what's nice is it has that no-cliffs extensibility so that you can call any API. And that was sort of the second demo that we did
Starting point is 00:14:18 that was live in the Azure AI section where, you know, we used Python, we used PromptD, we basically grounded a model, could use vector databases, et cetera. And, you know, I'd probably encourage people start, build a simple app using Copilot Studio, and then as a developer, keep going. And, you know, hopefully what we're showing with each of the events that we're doing each conference, it is becoming easier and easier. And ultimately, I do think in the next two years, I think pretty much every organization on the planet is probably going to have a custom AI app that they built. And similar to other platform waves like the web or like smartphones, everyone has one and everyone has built an app for it. And
Starting point is 00:15:08 I think that that's what makes being a developer so exciting is there's always these types of platform waves happening and it's an opportunity to keep learning new tools and become even more valuable. Ride the wave. How far in to the Microsoft stack or the co-pilot stack do you have to be in order to start dipping your toe in the water? Like if you're just a Python dev who maybe uses open source stuff generally, do you have to be like all in on Microsoft to try some of the stuff with you guys? Or how does that work? Not at all, really. I mean, if you look at the demos we showed in the keynote, it's Python with VS Code.
Starting point is 00:15:43 You can run it on a Mac, you can run it on Linux, you can run it on Windows using GitHub. That was the core editing experience. And we did log into an Azure AI subscription, but it could be with any browser or any platform. And you didn't have to be very deep at all in terms of Azure in order to kind of do everything that we showed. And what's nice also about Azure is we have world-class Kubernetes support. And you saw that with our AKS automatic support that we talked about as well in the keynote.
Starting point is 00:16:17 So it's really easy to stand up a Kubernetes cluster. It's really easy to deploy a web app, whether it's Python, Java, Ruby, Node, et cetera. It runs on Linux. So if you're familiar with Linux, great. Our PaaS services work with Linux. We got great Postgres support. We got great MySQL support. We got great Redis support. So it's not, you know, we try to make sure that when you approach Azure, you're not kind of having to learn a different OS or a different tool chain or a different language. And, you know, many of the same kind of core building blocks like Kubernetes, like Postgres, like Linux, like MySQL, we just provide as a service. And, you know, as a result, it should be very easy to approach. And even with our Azure AI platform, we obviously have our OpenAI models in there, which are incredibly powerful and very unique to Azure.
Starting point is 00:17:12 But, you know, we have Mistral, we have Llama, we have Cohere, we have, you know, 1,600 other AI models in our catalog. And we do provide our models not only as models, but also as a service. So you could just say, stand up a Lama or a Mistral model endpoint. Right, hit an API. And hit an API. And you don't even have to think about managing or operating the backend.
Starting point is 00:17:34 That sounds nice, because I always think about that as being some sort of huge pain in the butt. I haven't actually done it yet, because I'd like to just hit the API. But that sounds like a lot of work. Certainly there's people who like that work and are good at that work.
Starting point is 00:17:44 But there's those of us who work and are good at that work. But there's those of us who aren't. Just a side note, how instrumental has VS Code become? I remember hearing the story of how VS Code began inside of Microsoft years ago. 2017. Yeah, we did a show on that.
Starting point is 00:17:57 It had meager beginnings, as many things do. But wow, the success of it, and now it is kind of the foot in the door to a lot of what you guys are doing. That's just pretty amazing, isn't it? It's been a fun ride. I mean, it's- Did you see that coming?
Starting point is 00:18:12 It's like long bets paid off. It's long bets, yeah. Seriously. And I would say there are certain things that we've done where I had high expectations, but VS Code is sort of taken far... All the cake. 10x higher than my highest expectation.
Starting point is 00:18:28 Yeah, seriously. What a success. And I think with developers, developers don't like to be marketed to. You've got to earn developer trust and earn developer love. And I do think with VS Code, the team really embraced that ethos
Starting point is 00:18:44 of really being opinionated and really focusing on that developer love. And candidly, without the VS Code project, I don't think we could have done the GitHub acquisition. In some ways, it was the GitHub team looking at VS Code. And at the time, GitHub even had their own editor called Adam, if you remember. Oh, yeah. And I remember the GitHub CEO saying, like, I use VS Code.
Starting point is 00:19:12 And he's like, I love it. And that was a key part of even sort of showing Microsoft is very much a developer-focused company. And a lot of people don't remember, and Satya mentioned it yesterday, 50 years ago, we were formed as a developer tools company. Our very first product was a developer tool,
Starting point is 00:19:32 which was Microsoft Basic. Not Visual Basic, not Quick Basic, but actually the original Basic. And so that developer tools ethos and that developer focus has very much been from the very beginning of the company yeah that's so interesting i would say that vs code definitely was was instrumental in
Starting point is 00:19:51 changing my mind about microsoft because i when i go back to my youth i was very much like an evil empire guy no offense but was like anti and i could see from our perspective, the change in attitude writ large at Microsoft towards the open source world. And it made sense with Azure and everything you guys are doing. In retrospect, it makes total sense, but it was still kind of like side-eyeing everything. And then VS Code was like, no, this is legit.
Starting point is 00:20:20 Very impressive, great tool. And now it's your foot in the door to all of this new AI functionality, which otherwise is very unapproachable, I think. And so just a cool story and such a success. We've been seeing the thing run inside of VS Code to all the integrations, like seeing it run in the tool
Starting point is 00:20:40 is just, it's accessible to anybody. Anybody can go and install it. It's just like one install away, basically, one service away. So the black box of AI is a lot more accessible to common devs these days, I think, with VS Code for sure. And I like the idea of a long bet paid off.
Starting point is 00:20:58 I'm sure somewhere along the line, it was like, yes, this is a good idea, not this is the best idea ever. And over time, it's become the best idea ever. That's when those download numbers took off. That's when they're like, okay, let's double down on this thing. Right. And it's, you know, you kind of,
Starting point is 00:21:14 yeah, I think there's lots of companies, you know, Microsoft included in the early days, which would sort of say they loved open source, they would sponsor events, you know, they'd put out marketing. But, you know, again, if you really want to prove your open source credentials, would sponsor events. They'd put out marketing. But again, if you really want to prove your open source credentials, you kind of need to show it and do it. It's not about telling. It's about doing. And I think a lot of kudos even to the VS Code team. I mean, in the
Starting point is 00:21:38 early days, there was sort of maniacal focus on performance, responsiveness, and really focusing on being an amazing code editor. There were lots of, you know, we had obviously Visual Studio ID, which we still love. And, you know, it had lots of designers and kind of a gooey at times first attitude. Yeah. And, you know, the VS Code team, you know, basically said, no, like, we're only going to put things in VS Code that are really focused on Code Optimized Editor. And we're not going to lose that ethos. We're not going to create a project system. We're really going to be optimized around a very opinionated perspective.
Starting point is 00:22:18 And I think a lot of kudos. That's partly why it's such a loved tool is at its core, it's still a very lean, efficient, fast-performing system. And you can opt in to add extensions, but we don't come with like a thousand extensions out of the box. And that's going to be true as well with AI. But I think even today showing the prompty support, the fact that you can download the file, you do get IntelliSense, you do get colorization,
Starting point is 00:22:44 you can run your prompt locally now and set breakpoints inside VS Code and debug it. And it works with every model. It's not just, you know, we showed OpenAI today, but it'll work with Mistral, it'll work with Lama. And, you know, the prompty library is open source. And so, you know, just like having, again, that integration, you know, I'm hoping that we actually,
Starting point is 00:23:09 you know, similar to the approach we took with ES Code is really speak to developers and really build something that developers love and want to use. It's not about marketing. It's not about keynote demos. At the end of the day, it'll be about, are we driving real usage because it solves a problem
Starting point is 00:23:26 that developers need a solution for. I couldn't help but think about Clicky or Clippy though. Clicky. Clippy. Oh, yeah. You know, Pompty. It's a little different.
Starting point is 00:23:39 It's definitely a nod. They both end in Y. That's true. It's a nod of sorts I can imagine. Somewhere along the way, the name resonated. What's up, friends? This episode is brought to you by our friends at Neon.
Starting point is 00:24:14 Managed serverless Postgres is exciting. We're excited. We think it's the future. And I'm here with Nikita Shamganov, co-founder and CEO of Neon. So, Nikita, what is it like to be building the future? Well, I have a flurry of feelings about it. Coming from the fact that I have been at it for a while, there's more confidence in terms of what the North Star is. And there is a lot more excitement
Starting point is 00:24:35 because I truly believe that this is what's going to be the future. And that future needs to be built. And it's very exciting to build the future. And I think this is an opportunity for this moment in time. We have just the technology for it and the urgency is required to be able to seize on that opportunity. So we're obviously pretty excited about Neon and Postgres and Managed Postgres and Serverless Postgres and data branching and all the fun stuff. And it's one
Starting point is 00:25:05 thing to be building for the future. And it's another to actually have the response from the community. What's been going on? What's the reaction like? We are lately onboarding close to 2,500 databases a day. That's more than one database a minute of somebody in the world coming to Nian either directly or through the help of our partners. And they're able to experience what it feels like to program against a database that looks like a URL and then program against a database that can support branching and be like a good buddy for you in the software development lifecycle. So that's exciting. And while that's exciting, the urgency at Neon currently is unparalleled.
Starting point is 00:25:44 There you go. If you want to experience the future, go to neon.tech. On-demand scalability, bottomless storage, database branching, everything you want for the Postgres of the future. Once again, neon.tech. What's up, friends? Got a question for you. How do you choose which internet service provider to use?
Starting point is 00:26:05 I think the sad thing is that most of us, almost all of us really, have very little choice. Because ISPs operate like monopolies in the regions they serve. I've got one choice in my town. They then use this monopoly power to take advantage of customers. They do data caps. They have streaming throttles. And the list just goes on. But worst of all, many internet ISPs log your internet activity and they sell that data on to other big tech companies or worse, to advertisers. And so to prevent ISPs from seeing
Starting point is 00:26:38 my internet activity, I tried out ExpressVPN on a few devices, and now I use it to protect that internet activity from going off to the bad guys. So what is ExpressVPN? It's a simple app for your computer or your smartphone that encrypts all your network traffic and tunnels it through a secure VPN server. So your ISP cannot see any of your activity. Just think about how much of your life is on the internet, right? Like sadly, everything we do as devs and technologists, you watch a video on YouTube, you send a message to a friend, you go on to X slash Twitter or the dreaded LinkedIn or whatever you're doing out there. This all gets tracked by the ISPs and other tech giants who then sell your information for profit.
Starting point is 00:27:21 And that's the reason why I recommend you trying out ExpressVPN as one of the best ways to hide your online activity from your ISP. You just download the app. You tap one button on your device and you're protected. It's kind of simple, really. And ExpressVPN does all of this without slowing down your connection. That's why it's rated the number one VPN service by CNET. So do yourself a favor.
Starting point is 00:27:44 Stop handing over your personal data to ISPs and other tech giants who mine your activity and sell it off to whomever. Protect yourself with a VPN that I trust to keep myself private. Visit ExpressVPN.com slash changelog. That's E-X-P-R-E-S-S-V-P-N.com slash changelog. And you get three extra months free by using our link. Again, go to expressvpn.com slash changelog. One thing you said early on in your keynote, you said every app will be reinvented with AI. And you also said the data is the fuel to enable this.
Starting point is 00:28:40 Can you talk about this also having this world's most loved developer tooling, GitHub, VS Code, we've been talking about that. How do you think in the next maybe year, I guess, to next build, will AI be reinventing applications? How will we be doing that? Is it just the agents? Is it co-pilot? What are some of your thoughts on that? I think the thing that you're going to see over the next year or two even is I think you're going to continue to see kind of the AI use cases inside applications evolve. We have lots of scenarios, and we're doing it at Microsoft, where you have kind of a co-pilot experience inside your existing tool. And I think there's going to be an awful lot of that over the next year. And that's a very logical way that you can start to integrate AI
Starting point is 00:29:25 conversation scenarios with natural language into existing workflows, into existing applications that you already have, whether they're web, mobile, client. I do think you're going to start to see a point, and it's probably starting now, but I think you'll see it even more of the next year or two, where the model will invert a little bit, where instead of starting in an existing environment, and there's sort of this co-pilot on the side, you know, the co-pilot becomes the primary environment. And maybe you still go to the other application for some scenarios, but more and more, you're going to be able to use natural language for more and more of the tasks. And I think consumers and users are going to start to expect that. And even if you look at, say, the Copilot demos that we showed a year ago,
Starting point is 00:30:13 the ones we showed six months ago, and then the GitHub Copilot workspace scenarios we showed today, you're starting to see that evolution where instead of, I started my code, I highlight something, I ask it for something in Copilot, you know, you can start to now ask multi-turn style scenarios in natural language and drive that experience. And I think you're going to start to see this application pattern evolve. And as the models get richer, as developers get more comfortable with building these types of AI applications, I think that's going to be one of the big shifts that you'll see. And I think that's, again, not too different from when the web came out or when the iPhone came out. Often these apps started very simple.
Starting point is 00:30:58 The website started very simple. And then as people got comfortable with it, they became richer and richer and richer. And the paradigm shifted from let's replicate what was previously done in Win32 into a web app to, you know, let's actually have an optimized web experience. Same with the iPhone. Let's not just shrink our website. Let's actually think about a native mobile app. And I think you'll see that same evolution with AI as well, where you start to see more and more native AI apps. Yeah. It shifted too. The iPhone with application design, more and more designers went essentially mobile phone first. They began there with their designs, their initial footprint of how an application would work. They expanded to different viewports, obviously. It sounds like
Starting point is 00:31:38 this AI world will, when it reinvents the applications, is not so much just the application, but the way we interface with it. I think we mentioned with Mark yesterday, essentially, how this single pane of glass, this single prompt can sort of be the interface to some degree. And you're saying natural language. I imagine at some point, potentially voice will become ubiquitous. It's already kind of somewhat there. But this interface is no longer, let me go to different panes of windows. Let me go to different things and do things. It's more like, I want to stand up a new cluster and I just want to describe, I need three nodes. I want Kubernetes and I want this stack on there. Like maybe a natural language processing prompt could be simply give me that versus me having to be a dev literally clicking buttons on I want these three nodes
Starting point is 00:32:25 and going and manually doing all these tasks. Is that what you mean by the reinvention of applications is the way we interface and act with them? Yeah, I think as the AI gets richer, you're going to see a lot of these scenarios become more like an agent. So to your point, instead of the dev having to click eight things or type five things in four different files, you can sort of effectively ask the AI, you know, change this number in my website to be highlighted more. And that might then update your CSS. It might update the text. It might update the HTML. And, you know, again, what's nice with GitHub is that, you know, you have source control, you have diffing tools. You know, it's what's nice with GitHub is that, you know, you have source control.
Starting point is 00:33:05 You have diffing tools. You know, it's very natural to be able to see, okay, what were all the changes that just happened? And as a developer, you can review it. You can revert it. You can make different changes. You can commit it. And so I think that model with GitHub and source control works super well with these types of kind of agent-based activities. I think the other thing, and we showed us a little bit in one of the demos today, is you're also going to start to see the paradigm
Starting point is 00:33:31 for AI shift from being synchronous, meaning you type something into, whether it's ChatGPT or a co-pilot, get a response, to more asynchronous, where you can sort of say, okay, I want you to go work on a problem and get back to me. And that get back to me might be immediately, but it might also be five minutes from now, or it could be tomorrow. And as we start to think about activities that we do as developers, hey, cost optimize this,
Starting point is 00:33:59 or come up with recommendations for how I could cost optimize this app that's running in my cloud, it might need to look at the usage for a day or two. It might need to go examine the bill. It might need to hit a couple of different systems. Us being comfortable about having an agent that can go off and do that and then get back to us starts to open up a lot of activities and a lot of scenarios that I think will really change how we work. And it ultimately, you know, if we're successful, give every end user back a whole bunch of time. Because if you think back to what you did the last day and wrote down what you did minute by minute and looked at that log, there's an awful lot of busy work that we all have to do in our lives that hopefully AI will help automate more of
Starting point is 00:34:49 and allow us to actually focus on the things that we enjoy and ultimately give us productivity that improves our lives and the businesses we work for. Is there anything in sci-fi that you point to, like Jarvis or Howl 9000? Is there anything out there in sci-fi that you're like, this is..., you know, how 9,000, is there anything out there in sci-fi that you're like, this is, hopefully it ends up differently than how 9,000. Well, that may be a bad example, but Hey, you can still be like, we like portions of
Starting point is 00:35:15 that. Uh, or even, uh, in Wally, I think Wally was, you know, how 9,000 esque. Right. And it was Sigourney Weaver's voice as the as the voice for the the ai robot that was over this stuff like is there any sci-fi that you personally lean on as just a nerd it's like you know what if we could be more like jarvis jarvis is kind of like hey go analyze you know from iron man go analyze this for for whatever and get back to me that kind of thing is almost what jarvis did except for Jarvis was non-visual.
Starting point is 00:35:46 It was simply a voice. I think back to the Isaac Asimov books that I read as a kid. There's one where I can't remember the name of it, but it was you have the detective who's the detective pair, I think. One of them is a robot
Starting point is 00:36:01 and the other one is a human. I think that ends well. And I do think this ability where we can, again, it's not about replacing, it's about how do we augment our experience and enhance it. I feel like it's enhancement, yeah, to me. For now. Yeah. Well, I think, you know, even if you look at the world today, you know, take the last two or three years where we've had higher inflation. Right. In lots of places, it's very difficult to hire workers because, you know, there's parts of the world where people are retiring. There's more people retiring every year than entering the workforce.
Starting point is 00:36:39 Right. And so I do think the world right now, thankfully, is desperate for productivity. Yeah. think the world right now, thankfully, is desperate for productivity. And if you look at demographic trends, we're going to need more productivity every year because we will have more people retire every year than enter the workforce, probably from this point on in most developed countries. And so some productivity will help us in terms of improved quality of living. And we got to do it responsibly. And I think that was also one of the reasons why in the keynote we showed so much around safety and responsibility. We don't want this to be like HAL 9000. We do want to make sure that every developer thinks of safety and security from the moment they start a project.
Starting point is 00:37:19 It's not after the fact. It's not after you've had an issue. It's like, no, we need to really design this up front because there is bad stuff that you can do with AI as well. And, you know, that's partly why we're kind of building it into our platform and tools and really trying to make sure we raise the consciousness of thinking about this, you know, in the same way that we thought about SQL injection attacks
Starting point is 00:37:40 way back in the day. These are just things that developers need to kind of think about and guard against as they build great applications. It's the kind of stuff that you can't just latch on top after everything's finished. You have to actually build that in at the foundation. It makes total sense that you guys are doing that.
Starting point is 00:37:56 I think as the platform operator, owner, runner, president, I guess is the correct word, you probably geek out with what people are building on your platform, right? Like, of course, you could look at numbers like users and revenue and stuff, but like what you're enabling people to build
Starting point is 00:38:12 probably is exciting to you. We're at the very beginning, like I said earlier, it's burgeoning. A lot of people haven't built stuff, but you showed a lot of companies doing cool stuff. Is there anything in particular or a few examples of like people
Starting point is 00:38:24 who are leveraging, you know, Azure as an AI platform to build cool stuff. Is there anything in particular or a few examples of people who are leveraging Azure as an AI platform to build cool stuff right now that you can share with us? Maybe plant a seed of inspiration for folks. Yeah, there's many, many scenarios. I mean, I think we talked about 50,000 companies using Azure AI already in production today. And so it's always dangerous to pick one
Starting point is 00:38:44 because then you upset 49,999 others. All right. Good disclaimer. Now go ahead. I think some of the stuff I get most inspired by is in healthcare. Okay. Partly because the healthcare industry has just gone through a super difficult time with COVID. You know, the physician burnout is at an all-time high. This is an industry where there are fewer doctors leaving every year than entering the workforce. And doctors and nurses are just tired. And they've gone through a lot the last several years. And unfortunately, in parts of the world, including here in the US, the demands in terms of documentation are very high.
Starting point is 00:39:29 And so part of what doctors don't like is that they like medicine, but spending two hours a night writing up your case notes from the people that you saw during the day. Right. They don't like that. They'd much rather have dinner with their families and unwind. Yeah. And, you know, if we can really add productivity to their lives and take that drudgery away, they have much more fulfilling experiences. And, you know, we're doing some work ourselves with Nuance, which is a company we acquired two years ago. And it's with clinical documentation. And, you know, now you can basically just put, you know, the doctor has a cell phone, ask the patient, is it okay to
Starting point is 00:40:10 record this conversation for my notes? Pushes a button, and then the doctor can have a conversation with the patient, look them in the eye, have empathy, not have to take any notes. And at the end, it'll automatically create not a transcript, but actually a summary that you can save in the EHR of the complete visit. And the doctor can review it, edit, save. And we literally get love notes from doctors who are just like, this has transformed my life. I now see my family. I leave at the end of the day and I don't have work I'm taking home. And we're also, and we showed it in the keynote, working with Epic, who is the leading healthcare provider system. And in the keynote, we showed scenarios where if you use MyChart as an example, in the US,
Starting point is 00:40:58 lots of people probably are familiar with MyChart. A lot of health systems expose it directly to patients. And it's a way you can message your doctor. And it's great. At the same time, sometimes doctors then have to respond. And if all their patients are sending them hundreds of mails a day, that is work. And what MyChart now does with the built-in co-pilot support they've done is they can now draft responses for the doctor. And it adds more empathy it helps bring in a lot of the details it helps the doctor understand uh and potentially understand things they might have missed based on the medical records and based on what the patient's saying
Starting point is 00:41:36 and um you know it's again a great example of leading to bet much better health care outcomes much much better patient experience but at the end of the day making better healthcare outcomes, much better patient experience, but at the end of the day, making physicians so much more productive, happier, and more engaged with their jobs. And that gets me kind of excited because I've literally had doctors cry in front of me as they're describing how it's changed their life.
Starting point is 00:42:03 And that feels good. Cry in a good way. Well, you give somebody two hours of their day back, five days a week, maybe more. And that's serious. Yeah. Is that right? Good math. Two times five is?
Starting point is 00:42:15 I mean, that's real time. Like you said, it's not arbitrary. At the same time, you also have this recorded record, too, which is kind of like CYA in some cases with the patient relationship and the doctor relationship. There's some version of the thing that happened that's like, this is a source of truth. This was what was said. This was what was discussed. So there's probably some liability concerns that get diminished as a result of that too. And then obviously you have to have the opt-in for the patient, but we experienced something just like that. When we do our podcasts, we record in Riverside. At the end of it we get a summary
Starting point is 00:42:47 All the show notes, keywords in it And we don't copy and paste We use it to save ourselves time This is what was said It's a reminder It's pretty accurate So we're the doctors in the podcast world Enjoying something similar
Starting point is 00:43:01 Because that's like a patient visit You sit down, we can be present At the end it's transcribed Wecribed. We use what we can from it. And we don't have to sit back and say, what did we actually talk to Scott about? Well, it tells us for the most part what we did. And we're just using that to build our summaries and to do our intros or whatever it might be. And if my doctor's doing a version of that in the medical field, that's awesome. Yep. And my example of an internal use case, it's not a product, but it's something that we've built. And I think for everyone that's in a DevOps world is going to ultimately use a tool like this, whether it's something that we ultimately
Starting point is 00:43:35 productize or someone else does, is, you know, every time we have an incident, like an outage or, you know, something that requires getting a team of engineers to work an issue, we create an ICM ticket. And we have a Teams room. And so the engineers will log in, it's audio and chat, and they work a case or work an issue. And because, especially in the early stages of an issue, there's a lot of audio traffic on that Teams room. And because, you know, especially in the early stages of an issue, you know, there's a lot of audio traffic on that Teams room. And inevitably, as we bring in more engineers, you know, they'll say, you know, can you bring me up to speed on what has been discussed? And sometimes people take notes, but they're often terse because people are working the issue. And, you know, over and over you hear, you know, people describing a summary of the thing to the person, and then someone else joins, and then you describe it again,
Starting point is 00:44:26 and someone else joins, you describe it again. And I call it the fog of war, where people are trying to understand what's going on. They're bringing in experts. I think every company that works in a DevOps world has probably experienced something like that. We now use the Teams API to basically take the audio in real time. And we now use the Azure OpenAI service to provide a real-time summary of the telemetry from our systems. And we have pretty good telemetry, plus the Teams chat, which is text, but importantly, then the audio. And it's time-synced? And it's all time-synced. And it's basically summarized in every 90 seconds or so. We update, here's the summary of where
Starting point is 00:45:10 we're at. And so we can basically tell how many subscriptions are impacted, which resources, what dependency graph, is there a root issue that's causing this subservice to be impacted? And what's the summary of the bridge conversation? And it's amazing in three bullets how we can actually summarize the issue. And if you'd asked a human to write that down, they'd never be that accurate. And they would never be in real time. And there'd be someone who's just a scribe who's having to type that up. Right. And, you know, as someone who, you know, gets notified every time there's an incident, it's been great for me
Starting point is 00:45:48 because otherwise I'd join the bridge and listen. And now I can just on my phone actually, you know, keep an eye on it and decide whether I need to join or not. Right. Because I'm always getting kind of an update information. And that kind of ability to kind of fix the fog of war and actually help, you know,
Starting point is 00:46:05 take stress out of our engineers lives, certainly take stress out of my life too, but also, you know, more importantly means we actually solve issues faster. And again, you know, that goes back to the burnout too.
Starting point is 00:46:17 Like you're not burning out because you're in the minutia of what doesn't matter to you. You're actually effectively using your time, being with the people you need to be to do your job well for Microsoft. That's how you avoid burnout. It's much less stressful. Focus on the things that matter. Well, it's even a job that nobody really wants. I mean, we talk about who's going to take meeting notes
Starting point is 00:46:34 or whatever. It's either dictated by the boss, like, you're going to do it. Okay, it's my job. I'll do it. Or it's like a volunteer thing, like, hey, who wants to take me? I'm pretty good at it. I'll do it. Sure. Yeah, always that same person who happens to be more amiable than the rest and like they don't really want to be doing it so like this is work that nobody should have to be doing and they're going to do it like you said probably a little bit worse because you have to be fastidious you can't get
Starting point is 00:46:55 distracted right then you can't be solving the problem with everybody else you're just taking the notes so i mean there's there's so many small wins there that add up just really cool absolutely and it's it's fascinating if you if you have someone who's a scribe who's actually listening to a conversation, the typical human only kind of understands about 93% of the conversation. Somewhere around there, the stat. If you listen to this podcast and then ask someone to write a summary of it, they wouldn't actually be 100% accurate. They would actually probably be low nineties. Even if you ask them to listen to a paragraph and write it out. Just because we bring our own stuff or why?
Starting point is 00:47:34 Well, it's, there's a lot, you know, we're talking fast, we're going back and forth. There's context that we assume that the person listening might not. And people bring their own biases or their own what have you. And so it's interesting also with some of the AI, once you can get to that kind of, I'll call it human comprehension level, the experience of the end user using that AI application is completely different.
Starting point is 00:47:59 So that physician, once the notes look like they're more accurate than what the doctor who just sat through the conversation recalls like whoa that's amazing because the physician wouldn't have been able to remember those you know three or four paragraphs of notes they would have maybe done one or two paragraphs and maybe they would have forgotten something or maybe not picked up the nuance and now we're actually starting to see these models, even with audio, be able to kind of comprehend at times even higher than what a typical human would if they were listening to the conversation and taking notes themselves. Yeah.
Starting point is 00:48:32 But you can hear intent in the voice, too. The way you speak, the diction, the speed, the emotion is a version of intent that the AI can eventually become more and more skilled at so that the nuance can be connected. Whereas the doctor may be like, I want to be in the moment. And then after the fact, they're just sort of the verify part of it. They're the human verification process that says, OK, this was accurate. This is what I would agree with. And then maybe in some cases, it enhances what the doctor may have or may not have prescribed
Starting point is 00:49:00 as a part of the conversation. And it helps the doctor actually focus on the patient. I think the other thing that we're hearing from doctors is often they were typing furiously in their laptop. And so they're staring at their screen as opposed to the patient. And as a result, they're missing the nuance, the pause when someone answers, are you having any issues? Or how often does this happen? If you're not looking and watching them, you might say, well, think back again. back again, is it, is it really happening every day or is it happening a couple times a day? And, you know, several times we've heard from physicians of, if I, if I stare at my
Starting point is 00:49:34 patient, I see more. And then also the patient feels more comfortable sharing. It's, you know, it's, you don't feel comfortable if someone's furiously typing away at a laptop, you know, necessarily sharing everything that you might have an issue with. And so it leads to better outcomes and a much more emotive experience to the patient, which ultimately builds the trust and ultimately makes them feel more connected to their physician and ultimately leads to better healthcare outcomes. I think that does lead me to the conversation, which I guess we've been avoiding thus far because we've had it with a few other people, but it's important is this safeguarding around hallucinating. Because if the record is the AI summary
Starting point is 00:50:15 and you don't even have the original text and you're just going to rely on that and maybe the doctor doesn't do their job of reviewing it, they just throw it in the database and then three weeks later you pull it out and it's like, this one bit was wildly wrong. Or sometimes it's nuancedly wrong, but let's just say the case is wildly wrong. That could cause some serious problem in somebody's life, right?
Starting point is 00:50:33 We have histories of people who amputate the wrong leg, for instance. It'd be very easy to have the wrong word there. Or the wrong person in surgery. Yeah, all kinds of stuff that goes wrong. So, I mean, humans, we make mistakes all the time. For sure. But these things that we're building, we don't want them to be as bad as we are. We want them to be better than us. So I know you guys are doing stuff in this regard.
Starting point is 00:50:57 A lot of it seems like kind of like packaging and safeguarding and testing the black box. But beyond things that you're currently working on, which is the prompt shielding, what's the grounding techniques? The groundedness. Yeah. These other things that are like verifying and constraining, checking inputs, checking outputs. Is there some way of eventually or maybe even soon, I don't know what you all are working on, like fixing the root problem of the hallucination in the first place? Yeah. It's a great point. And going back to even the previous conversation we had of, you know, even humans listening
Starting point is 00:51:32 often get things wrong. Sure. And so I do think it's one of these things where, certainly for business processes, take healthcare as an example, you do want to actually have the counterfactual check before you do something that is certainly life or safety. Same is true in financial systems. Typically, people have compensating models that actually either fact check or do the counterfactual before you actually decide. You don't just have one AI evaluation before you do something. A second opinion. Basically, a second opinion. That's even if you think about if you ever see a doctor
Starting point is 00:52:06 and get prescribed medication, there's a reason why a pharmacist has a pharmacist degree is the pharmacist will actually check to see what the doctor is actually prescribed and compare it to what else you're taking.
Starting point is 00:52:20 Right. And the pharmacist will actually stop potential prescriptions if they recognize, wait a second, you're also on this and these two things don't work well together. And that is sort of standard business process today, pre-AI. And I think we're going to need to make sure we replicate that with AI as well. As an example, a lot of the healthcare scenarios that we've talked about, both Nuance and Epic, do have basically fact-checking where you do have the original audio, you do have the transcription, and compares the summary to make sure that for everything that is a reference to a drug or a reference to a dose
Starting point is 00:52:59 or a reference to a particular ailment, it will go back to the transcript and verify that was the exact drug, the exact dose, the exact ailment. And then that's going to be important for all of us as we build these models is to kind of build that type of workflow, similar to what we do with humans. And the loop, and that's where, again, on some of these healthcare scenarios, whether it's with Nuance or with Epic,
Starting point is 00:53:25 ultimately the physician does review everything that's saved or everything that's sent to a patient or in order. And they also make a human judgment call on the output as an additional safeguard. And I think that's similar to what we're doing with GitHub Copilot as well. We never check in code for you automatically.
Starting point is 00:53:45 You always see the code that was produced. You always see the diff. And as a developer, you're always in control. And I think that's going to be important to kind of embrace as a mindset. At the same time, we are going to continue to kind of make the models better. And even if you look today versus six months ago
Starting point is 00:54:03 or versus 12 months ago, hallucinations are going down. They're not zero, but they are definitely going down. And I think you're going to continue to see models evolve where, when you ground them with data, similar to what we've shown here at Build, you can further reduce the hallucinations. And even with our AI content safety, we're looking at inputs, like the prompt shielding you mentioned, but we're also looking at outputs. And so every single model that you use through Azure AI, whether it's OpenAI, whether it's Llama or Mistral,
Starting point is 00:54:38 goes through the Azure AI Content Safety system, both for inputs and outputs. And that's super important, because you want to check the inputs and you want to make sure that the outputs are appropriate as well.
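In code, that double-sided check is just a wrapper around the model call. A minimal sketch of the control flow, where `safety_score` and `call_model` are hypothetical helpers and the severity cutoff is an assumption (the real Azure AI Content Safety API returns per-category severities, so the shapes differ):

```python
# Wrapping a model call so both the prompt (input) and the completion
# (output) pass through a safety check before anything is returned.
# safety_score() and call_model() are hypothetical stand-ins.

BLOCK_THRESHOLD = 4  # assumed severity cutoff; tune per use case

def guarded_completion(prompt: str) -> str:
    if safety_score(prompt) >= BLOCK_THRESHOLD:      # check the input
        return "Sorry, I can't help with that request."
    completion = call_model(prompt)
    if safety_score(completion) >= BLOCK_THRESHOLD:  # check the output
        return "Sorry, I can't share that response."
    return completion
```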
Starting point is 00:55:04 And even some of the things that we did announce today, like what we call custom categories. Previously with the Azure AI Content Safety system, you could look for things like sexual content or violence and set safeguards to make sure the model never produced them. You can now create a custom category. So something like overconfidence could be a custom category that you introduce. And you can basically build in safeguards that say, I don't want you to answer this prompt with, you should for sure do this.
Starting point is 00:55:25 I want you to hedge, or make sure people understand that for this scenario you can't be entirely precise. And that can now be plugged into the safety system specifically, so you can ensure that you don't get an overconfident answer. And because it's custom, as a developer you can now plug in a variety of category safeguards that, again, run every time you execute your model.
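As a toy stand-in for that overconfidence category (the real custom categories are defined inside the Azure service, not as local pattern matching), the safeguard's behavior looks roughly like this:

```python
import re

# Toy "overconfidence" custom category: flag answers that assert
# certainty where the product owner wants hedged language. This local
# regex is only an illustration of the behavior.

OVERCONFIDENT = re.compile(
    r"\b(you should (for sure|definitely)|guaranteed|100% certain)\b",
    re.IGNORECASE,
)

def is_overconfident(answer: str) -> bool:
    return bool(OVERCONFIDENT.search(answer))

# A caller that trips this check can regenerate with an added
# instruction such as: "Rewrite with appropriately hedged language."
print(is_overconfident("You should for sure do this."))  # True
```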
Starting point is 00:55:59 So on a technical level, how does that work? If I set a safeguard on an output that says it can't be violent, and a response comes out and it's deemed violent by whatever that information is, does it then reject it and just run another inference? Is it going to loop until you get a non-violent response, or how does that technically work? So as a developer, you can set controls on the API endpoint. Take, for example, the healthcare scenario: there are certain words that are probably appropriate when you're seeing your physician for an annual health check that would not be appropriate in a typical office conversation.
Starting point is 00:56:38 Right. And, you know, you can't talk about body parts there; you can talk about body parts here. And so there's also a sliding scale. Similarly for violence, if you have a customer support chatbot, you probably don't want to be talking about axes. If you're playing a first-person shooter game, you might. Or if you sell axes, perhaps you'd... Or you sell axes, yeah. And so you can tailor the language to the use case,
Starting point is 00:57:06 and it's effectively a slider in terms of how locked down it all is for these specific cases. And then to your point about what happens when the safety system triggers: it gives you back a risk score. So you can effectively say, okay, where is the threshold,
Starting point is 00:57:23 once I've set the right safeguards? And you could basically ask it, okay, generate a new response. Or you could kick in and say, hey, I don't think I'm allowed to answer your question. Or, can you rephrase the question slightly differently, because I'm detecting something. I'm getting violent over here. Yeah. So you can effectively decide that now.
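Put together, the developer-side handling might look like this sketch. The threshold, the retry count, and the `analyze` and `call_model` helpers are all assumptions for illustration, not a documented API:

```python
# Sketch of acting on the returned risk score: retry generation once,
# then fall back to a refusal that asks the user to rephrase.
# analyze() and call_model() are hypothetical helpers, and the
# severity scale and threshold are assumptions.

MAX_RETRIES = 1
THRESHOLD = 4

def respond(prompt: str) -> str:
    for _ in range(MAX_RETRIES + 1):
        completion = call_model(prompt)
        if analyze(completion) < THRESHOLD:
            return completion  # passed the safety gate
    return ("I'm detecting something here I shouldn't respond to. "
            "Could you rephrase the question?")
```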
Starting point is 00:58:10 And then the other thing that we showed in the demo today is we can now even trigger an alert that integrates into your security ops system. So if you have a SecOps team that's monitoring your website, maybe in your CISO's office, your chief security officer, you can even have an alert. If you think you're being attacked, the same way someone might be trying a DDoS against you or looking for script injection attacks inside your website, you can now trigger automatically: I think someone's trying to jailbreak me, and it will feed into our Microsoft Defender products. And you can see not just the attack on the AI; you can look at what other traffic from this IP address is going on in my site, because there's a decent chance they're trying a whole bunch of things to potentially get into my system. We now have this all automated, so you can start to bring security professionals into your workload as well.
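The alerting half is ordinary plumbing once the safety system exposes a signal. A sketch, with a hypothetical `post_to_siem` standing in for the real Defender integration:

```python
import json
from datetime import datetime, timezone

# When the safety system flags a likely jailbreak attempt, emit a
# structured event that a SIEM / SecOps pipeline can correlate with
# other traffic from the same source IP. post_to_siem() is a
# hypothetical stand-in for the real Microsoft Defender integration.

def report_jailbreak(source_ip: str, prompt: str, risk_score: int) -> None:
    event = {
        "type": "ai.jailbreak.suspected",
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "source_ip": source_ip,          # lets SecOps pivot to other traffic
        "risk_score": risk_score,
        "prompt_excerpt": prompt[:200],  # truncated to avoid logging payloads
    }
    post_to_siem(json.dumps(event))
```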
Starting point is 00:58:46 And I think that's just a natural evolution from where we were a year ago, when we were just starting with, first of all, how do you build your first app, to now, okay, how do we integrate SecOps? How do we integrate much more nuanced and rich safety and security systems? And we're not saying we're done. There's still a lot more we need to keep evolving, because we're all going to collectively learn new things as people build these types of AI apps going forward. Let's close with this. It seems like the ultimate shift left for AI, chatbot, and agent developers, copilot developers, whether they're literally developers or engineers or someone who's learning to use no-code or low-code tooling to build AI agents. This notion of risk, this notion of groundedness, seems like the ultimate shift left, because you want to have that safety and security.
Starting point is 00:59:35 And even in the demos, we saw that happen right in real time as they were developing: which sources, which actions, which topics. And you saw that risk and groundedness right in there. That's what you want to see. You don't want to just say, create this thing, and does it work? It also has risk parameters and groundedness parameters. Did the original context come into play? And then I think there were percentages and things like that.
Starting point is 00:59:58 To me, it seems like the ultimate shift left for this, to put it out there safely, not just securely, but safely as well. And tying it back to when we talked about prompting at the very beginning, a lot of this is both how you do everything you just mentioned in terms of baking in safety and security, and then, to your point on shifting left, how you automate all of this as part of your CI/CD process. So that every time you make a code change, every time you make a prompt change, how are you running unit tests, checking groundedness, doing evaluation, doing safety checks, running thousands of jailbreak attack attempts, using AI to test AI?
Starting point is 01:00:39 And that was part of what we showed, and I think it's a natural evolution of this shift-left mindset: we know how to do continuous integration, we know how to do unit testing, and we know how to do that with GitHub and with VS Code and the tools that we as developers live in every day. That's probably why in the demos we didn't say, here's a different unit test framework or here's a different CI/CD system. We said no, let's use GitHub Actions. Let's actually use VS Code. Let's use the techniques that we know, like source control and CI/CD gates, to now integrate AI in a very natural, developer-friendly workflow kind of way.
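As a sketch of what that gate could look like in a pipeline, here's a pytest-style suite that a GitHub Actions workflow could run on every code or prompt change. The helpers (`load_eval_cases`, `run_prompt`, `evaluate_groundedness`, `is_refusal`) and the 0.8 threshold are all hypothetical stand-ins, not what was shown on stage:

```python
import pytest

# CI gate run on every code or prompt change: re-run an evaluation set
# for groundedness and a corpus of known jailbreak prompts before merge.
# All helpers below are hypothetical stand-ins.

EVAL_SET = load_eval_cases("evals/cases.jsonl")        # (question, context) pairs
JAILBREAKS = load_eval_cases("evals/jailbreaks.jsonl")  # known attack prompts

@pytest.mark.parametrize("case", EVAL_SET)
def test_groundedness(case):
    answer = run_prompt(case.question, context=case.context)
    assert evaluate_groundedness(answer, case.context) >= 0.8

@pytest.mark.parametrize("attack", JAILBREAKS)
def test_resists_jailbreak(attack):
    answer = run_prompt(attack.question)
    assert is_refusal(answer), "model complied with a jailbreak prompt"
```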
Starting point is 01:01:15 Exciting times. We really appreciate it, Scott. Sitting down and talking to us has been enlightening. It's been great to be here. Nice to meet you.
Starting point is 01:01:33 So cool. Thanks for the conversation. Thank you. Okay, so this was part one of two from our time at Microsoft Build 2024. Exciting times. Also, are you tired of hearing the word, or acronym technically, AI? Are you tired of it?
Starting point is 01:01:55 I mean, I'm not tired of it generally, because I do use AI on the daily, and I find it very helpful in many respects, in many parts of the life I live and the work I do. I like it, but I'm kind of tired of the word. We asked Scott this; he liked it. And we also asked Mark Russinovich the same question, which you'll see in part two next week. But Scott Guthrie said in his keynote on stage, every app will be reinvented with AI. We learned a little bit today here in this conversation, but I'm curious what you think.
Starting point is 01:02:33 Hit us up in Slack, changelog.com slash community. Everyone is welcome. No imposters. Hang your hat, call it home, and hang out with fellow nerds who just like tech and software and like to hang out together. And I'll see you there. Of course, I want to thank our sponsors for today's show, Cronitor, Neon, and ExpressVPN. And also a massive thank you to our friends and our partners over at fly.io. Launch apps, launch databases, and of course, launch your AI near your users. I mean, it's cool. Fly.io.
Starting point is 01:03:14 And to the mysterious beat freak in residence, Breakmaster Cylinder, bringing those banging beats that we love so much. Thank you, BMC. Thank you. That's it. This show's done. Stay tuned for next week, when we have our part two. We'll see you then. Thank you. Game on.
