Behind The Tech with Kevin Scott - Year in Review 2024

Episode Date: December 10, 2024

As 2024 comes to an end, we take a look back at some of the biggest themes that emerged on Behind the Tech over this incredibly exciting year for tech and AI: creativity, education, and transformation. And we take a stroll through some of Kevin’s obsessions – from ceramics to Maker YouTube to classical piano – alongside guests like Xyla Foxlin, Lisa Su, Ben Laude, Ethan Mollick, Refik Anadol, and more.

Transcript
Starting point is 00:00:00 Welcome to Behind the Tech. I'm your co-host, Christina Warren, Senior Developer Advocate at GitHub. I'm Kevin Scott. It is that time of year once again for our Year in Review episode. Obviously, this past year, AI was a huge part of the conversations.
Starting point is 00:00:27 Like probably every single episode we did either touched on AI or dug in really deep into what's happening in AI right now, or some of the really exciting possibilities that it's opening up as we see it out in the world now for a year and a half, maybe two years, almost two years. But maybe the most exciting thing that we got to look at with AI this year was how it connected to creativity, especially in music and art. This is of very high personal interest to me. I think there is this beautiful thread of creativity that goes through what we all do,
Starting point is 00:01:07 whether we're an engineer, an artist, a teacher, just pick your profession. Everybody has to be creative in their own unique way. And it's been really exciting for me to see how this new tool that we're building, generative AI and the infrastructure that surrounds it, is being used in such phenomenal ways to support that creative impulse that everybody has. Just last month, we talked to Rafiq Anadol, who is creating these incredible huge art installations using data sets of everything from weather patterns to heartbeats. Digital artists like myself, the movement that's enjoying computation, games and creativity with AI and so forth,
Starting point is 00:02:02 we were the blind spots for the museums and galleries, super honestly. And it's quantifiable for many reasons, because it's a new movement, it's a new reaction to the light field. The art world has a concrete base, like centuries old techniques and tools. And it's this revolution,
Starting point is 00:02:22 it's this renaissance happening right now. We are all living in. But as an artist, I was so naturally saying this all the time that I witnessed the birth of internet, web one, web two, web three, AI, quantum. I mean, that's naturally that that's what I reflect back as a form of imagination. So I think there was this rejection for a while, I would say, or blind spot. But then as soon as the more we created works that, you know, bring people together, like our project in Walt Disney Concert Hall, which was also the back in time MFA project, thanks to Lily Cheng, that she was also like one of the early mentors and advisors.
Starting point is 00:03:03 And suddenly that became a reality that brought 100,000 people together. That became a tangible idea. Or Casa Batlló project in Gaudí's building in Barcelona, 65,000 people. Or at MoMA, our show received 3 million people, the largest audience in the MoMA history, with a 38 minutes average viewing.
Starting point is 00:03:22 And that all I think made these tangible results. And as soon as they become an experience in life and memory, I think that really binded the context into like from a dream to reality. Rafiq's work is so incredible. It lets you see what it can mean for AI and humans to truly collaborate. Really the AI can process and interpret a volume of data that's, you know, just ways beyond a human's capacity. But the human is, I think, as Rafiq puts it, is then getting to use that data as a paintbrush, which I think is really beautiful. Yeah, and this is where I get so excited about this moment right now that we're living through, seeing this explosion of creativity that's happening with people like Rafiq. And I just love to hear the ways that artists are thinking about this, just fascinated by the creative process and, you know, obviously the engineering process of these things. Not everybody might know this about me, but I'm a huge classical piano nerd.
Starting point is 00:04:19 And a couple of episodes ago, I got to really dig into this with a fellow piano nerd, Ben Laude. I want to talk about the, you know, the instrument and the art is like, you know, two different things, because I think, you know, when you think about something like AI, like if you think about, you know, AI is both instrument and art together, like you just are getting confused. But if you think about AI as both instrument and art together, like you just are getting confused. But if you think about AI as an instrument for an artist to use to go make something, it becomes altogether interesting. And so I was watching this morning,
Starting point is 00:04:58 Murray Pariah doing a masterclass in 2022 on the G minor ballad, Chopin, which is my favorite piece of music. And Murray, like one of his performances is my very favorite performance of that. And, you know, like, and like, there's a, you know, the part of that ballad that is, that moves me the most is like the lead up to bar 106 you know so like you know when you release all of the tension it's like that double fortissimo it's the big dramatic moment uh in the middle of the piece and the what he was asking this student to do is like there's this chord in the lead up to 106 and he's like what does death sound like and he's like this is
Starting point is 00:05:46 death and he's like you know you need to like have this passage this line that you're playing you know like be foreboding and like as if death is chasing you and not everybody has that in their mind like when they're playing that particular passage. It's also interesting that there's almost no spoilers in classical music. It's almost the reverse. It's knowing what's coming that builds the anticipation and the goosebumps. And for me, it's knowing a piece really well that makes, if it's a great piece, that makes repeated listenings so meaningful but yeah it's interesting to compare what you're describing to ai and you're you would know much more about this than i would
Starting point is 00:06:30 but at least from what i can tell classical piano and the literature and the the art of interpreting it i mean these are expressions of human consciousness uh yes at chopin's first ballad is expression of his organized consciousness and he he's expressing something. And the only way we can agree about the piece is if we just speak in generality as well. It's dramatic. It seems to tell a story, whatever. But the moment you get into details and you want to talk about how this phrase should be rendered, it's death for Murray Pariah. It's life affirming for somebody else. It's dark for this interpreter. It's, it's, it's bright. I mean, it's dry for Glenn Gould. It's wet for Horowitz, right? It's, there's just suddenly the interpreter's consciousness is then mixed with the composers and you get a new cocktail of whatever thing we can't describe, you know? And yeah, I mean, maybe one day we will be sort of comparing Horowitz and Pariah's Chopin Ballade to AI's different version of it. And we can input, well, I want to hear an AI sort of play it, play it with this kind
Starting point is 00:07:37 of expression. I don't know. I mean, I'm not, you might have a comment on that. Is that coming? Should we be concerned? No, no, look, I, so look, I've had this conversation with people and I don't think it does because I think the point of a thing like classical piano
Starting point is 00:07:56 is you have something inside of you that you're trying to express that's difficult or impossible to express any other way. And it's like part of, you know, your humanity and you are, you know, it has meaning if you just play it for yourself. And it has a different meaning if you play it for an audience who are going to receive it in probably a different way than you are maybe even intending when you play it. Right.
Starting point is 00:08:27 Like, I never thought of death before hearing Murray's performance until I saw him teach that after class. So, like, that's not the thing I'm thinking of. It's like, you know, just this incredible emotional response that I get to it that I can't really put words on. And, like, I think that's a beautiful connection that, you know, you've made, even though maybe that's not even what he was intending to do. Okay, let's talk a little bit about another one of your passions, which is
Starting point is 00:08:55 learning and education. And this is another place where I think AI is kind of breaking the field wide open and essentially really transforming the way we think about it. And you know, and of course, when we talk about AI and education, that kind of the first instinct for many folks is to worry about cheating. That's where everybody's mind always goes. But I think so many of our guests this year have really helped bring a different perspective to the conversation. Yeah, absolutely. It was super cool that we got to talk
Starting point is 00:09:25 with Sal Khan about this. He's the founder of Khan Academy and definitely a person who's been leading the way in online education for many years now. But I think one of the interesting things about generative AI in education is some people's knee-jerk reaction to it has been, oh my God, this thing is bad.
Starting point is 00:09:45 Let's get it out of the classrooms. It's just going to help students cheat. And you've got a very different take on it. I think informed by all of the leverage work that you've been trying to do over the years with the core of Khan Academy. So talk about that a little bit. Yeah, big picture, even broader than education, technology just amplifies human intent. And if your intent is to be evil, you'll find ways to make the technology evil. If your intent is to be lazy, you'll find ways that technology can empower your laziness. But if you want to learn, or if you want to help people learn, there's always ways that technology can be valuable. You know, the same video technology that might, you know, have people watch not so great stuff, we can also use to teach them. And so it's all about how do you mitigate the harms
Starting point is 00:10:35 and maximize the benefits. And I tell everyone who is a well-intentioned person, just checking out and running the other way, that just means only the bad folks or the lazy folks are going to be using technology, especially now these very powerful technologies like generative AI. And so it's obvious things like cheating, and then, you know, there's issues sometimes with AI potentially around maybe bias, errors, hallucinations. What if students want to use it for unproductive ends? They want to, you know, help making a bomb or something like that, or they want to harm themselves. So what I told our team at Khan Academy is like, look, those aren't reasons not to work with generative AI. Those are reasons to just put guardrails around it and turn those into features. Let's make it so the teacher can see
Starting point is 00:11:18 what the students are doing if they're under 18. Let's make it so our AI doesn't cheat, but it can Socratically nudge you in the right direction. Let's make it so our AI doesn't cheat, but it can Socratically nudge you in the right direction. Let's make it so that we can support students in, say, writing an essay, making the student do the work, but acting as an ethical writing coach. And if the student goes to chat GPT or someplace else to get their essay written for them and brings it into our system,
Starting point is 00:11:38 then our system, when it talks to the teacher, is going to say, well, you know, Kevin and I didn't work on this essay together. And by the way, it's not consistent with his other writing. We should double click on whether Kevin really did this work. So I actually think the AI can actually be used to undermine AI cheating itself. So any tool can be used for good or for bad. And so that's kind of like the big theme of the book. Here's all of the ways that can be used well. Here's all of the fears and risks that people have, but here's how we should mitigate those and actually turn them into
Starting point is 00:12:08 benefits. What is your advice for people as they're sort of thinking about maybe not even just AI, but, uh, like, you know, we, we have an interesting, uh, you know, future headed our way, uh, because some technologies like AI are, are developing like really, really quickly. Um, you know, you, you have done a really tremendous job in your career using technology to like help yourself you know so like computer science is like was your gateway into you know big silicon valley companies and startups and like eventually into like a hedge fund like technology is sitting at the center of this non-profit that you've created that's having big impact. Like what's your advice to people, you know, for the future? You know, my advice is, and I write about this in the book, there's people who think, oh, well, calculator exists.
Starting point is 00:13:19 Kids don't need to know arithmetic or computers exist. You know, there's one less thing that you have to learn how to do. The Internet exists. Search exists. You don't have to know arithmetic or computers exist. You know, there's one less thing that you have to learn how to do. The internet exists, search exists. You don't have to learn knowledge anymore. And now with AI, people are like, well, do people even learn how to write, et cetera. But I always point out, if you look at any of these inflection points of technology, it has accrued the most benefit to the people with the deepest skills. And so I think the answer is this is a reason to double down on for sure the traditional
Starting point is 00:13:48 skills the math the reading and the writing um but also now augment that so that you learn how to creatively use these tools that can really amplify you i mean giving you almost godlike powers uh to do things that you know would have looked like science fiction even five ten years ago uh so and and and i also write in the book like this isn't like a nice to have it's an to do things that would have looked like science fiction even five, 10 years ago. And I also write in the book, this isn't a nice to have, it's an imperative now because the status quo, unfortunately, most people aren't going to be in a position to leverage the AI because the AI is better than... We're already seeing the AI is operating at the 80th percentile of the LSAT. I would be worried if I was a 50th percentile lawyer of where this is going. Now, if I'm a one percentile lawyer, I know that there's
Starting point is 00:14:31 certain things, yeah, the AI can help me draft a contract, et cetera, but I have certain expertise. I've fought certain cases. I know the nuances that no AI can have. You're going to be super powered. You're going to be able to get the AI to write your contracts. Maybe you'll hire fewer paralegals or whatever, but your expertise is going to be magnified even more. While if all you could do is draft a boiler point contract, you're going to be in trouble. So more people, I think the job market is going to broadly become kind of bipolar. The knowledge economy, if you want to be in the knowledge economy, and that's probably where the bulk of the value of AI is going to accrue,
Starting point is 00:15:10 you need to upskill even more. And hopefully, maybe you can use AI to help you get there. Use Conmigo, use, you know, Con Academy. I think there's also, you know, people shouldn't panic. I think even if you can't be a 1% lawyer, I also think there's going to be a lot of, let's call it very human work that as we have a 1% lawyer, I also think there's going to be a lot of, let's call it very human work that as we have a more abundant society, we should have more resources so that we can have, you know, more caregivers, more people to fight loneliness, more people to, you know, provide health, help to the sick or to the elderly, whatever. So I think there's actually going to probably be work there too. And here's maybe an interesting behind the scenes tidbit for listeners here. We actually
Starting point is 00:15:48 used Copilot as we were starting to put this year in review episode together. And it was able to identify some of the high level themes that came up all over with our guests this year. And that's a lot like the ways that educators like Saul Kahn talk about using AI. And it's this idea of co-intelligence, which was certainly one of the themes that came up over and over again. And it's something that you talked about with Ethan Malik, who's a professor at the Wharton School. Yeah, I really appreciate the way Ethan super clearly talks about this. So at least at the current stage, AI really works like a form of co-intelligence. Like it is a booster to your activities.
Starting point is 00:16:26 It is a threat to some parts of your job, but not the parts you want to do. And it is something that it's usable right now. And I think a lot of people, a lot of the books about AI have tended to focus on future and especially a sort of scary versus, you know, like, are we all saved or all doomed?
Starting point is 00:16:40 And I think that that is an important conversation, but in some ways, the least interesting conversation to have about AI that's already here. And it's fascinating because when you talk to people who are using it, they want to talk about how to use it. It feels like that 80s again, right? Like people want to figure out what are the tips, they're exchanging information, there's excitement in the air among users. And I think that I wanted to try and bring that conversation to people and give people ways of getting started. And also to realize like, this is kind of a big deal, right? It's a big deal in lots of ways that we would never have expected AI to be a big deal.
Starting point is 00:17:08 And it's a big deal right now. Like it out innovates most innovators. It outwrites most writers. It like, you know, elite consultants, it does a really good job. Like this is weird stuff that is going to have weird effects. And it is accessible. And that's part of why it's going to have such weird effects. So let's bring this conversation back into the physical world for a moment here. Maybe we're just going through all of your hobbies and obsessions, which we love heaven. But the next one is the world of makers on YouTube.
Starting point is 00:17:35 And earlier this year, we talked with Zyla Foxland. Oh my God, I was so excited to talk with Zyla. She makes just the most incredible stuff. And I think she's also obsessed with learning and education and really breaking open the ways we traditionally think about how we learn. I think that when I was a kid, it was sort of like you were interested in engineering
Starting point is 00:17:57 or you were interested in art. And it was a little bit gender segregated, but it was also just like those were separate categories. And the maker movement has kind of combined the two. I was never good at classroom learning. I have to physically do something to really understand it. Or I have to apply something to a project. And then I'll really understand why or how something works.
Starting point is 00:18:19 If a kid gets a chance, whatever brings the chance, but if they get the chance to discover that they are interested in and good enough at something where they just want to go put the work in, like you were talking about before, where you get better and better and better, like that virtuous cycle is sort of the most important thing in the world. And like the thing that the thing that is so unfortunate is how often kids don't even get a chance to get on the trailhead. Like they just like they don't have your wonderful fifth grade teacher or, you know, like my bad parenting and Grey's Anatomy. And like they just don't figure out that that like, hey, here's the hook. And that, you know, hard things are hard, but you just got to be interested enough to go do the work to get good.
Starting point is 00:19:15 Yeah, and they have to be in a safe enough environment where they can try things and they can fail. And I think most school systems are not like that. And so it takes, it takes like a family environment or a really special teacher, like you said, to, to create that environment. And it's just so hard to do in mass. Yeah. Yeah. I guess it is with math, like hard because the way that we grade mathematics is like, you know, there's a right and a wrong answer to a problem. And if you get it wrong, like oftentimes
Starting point is 00:19:46 you don't get an awful lot of feedback about what to do to get better. And then you just see the bad score at the end and it's like, okay, well, I'm bad at this, which is. Right, like it feels very black and white until you get up to like calculus or even pre-calc where now you're having, or geometry actually is a great one
Starting point is 00:20:03 where like you start, you can get halfway through proving theorem or you can kind of get most of the way there. And then it feels like there's some kind of progress. It's not like a multiplication problem where you either got it right or you got it wrong. With math, like I get frustrated with how we teach it because we sort of very frequently introduce that the mathematical concepts uh absent any kind of
Starting point is 00:20:30 motivation for why this thing is important uh and like that's not how the mathematical concepts are invented i mean there's some things and you know pure mathematics that you know get invented just for the sake of uh the math but like most math got invented because somebody was trying to solve a problem and like they needed a way to model something in the real world or, you know, and like, we just don't do a good job sharing that with, uh, with kids early, unless you have an exceptional teacher. Right. Right. But even an exceptional teacher is working within the bounds of the fact that they're like teaching math class and then the students are going to leave class and go to english class and they're going to leave class
Starting point is 00:21:07 and go to like science class um and there's only so much at least in my experience like there's only so much the math and the science teachers can do to make their curriculums match up especially if you're trying to meet state regulations and state requirements for for testing and stuff yeah i think even in engineering school, when you'd think like the whole curriculum is designed to be applied to the real world, the math was still so separate. Okay, so while we're still here in the physical world, we also got to talk with someone who is deeply connected with the physical hardware that powers everything that we do in the virtual world. Your conversation with Lisa Su of AMD was fantastic and really got into some of those questions about the physical world
Starting point is 00:21:50 and hardware versus software. Yeah, Lisa was teasing me a little bit about the different paths she and I have taken in this classical, sometimes almost comical tension between hardware and software people. No offense, software is very interesting, but at the time, hardware was much more sexy to me. And I had the opportunity to see how you could build chips and build very, they weren't the most advanced chips in the world, but to me, it was amazing. It was amazing that you could build some transistors on something the size of a coin. You could look at it in the world. But to me, it was like amazing. It was amazing that you could build, you know, some transistors on something the size of a coin, you could look at it in the microscope, you could see you could measure it on a test system. And, you know, that's, that's how I got into hardware.
Starting point is 00:22:37 And that's how I got into semiconductors, actually, it's so important to see the results of what you're doing. And, you know, I love the fact that I can build products that I can touch and feel, and walk into Best Buy and see those products, or walk into your data center and see those products. So that's what I enjoy. So something that's just so honestly mind-boggling about the moment that we're in right now is how these tools are not just adding a little bit onto our past capabilities, but they're multiplying and transforming what's possible.
Starting point is 00:23:10 We talked to Mike Volpe about this. Humans are, in some sense, the pinnacle of biological technology. Like, we are, as far as we know, at the top of the pyramid of biological technology. And I felt like a system would not copied it, but attempted to emulate how humans worked and did so via a computer system had to have a really, really bright future. Like it just was, it was sort of like saying the horse is the best instrument we have to move around. If I only could build a mechanical system that mimicked a horse, I could actually improve productivity a lot. And somebody invented a car, right? And in the same way, a human is the best brain system that we know of.
Starting point is 00:24:00 And if you try to build, you know, a car is not a horse. They're very different instruments, But they serve a similar purpose. And that's at least how I think of AI, which is actually AI systems don't really work like the human brain. They sort of loosely, but they serve the same benefit at the end. They just because of the nature of scaling and computing, they can be much bigger. What do you think is interesting going forward, either in AI or, you know, like anything else that's sort of happening in technology that you think is interesting that people ought to be paying more attention to than they are? Look, I think there's a couple of things on the AI side that I pay attention to. One is the physical embodiment of AI, which is interesting.
Starting point is 00:24:50 I think the AI we experience through Bing Chat or ChatGPT or Cohere or whatever is the purely digital experience right now. And I think that, you know, we are analog beings, and at some level, there needs to be a physical experience associated with that of some variety. So whether it's robotics or it's devices or other things that allow a more physical embodiment of what we perceive to be AI, I think is super interesting. I do pay attention to technologies other than transformers. I think this is an investor bias. I don't see how startup investors can win in transformers now anymore because of the capital requirements of it.
Starting point is 00:25:48 And I would say for now, there is no obvious scaling boundaries to transformers, but maybe there might be. And if so, might there be a different approach? Maybe, maybe not, but it's my job to explore. Yeah, and for listeners who may not be AI people, like when Mike's saying Transformers, he's not talking about Optimus Prime.
Starting point is 00:26:13 This is the prevailing architecture for deep neural networks that is basically driving all of this crazy scale-based progress right now. Yeah, exactly. I mean, everything that everybody experiences today in AI is largely based on this technology called transformers. And it has very good scaling characteristics, meaning that if you throw more computing power at it, it just gets better and better and better. That's not generally true with technology. Technology usually has a plateauing effect.
Starting point is 00:26:45 Like you can throw more resources at it, but the pace at which it improves flattens over time. And this particular technology has not shown characteristics of flattening so far. But it also means that the resources required get massive. So it means a lot of power, a lot of computers, a lot of data centers, all that stuff. Very hard for a small company. It's really exciting to start to think about and imagine what all the creative people out there are going to do with this stuff in the next
Starting point is 00:27:14 few, say, five or 10 years. Yeah. And that makes me think of something that Ethan Mollick said, that I think he had a really great way of thinking about this. And the model I was thinking about is the industrial revolution. And in the way people don't usually think about, which is steam power came to a lot of factories in England at the same kind of time. The ones that won were not the ones that were like, hey, we could still make pots, but with less people. Those companies got destroyed, right?
Starting point is 00:27:40 The ones that succeeded were the ones that we can now use the same number of people and make 10,000 more pots and ship them all over the world. Yeah, absolutely. That's the metaphor we should be thinking about. And let's bring in one more guest here, Mike Schreffer. Schreffer talks about technology as leverage. I love the idea of leverage. I mean, technology is leverage.
Starting point is 00:28:00 You know, I always say that technology is one of those few things that removes constraints. So many problems in life, if you've ever like the economics 101 you take in high school, where it's like, all right, you have $100 city budget. You can either fund the libraries or the police or the fire department, but you can't fund all three fully. Like a lot of people live in a world every day where our problems are trade-offs. I can do this or I can do that. And technology is one of the only things that's like, oh, hey, it's now half the price.
Starting point is 00:28:31 Okay. So, Kevin, if you had to sum up all of these amazing conversations that we've had over the podcast this year and think about this moment in technology, what comes up for you? I think it's really clear at this point that we're in a moment of platform shift. It's not just that technology is becoming incrementally faster or more efficient or cheaper. It's very dramatically changing and breaking apart the ways we operate, whether that's in creative things, software, medicine, everyday life, education. And I think it presents some really exciting opportunities to address big,
Starting point is 00:29:05 thorny, complicated challenges like climate change. Let's play one last clip from Shep here, because I think he put it really well. Yeah, it's really interesting. I think there's a related thing with smart people where sometimes it's very enjoyable to wallow in complexity, to like take a very hard thing and like even to make it harder. There's this joy you can get from spinning cycles there. But those overly complicated things are almost never really useful. 100%. I think I often describe when I'm working with people, this took me a long time to figure out,
Starting point is 00:29:43 is I think there's complexifiers and there's simplifiers. There's someone you give a big hard problem, they go like, here's 30 pages, but you really only need to understand three things. Here's the three biggest things that matter here. And if you want to get into details, I got it, but here's the thing. There's other people that come up with, here's 26 pages of detail. I've covered every base on this thing. And you're like, that's not actually helpful. That's actually much worse.
Starting point is 00:30:08 And I find simplifiers are a secret weapon of a lot of organizations. Yeah. So what we sought in our PMs have met, that's what I look for in the founders I back. And it like repeatedly has, has, has been successful for me and finding people take a big complex gnarly thing and say, but these are the only, you know, only things that matter. Yeah. I mean, I feel like you're giving the uh given the listeners sage advice here so like you compound these things and they get very interesting so folks who have high learning rate who know how to like experiment quickly who are simplifiers like you just sort of stack these together and like those uh like really really the union of those things are just superpowers.
Starting point is 00:30:46 Yeah, 100 percent. Climate is this thing that's going it's a platform problem. It's going to impact tens or hundreds of millions of people. And the people most impacted are the least equipped to deal with the impacts. And so it's like, here I am with a bunch of resources. Like, isn't it just an obligation for me to go off and do this? Hearing that, thinking about the things I'm thinking and seeing and learning, it leaves me feeling hopeful about what's coming next. So I've got to ask you, are you going to make any New Year's resolutions, any predictions for what's coming in 2025? Oh, God, that's so hard. I think one of the mistakes that maybe it's
Starting point is 00:31:31 hubris that we all make is just being quick with the predictions. I think the thing that's just super clear is the AI platform shift will keep rolling, and I think if anything, we'll pick up speed. I look at the time that we're recording this, we've got brand new models that have hit the hands of developers like 01 that are just extraordinary leaps forward in capability.
Starting point is 00:32:02 I'm already seeing the amazing things that people are doing as soon as they get access to that. And so, you know, the year ahead, we will have even more capable models coming and we'll have more exciting things that people are doing with it. And, you know, I think given that some of the AI applications are kind of trailing indicators of where the model capability actually is, that next year we're going to start seeing some really incredible things happen that have just taken a while, a couple of years to percolate through the system and get ready to launch and have wide-scale impact. I think it's going to be a super exciting year.
Starting point is 00:32:45 And, you know, like my personal New Year's resolution is to try to continue every day to come in, learn something new about how people are using AI in creative ways and to try to do something just a little bit creative every day myself. I love that. I think that's a great resolution. And I might take some cues from you and try to do something similar.
Starting point is 00:33:17 I really like that. Okay, that is all of our time that we've got for today. Huge thanks to all of our guests on the podcast this year. And you can check out all of their full episodes on YouTube or your favorite podcast platform. And if you have anything that you would like to share with us, please email us anytime at behindthetech at microsoft.com. Thank you so much for listening.
Starting point is 00:33:38 See you next year.
