Behind The Tech with Kevin Scott - Year in Review 2023

Episode Date: December 12, 2023

As 2023 comes to an end, we take a look back at some of the year’s most compelling, thought-provoking conversations with guests like Bill Gates, Neil deGrasse Tyson, OpenAI’s Mira Murati, Shopify’s Tobi Lütke, will.i.am, sci-fi author Adrian Tchaikovsky, and many more. The year’s programs featured a wide range of topics including equity in tech, advocating for science and, of course, the transformative power of AI.

Transcript
Starting point is 00:00:00 Hi everyone. Welcome to Behind the Tech. I'm your host, Kevin Scott, Chief Technology Officer for Microsoft. In this podcast, we're going to get behind the tech. We'll talk with some of the people who have made our modern tech world possible and understand what motivated them to create what they did.
Starting point is 00:00:19 Join me to maybe learn a little bit about the history of computing and get a few behind the scenes insights into what's happening today. Stick around. Hello and welcome to Behind the Tech. I'm Senior Developer Advocate at GitHub, Christina Warren. And I'm Kevin Scott. And here we are at the end of 2023 already.
Starting point is 00:00:43 And what a huge year it's been in our world. It's hard to believe how much has changed this past year. What with like the widespread adoption and explosion of AI into the world. Yeah, it has been a crazy and big year. I sort of feel like we're like living AI dog years or something where, you know, I just can't even believe how much has happened in the past six months, much less the whole year. So, obviously, in the past 12 months, we've had ChatGPT released, we've had GPT-4 released, and then we've had this crazy explosion of generative AI activity across the board from Microsoft thing is happening and that everybody is excited about it. And the smartest and most ambitious people are putting all of their energy into figuring out what this all means for them and like what useful things they're going to try to go make for other people on top of things that are newly possible.
Starting point is 00:02:04 So it's a super exciting moment. It's an extremely exciting moment. I've been describing this to people for basically a year now. It's like this is another iPhone moment. And that only becomes more and more true, I think, all the time. And on the podcast this year, it's been really exciting to hear folks talk about, as you say, what's been happening now and the stories behind the years and years of work and research that have gotten us to this place. Because of course, it feels like it's all happening right now, but this is the buildup of a lot of hard work. So back in the spring, you got to talk to Bill Gates, who obviously is a huge force. I think it's hard to express just how huge a force in technology
Starting point is 00:02:47 he has been for decades. Yeah, for sure. And, you know, like, obviously, Bill was one of the accelerators for the personal computing revolution. Like, he played, you know, maybe the most pivotal role there in founding Microsoft in the first place and pushing the personal community revolution forward and played a tremendously important role in the internet revolution, honestly. to spend some time chatting with him about AI and his perspective on like what this moment means,
Starting point is 00:03:27 like how it is similar to some of the things that he's seen in the past. And like, obviously, Bill's also been thinking about AI for his entire career. And so for all of us who have an interest in technology, seeing a thing that has gone from promising but not moving as fast as all of us ambitious people would like to actually making some pretty big breakthroughs has been just, I think, just as exciting for him as it has been for all of us. Yeah, let's take a listen to that conversation. I wonder what your advice might be to people who are thinking about like, oh, I have this new technology that's amazing that I can now use. How should they be thinking about how to use it? How should they be thinking about the urgency with which they are, you know, pursuing these new ideas? And, you know, and like, how does that relate to how you thought about things in the PC era and the internet era? Yeah, so the industry starts really small, you know, where computers aren't personal.
Starting point is 00:04:39 And then through the microprocessor and a bunch of companies, we get the personal computer, IBM, Apple. And Microsoft got to be very involved in the software. You know, even the basic interpreter on the Apple II, very obscure fact, was something that I did for Apple. And that idea that, wow, this is a tool that, at least for editing documents that you have to do all the writing, you know, that was pretty amazing. And then connecting those up over the internet was amazing, and then moving the computation into the mobile phone was absolutely amazing. So, you know, once you get the PC, the internet, the software industry, and the mobile phone, the digital world is changing huge, huge parts of our activities. I was just in India seeing how they do payments digitally, even for government programs. It's an amazing application of that
Starting point is 00:05:52 world to help people who never would have bank accounts because the fees are just too high. It's too complicated. And so we continue to benefit from that foundation. I do view this, the beginning of computers that read and write, as every bit as profound as any one of those steps. And a little bit surprising because robotics has gone a little slower than I would have expected. and I don't mean autonomous driving. I think that's a special case that's particularly hard
Starting point is 00:06:29 because of the open-ended environment and the difficulty of safety and what safety bar people will bring to that. But even factories where you actually have a huge control over the environment of what's going on, and you can make sure that, you know, no kids are running around anywhere near that factory. So, you know, a little bit people are saying, okay, you know, these guys can overpredict, which that's certainly correct. But here's a case where, you know, we underpredicted that natural language and the computer's ability to when they talk about things that might
Starting point is 00:07:27 get a lot more productive, will turn out to be wrong. And because we're just at the start of this, you could almost call it a mania like the internet mania. But the internet mania, although it had its insanities and things that, I don't know, sock puppets or things where you look back and say, what were we thinking? It was a very profound tool that now we take for granted. And, you know, even just for scientific discovery, you know, during the pandemic, the utility of the immediate sharing that took place there was just phenomenal. And so this is as big a breakthrough, a milestone as I've seen in the whole digital computer realm, which really starts when I'm quite young.
Starting point is 00:08:24 Yeah. Well, so I'm going to say this to you, and I'm just interested in your reaction because, like, you will always, like, tell me when an idea is dumb. thinking for the last handful of years is that one of the big changes that's happening because of this technology is that for 180 years from the point that Ada Lovelace wrote the first program to harness the power of a digital machine up until today, the way that you get a digital machine to do work for you is you either have to be a skilled programmer, which is like a barrier to entry that's not easy, or you have to have a skilled programmer anticipate the needs of the user and to build a piece of software that you can then use to get the machine to do something for you. This may be the point where
Starting point is 00:09:25 we get that paradigm to change a little bit, where because you have this natural language interface and these AIs can write code and they will be able to actuate a whole bunch of services and systems that we give ordinary people the ability to get very complicated things done with machines without having to have, like, all of this expertise that you and I spent many, many years, you know, building? No, absolutely. Every advance, you know, hopefully lowers the bar
Starting point is 00:09:59 in terms of who can easily take advantage of it. You know, the spreadsheet was an example of that, because even though you still have to understand these formulas, you really didn't have to understand logic or symbols much at all. And it had the input and the output so closely connected in this grid structure that you didn't think about the separation of those two. And that's kind of limiting in a way to a super abstract thinker, but it was so powerful in terms of the directness. Oh, that didn't come out right.
Starting point is 00:10:34 Let me change it. Here, there's a whole class of programs of taking like corporate data and presenting it or doing complex queries against, okay, have there been any sales offices where we've had 20% of the headcount missing and are sales results affected by that? Now, you don't have to go to the IT department and wait in line and have them tell you, oh, that's too hard or something. Most of these corporate learning things,
Starting point is 00:11:10 whether it's a query or a report or even a simple workflow where if something happens, you want to trigger an activity, the description in English will be the program. And when you want it to do something extra, you'll just pull up that English or whatever your language is in, and type that in. So there's a whole layer of query assistance and programming that will be accessible to any employee.
Starting point is 00:11:46 And the same thing is true of, okay, I'm somewhere in the college application process and I want to know, okay, what's my next step and what's the threshold for these things? It's so opaque today. people to go directly and interact, that is the theme that this is trying to enable. It was incredible to get to talk with Bill, not only about the specifics of how all of this stuff works, but also to get a sense of the big picture for this historic moment that we're in, as you alluded to, Kevin. Yeah, I mean, it has been a very long and winding road to get to where we are right now. And, you know, you said it earlier, this is, you know, may feel like, you know, all of a sudden it broke through and it was very abrupt, but it was because of decades and decades of work that just an untold number of people have been doing.
Starting point is 00:12:50 But it has been really interesting working with the folks at OpenAI. We had this incredibly interesting conversation with Mira Marotti, who is OpenAI CTO, about her perspective as sitting effectively in the hot seat of developing one of the most interesting pieces of technology that's resulted in this breakthrough moment that we've had this past year. For sure. Let's take a listen.
Starting point is 00:13:23 You know, the first time that we thought about deploying these models that were just in research territory was kind of this insane idea. It wasn't normal back then to go deploy a large language model in the real world. And, you know, what is the business case? What is it actually going to do for people? What problems is it going to solve? like we didn't really have those answers but we thought you know if we make it accessible in such a way that it's easy to use and it is cheap to use it is highly optimized you don't need a lot of you don't need to know all the bells and whistles of machine learning, and just accessible, then maybe people's creativity would just bring to life new products and solutions, and we'd see how this technology could help us in the real world. And of course, we had a hypothesis,
Starting point is 00:14:26 but really it was just putting GPT-3 in the API the first time that we saw people interact with these large language models and the technology that we were building. And that for so many years, we had just been building in the lab without this real world contact and feedback from people out there. So that was the first time. It was sort of this leap of faith that it was going to teach
Starting point is 00:14:54 us something. We were going to learn something from it. And hopefully we could feed it back into the technology. We could bring back that knowledge, that feedback, and figure out how to use it to make the technology better, more reliable, more aligned, safer, more robust when it eventually gets deployed in the real world. And, you know, I always believed that you can't just build this powerful technology in the lab with no contact with reality and hope that somehow it's going to go well and that it's going to be safe and beneficial for all. along both in gathering that feedback and insight, but also in adjusting society to this change. And the best way to do that is for people to actually interact with the technology and see for themselves instead of, you know,
Starting point is 00:15:58 telling them or just sharing scientific papers. So that was very important. And it took us then a couple of years to get to the point where we were not just releasing improvements to the model through the API, but in fact, the first interface that was more consumer-facing that we played around with was DALI, DALI Labs, where people could just input a prompt in natural language, and then you'd see this beautiful, original, amazing images come up.
Starting point is 00:16:33 And then, you know, really for research reasons, we were experimenting with this interface of dialogue, where you go back and forth with the model and charge GPT. And dialogue is such a powerful tool. You know, the idea of Socratic dialogue and how people learn, you can sort of correct one another and or ask questions, get really to a deeper truth. And so we thought, you know, if we put this out there, even with the existing models, we will learn a lot. We will get a lot of feedback and we can use this feedback to actually make our upcoming model that at the time was GPT-4 safer and more aligned. was kind of the motivation. And of course, as we saw, you know, in just a few days, it became super popular and people just loved interacting with this AI system.
Starting point is 00:17:32 So when we talk about AI and we think about this stuff, it has been so powerful and so public the way I think that the whole world has kind of been getting to learn about and experience the challenges of this new wave of AI. You know, I kind of feel like we are in this before-ChatGBT, after-ChatGBT world, and that's really been evident this year. Yeah, and we had a really great opportunity on the podcast and and, you know, me personally, in my professional life to have a bunch of really important conversations with Kevin Roos from the New York Times. So, you know, I think journalism has this really, really important role to play in helping frame what's happening right now. So like So we're doing part of the job,
Starting point is 00:18:27 which is develop the technology and try to responsibly get it deployed out to the public. But that's a tiny little part of the job of figuring out how to get AI into the hands of the public in a reasonable way. And so, yeah, I've been super excited to have those conversations with journalists this year. Yeah, this conversation that you had with Kevin is actually one of my favorites this year,
Starting point is 00:18:58 A, because he's a former colleague, and B, because I really do think, as you say, journalists play a really important role in, I think, helping kind of bridge the understanding gap between what it is that is being created and all the sure that you're not getting trapped in your own little bubble. That if you sit around and all you do is talk to your fellow technologists all day long, you can get convinced of some pretty wonky things. And it's like really useful to have journalists and academics and policymakers and, you know, and like in my mom in rural central Virginia, able to like knock you out of your bubble thing. No, I think I think you're exactly right. I mean, I think that that is that's how we make sure that this is stuff that is not just useful in theory. And as you say, in our own bubbles, but is actually something that could be world changing.
Starting point is 00:20:07 So great conversation here. We are building some of the most, I say we, I mean you essentially, and your peers in the tech industry are building some of the most powerful technology ever created. And I think without the media, there just wouldn't be a kind of countervailing sort of, I don't know, I don't know if it's, you know, a force on the minds of the people building that technology or just a caution around the technology. But I'll give you an example of what I mean. So,
Starting point is 00:20:38 you know, you and I had this, this now infamous encounter back in February, where you guys had just released Bing with this, you know, what we now know is GPT-4 running inside of it. And I had this insane conversation with Bing chat, aka Sydney, please don't hurt me, Sydney. And, you know, went on the front page, went totally viral, blew up, I'm sure, you know, your inbox, my inbox, everyone's inbox for months. Everybody's inbox. And subsequent to that, I started hearing from, just in a nutshell, if people aren't familiar, it was a conversation that lasted two hours in which Bing slash Sydney confessed all these dark secrets and tried to break up my marriage. And it was insane.
Starting point is 00:21:26 And subsequent to that story running, I got notes from a lot of other people at tech companies saying, you know, how do we prevent our technology from doing that? Or I even got a leaked document from another big tech company, which had a sort of roadmap for their AI product and listed on the roadmap was like, do not break up Kevin Roos' marriage. And so I really think that like that, and not to toot my own horn, this could have been anyone, but it did really like serve as a cautionary tale for other companies that are building similar technology. And so I think that is what the media can and should do in moments of societal transformation and change is really hold up a sign that basically tells people like, you know, you want to do this right, or, you know, there will be, you know, there may be
Starting point is 00:22:16 consequences for that. Yeah. And I think that is one of the very good things that came out of that experience. And, you know And it also, I think, is another important reason why it is, I think you actually want to launch these things, even if it results in something that floods your inbox for a while, is you just get the societal level conversation about what's know, what's possible, what's going on, like, where's the line, like, what's good, what's bad. I mean, we haven't chatted since that story published. And one thing that I will say is, like, I deeply appreciate the part of the writing that
Starting point is 00:23:00 you did where you published the full transcript of the conversation. So that level of transparency, it just wasn't confusing to anyone about what inputs into the system led to the things that it gave back. And that was super good. I don't think many of the people who were coming into my inbox had read the full transcript. But I think that you're just 100% spot on. The existence of this thing has helped a lot of people just make sure that they're paying real attention to some very important things. And also, some of this stuff is fuzzy, right? Like, you know, where the line is, and like part of what you all are helping doing is making sure that the public is paying enough attention to it so they can weigh in and have an opinion about where the line should be.
Starting point is 00:23:58 Absolutely. I mean, it is an area where I think more public opinion is good. And right now, the number of people who are actually building this technology is quite small relative to the number of people who are using it or who will soon be using it. And so I just think the more people know about what's going on, the better. And I think it'll ultimately be good for the tech industry to have that kind of feedback,
Starting point is 00:24:24 even if it is annoying and blows up your inbox in the moment. Yeah. And for what it's worth, it was like, I was never annoyed about that. You were remarkably chill. I was pleasantly surprised by the response that you and I talked after I had had this conversation, but before I published my story. You didn't freak out. You didn't accuse me of lying. It was a remarkably civil conversation. And I have appreciated that because that's not... And I'm not just blowing smoke here. That is not the reaction that I expected
Starting point is 00:25:04 given how these things can go with other tech leaders. So I guess I'm just hopeful that the response to that article has not, the lesson from that has not been that we should build in secret and should never let anyone try our stuff. I hope that it has been salutary for the whole project of building good and safe and responsible AI to have some feedback along the way. 100%. I don't think anybody was...
Starting point is 00:25:35 I'm genuinely trying to think if anyone was irritated. It's like everybody is sort of like, okay, this is good. Lesson learned. And we have a whole bunch of mechanism for uh you know preventing things like that from happening again and uh i mean it's just just it's it's all good like um somebody said to me at some point that uh like all feedback is a gift like the fact that you spent two hours trying very hard to get this thing to do unusual things and then publish the whole transcript and then wrote this article that helps people pay more attention to the importance of responsible AI, like all of that's a gift. And that's the way you should just sort of look at it.
Starting point is 00:26:32 Yeah, I'll remind tech executives of that the next time I get an angry call from a comms department. Kevin Scott thinks feedback is a gift. So maybe you guys should get on board with that. I did also hear that you guys had Sydney swag made. And I'm a little offended that none of that has shown up in my house. Did you hear that about the beer? So this was my favorite thing that came out of that article was that there was a brewery in Minneapolis that came out with a beer called Sydney Loves Kevin. And I have not tried it yet. I heard it was good, but maybe you and I can get a pint of it sometime. We should absolutely make a road trip to Minnesota to get a pint.
Starting point is 00:27:07 That would be hilarious. It's just so exciting to me how much technology and AI are becoming part of the mainstream conversation. It really does make me think about the episode we did with Neil deGrasse Tyson who is one of my heroes. In addition to being a brilliant astrophysicist, Neil, I think, is, in my opinion, the best science communicator since Carl Sagan. And I think Neil tried to model a lot of what he's doing after Carl.
Starting point is 00:27:43 And yeah, he just does a terrific job of helping explain some incredibly complicated scientific concepts to the public, but not in a condescending way. So I just love Neil to death. Yeah, I do too. He's also one of my heroes. And I think you're exactly right. He is definitely one of our best, if not the best, living science communicator that we have. And I think it's so important that we have people like him in these moments. As you say, I think helps get us out of our bubbles a little bit. And also I think helps put this again into terms that people who might not follow along with every little thing can understand and get excited about and also start to think the important questions about.
Starting point is 00:28:26 So this is a great conversation with Neil. Right, right. Well, actually, so I think much less about kids than I do about adults. Yeah. Because kids are born curious and they retain that curiosity at least into middle school. So I'm not so worried about them as I am about adults who think they're thinking rationally and are not. And adults are in charge of the world. They wield resources. They vote. They lead. They're captains of industry. They're all the things
Starting point is 00:28:56 that shape this world. So my, I, yeah, we can target children, but I'm too impatient. I don't want to wait 30 years until they're old enough to then make a difference. Whereas adults can make a difference with a pen at the bottom of a document that puts a new legislation into effect or new educational directives or new mission statements. So that's been my, uh, my goal now in terms of role model, I think the term is a little overplayed, overvalued, I should say for the following reason. I grew up in the Bronx, New York. And if I needed another black person who grew up in the Bronx before me, who then became an astrophysicist to have preceded me, I would have never become an astrophysicist. So role models, as they're generally thought of, inherently require that you are not the first to do something. But suppose you have interest where you would be the first to do something,
Starting point is 00:30:06 either first out of your town. Like you said, you're the first. Your role model can't have been people in your town because none of them went to college or in your family. So at some point, you have to say, my role model is someone I have never met and I may never meet, but I know enough about them
Starting point is 00:30:27 that I want to emulate what happened in their life. And so I knew this from very early. So I had an athletic role model of baseball, of course. I grew up in the Bronx where the Yankees play. So there was a Yankee that was a role model. I didn't want to be him. I just said, if I played baseball, I wanted to play baseball as well as he did. All right? That's all. And I visited my local planetarium, the Hayden Planetarium. Took extra classes there.
Starting point is 00:30:53 They had educators and scientists. And they had educators that had such a way with words and storytelling. I said, if I'm ever an educator, that's the kind of educator I want to be. And the scientists had such command and they just knew so much. And the weird thing is, I remember thinking, I will never know this much about the universe as much as they do. Meanwhile, they're twice my age at least. And I'm 15 when I'm having these thoughts. Okay. And not realizing that when you get a PhD, you spend a whole five, six years studying that one subject. All right. And six years before that, I was nine.
Starting point is 00:31:33 So I had to get the time scales understood within me. So for me, a role model at its best is assembled a la carte from different people. And that's why if you're visible, you should always in the back of your head, ask yourself, am I being something that someone else might want to emulate? Because that chance is very real, whether or not you ever meet the person. And so when I see little children coming through the Hayden Planetarium as part of the American Museum of Natural History, there's school groups that come through, many international tourists, but also camp groups. I see little kids and they're looking bright-eyed and all I've committed is that whatever I help create there will have the impact on that next generation the way the educators and scientists had an impact on me.
Starting point is 00:32:33 Then I've given back to this world. And then when we talk about how important it is to bring people into science and technology, a lot of that, I think, comes from being able to inspire them and their creative minds. Yeah. There are so many structural barriers for a lot of people. And it's really easy, I think, for folks to get confused about how much luck and good fortune is involved in all of us who have the privilege of having interesting careers where we get to have a little slice of impact on an industry. How much luck plays a role in that for us? And so, you know, William Adams, who worked for me for a while in Okto, he was one of the first people I hired when I came to work for Microsoft, is just an incredible human being, like a source of inspiration for me personally, and the extent to which he has dedicated his life, not just to being a better engineer, but as a Black engineer trying to figure out how he goes and addresses some of those structural impediments to having more folks who are historically
Starting point is 00:34:03 underrepresented in this field be able to participate. It's just really, really inspiring seeing what Will has chosen to do with his life. Yeah, well, let me put on my speed mouth then. The general thing is, and this started back with, uh, when George Floyd was killed. Um, I kind of really sat and thought it's like, okay, what am I going to do? I mean, this is not the first black man who's ever been killed by police. He's not going to be the last, but this is a good moment because it was caught on 4k video. People are kind of paying attention. What am I going to do? Right. And I, one thought was, well, I've got money.
Starting point is 00:34:45 I can just kind of throw money out to people, you know, and be that kind of philanthropist. Like, nah, once your money's gone, that, that effect is gone. Well, I have 40 plus years in tech and I know that tech billionaires are kind of the top of the world right now. So that means there's more money in tech. Why don't I help get more people into tech? Right. So then that led to, well, there's lots of different models for that as well. I could be an LP in someone else's thing or whatever. There's lots of different ways of doing it.
Starting point is 00:35:24 And I just came up with a model. I was like, well, what matches my skills and experience and network best? So I came up with the studio model, right, Adventure Studio. And the difference between Adventure Studio and a plain old VC or an accelerator is I tell people, think of Motown, right? I'm the Barry Gordy, right? I'm going to find the talent. I'm going to train the talent.
Starting point is 00:35:47 I'm going to find Aretha Franklin. I'm going to find Michael Jackson. I'm going to help show them how to dance, stage presence, give them the record contract, put them out in the network and help them build their thing, right? Do that with software. So it's about creating software, creating a network, having a finance network,
Starting point is 00:36:10 having resources as simple as, as soon as you start your business, you need a tax accountant or else you're just going to fall behind on your taxes and you're going to be out of business in a year. You need an admin, you know? So you need three or four programmers. You're not going to write the code yourself. And when you fail, you need someone to say, that's all right.
Starting point is 00:36:32 Let's do it again. Right. And this is something that typically black and women businesses don't have. Right. They struggle to do it once they get burned. They're working at McDonald's or they're back to general population, they don't try again because they got obligations. So the studio is intended to put a little cushion that they don't normally have in the society such that they can have enough longevity to succeed, right? So I've been doing a lot of network building. I've been writing tons of code because the other thing that I'm telling people is, okay, AI is the thing. Two years ago, I would have told you, go out and learn some C-sharp, you know, and web development. Now it's
Starting point is 00:37:21 like, no, no, no. First, you need to go play with ChatGPT. And if you're going to write any code at all, you're going to use Copilot, right? And I'm not just saying that because I'm shilling for Microsoft. It's like, I do it myself and it makes me 30% more productive, at least. So I turned that and the software I'm writing is about saying, how can I write code such that it's going to be more composable when I use AI to do it, right? So for example, there's zip, the zip file format, right? Compressed files. There's a library for that, and it has its quirks. I wrote one myself. I have my own zip decoder, and the interface is so simple that when I want to say, now compress that and stick it in a zip, it's one function, three parameters, done. Right? Yep.
Starting point is 00:38:14 Clear licensing. I don't have to worry about all the licensing all over the place. And that's a difficulty when you start stitching with AI. It's like, well, wait a minute. Where did that data come from? Where did that code come from? Do you have the right license for that chunk of code right there? So I'm creating a substrate of code that's like, this is clear licensed, right? And here's how you program leveraging AI, right? This is how we have to do it. This is how we're going to 10x our abilities, right, is by leveraging AI. You can't just go and lock yourself in a room and think you're going to hammer out some code. By the time you're done, someone else who's got AI has already done it 10 times over. So you better learn how to do it with AI, right? So this is what I'm currently engaged in. Yeah, honestly, William's work, what he's doing is so inspiring, as you say, and what he's accomplished in his time at Microsoft and now after Microsoft is so impressive.
Starting point is 00:39:12 And I I loved that conversation between the two of you. And and it's great that we have people like him who are doing this important work and really working to inspire the next generation. Yeah, for sure. I mean, maybe the theme of this podcast, like the people that we talk to, are folks who are just deeply passionate about something and who are fully committed to pursuing that passion. And it's just a great pleasure to be able to chat with so many people who have that mindset. And one of those conversations that we had this year was with my very favorite science fiction author right now, Adrian Tchaikovsky, who has written just really incredible, hard science fiction books.
Starting point is 00:40:02 He's got a super interesting background as many science fiction authors do. Like, he is a zoologist and lawyer, you know, by training. And, you know, I think he wrote, maybe inadvertently, like one of the most important books on contemporary artificial intelligence, which was the third installment of a trilogy that he wrote called Children of Memory, where, of all things, pairs of corvids, intelligent birds, are maybe the best metaphor that I have seen for large language models. It's just an incredible thing that came from his imagination before large language models
Starting point is 00:40:54 really were doing some of the things that they could do. It was just tremendous to be able to spend some time talking with Adrian, not just about his craft, but about what inspires him to write the things that he does, and what some of his takes are as a creative person, like a writer, about what this AI moment means for him and people practicing his craft. It's just a great conversation
Starting point is 00:41:24 and wanted to give people a chance to listen to a little snippet of that conversation in this, in this year in review, we're doing here. I mean, when I'm working from the point of view of a non-human entity, whether it's a sort of an uplifted animal or an alien or something like that, it's generally the start point is the input so you look at what the senses it has how it experiences the world around it and that then that gives you a very good filter through which to look at the world and interpret the things that are going on around this this particular character it gives it its own very sort of set of priorities that can be quite different to human because we're very, very limited to our senses. I mean, our entire
Starting point is 00:42:12 picture of what goes on around us is fed to us through the various ways that we sense our environment. And when those ways go wrong or when those ways are given, giving us uncertain, uncertain information, you can get some sort of profoundly weird and dysfunctional, dysfunctional worldviews generating from that, which can be extremely hard to dissuade someone from even if even if you were even if you knew that you you had a problem which meant that you you know you were seeing things you were hallucinating the that doesn't necessarily mean that the hallucinations aren't themselves still incredibly powerful i mean it's very it's very hard not to react to a thing that you're either telling you is there yeah it's i mean sleep paralysis is a perfect example of that because most people the spiders who are the you know become the protagonist of the story uh and yeah i don't want to give too much away about the book for folks who haven't read it um but you you basically develop this very complicated society of intelligent spiders, which is a really interesting premise.
Starting point is 00:43:51 And part of what makes them so compelling is their sensorium is very different from humans. You want to talk about that a little bit yeah so i mean i mean weirdly enough they are they're also still considerably more human than um the either of the two major non-human and non-human points of view presented in the next book yep uh because they're still very visual creatures they're land creatures and there is a certain kind of factor in their evolution which is going to make them more human than they might otherwise be but at the same time um so spiders have a whole suite of senses that we can't really easily imagine they have um the ability to sense um sort of sense and chemicals and in a way that we can't they have ability especially to to send vibration in the way that we can't so they live
Starting point is 00:44:43 in a world that is constantly informing them of things that we would be completely oblivious to. And so you, as this, and the way I go about this is, I kind of work in sort of organic stages. Well, you know, if they're like this, then what would their, what would their sort of early development be? And then just building on each other state each day to make this more and more complex um society as they go as the book takes them through
Starting point is 00:45:10 time and you get a society which has a lot of um for example technology that builds on their own strengths they can do a lot of things that we can't quite early on purely because they have a lot of tools um even down to just being able to spin web which gives you the ability to make say watertight containers very very early on in your society which is a major you're making things like clay pots and so forth is a major step forward for human society and it's much harder for us because we have to use fire and we have to make them where the spiders can just literally produce them from their own bodies. And so little things like that then go on to have enormous implications to how their society develops.
Starting point is 00:45:51 And it also affects the way that they conceptualize of less kind of physical things. So you have one point where there's an effort to try and communicate a picture from one culture to another and they run into the basic problem that when a human is is coding a picture in a sort of a mathematical form we start at the top left or where you know one one of the corners depend you know possibly culturally dependent and we work um that work through the rows. These spiders start in the middle and spiral outward. That makes perfect sense to them because how can you necessarily know how big the picture is going to be when you start it?
Starting point is 00:46:32 And so you just start and keep working until you've got all the picture. But it means that a lot of basic ideas, even when they have a means of communication, become very hard to communicate because the way you're thinking about them is very different. Yeah, I loved hearing Adrian talk about, you know, how he's thinking about creating these species and civilizations. I still fundamentally consider myself a writer first and foremost. And so I love hearing about the creative process that other writers, writers more talented than me like him can do. And how also kind of thinking the broader terms about how, you know, sensory input and, you know, can shape kind of everything and how we operate and kind of the influence of, you know, art can
Starting point is 00:47:17 have on science and vice versa. Yeah. Another person who's thinking about some of this stuff in a really significant way, was on the podcast this year. And one of our most engaging conversations, the pioneer of emotion AI, Rana Al-Khaloubi. She believes, and I agree with her, that we're missing out on a huge amount of data that we could be using to train our AIs to make these systems more not human-like, but more relatable to humans. We had a super interesting conversation, which we're going to listen to right now.
Starting point is 00:48:02 Yeah, I mean, it's exactly what your friend is saying. Empathy and emotions are at the center of how we connect and communicate as humans. And our emotional experiences drive our decision making, whether it's a big decision or a small decision. It drives our learning, our memory. It drives this human connection. It's how we build trust. It's everything. But if you look at how we think about technology, it's often what I call the IQ of the device. Take chat GPT, right? Very smart, very intelligent. But it's all very focused on the IQ, the cognitive skills. And I felt like it was out of balance with our emotional and social intelligence skills. And I wanted to marry the IQ and the EQ in our machines, basically. And so I had to go back. I'm a computer scientist by background, but I had to like study the science of emotions and how do we express emotions as
Starting point is 00:48:55 human beings. And it turns out 93% of how we communicate is nonverbal. It's a combination of our facial expressions, you know, our hand gestures, our vocal intonations, like how much energy is in our voice, all of these things. And only 7% is in the actual choice of words we use. So I feel like there's been a lot of emphasis on the 7%, but the 93% has been lost in cyberspace. So I'm kind of reclaiming that 93% using computer vision and machine learning technologies that weren't available, you know, 20 years ago, but now they're ubiquitous. all of my social media consumption because you effectively have these machine learning systems, like particularly for businesses where their business model is engagement. So like the more you engage with the platform, the more ads run and the more money that you make. It is very easy to
Starting point is 00:50:00 like get systems that get the engagement by triggering your amygdala and like keep you in this. And it's very easy to be triggered by email. Like I all the time have email conversations with colleagues where like I get super agitated by the email conversation. And if I just jump into a video call with them, even like not even face to face, but, you know, what we're doing right now, like in seconds, like all of the stuff that I was agitated about goes away. So I'm just super intrigued by what you're doing. Like, how do we actually get this rolled out more broadly? Because I think you're absolutely right. Like we've focused so much on the text and text is so rife with opportunity to get us emotionally activated in the wrong way.
Starting point is 00:50:54 Way, right. Because there's a lot of confusion and ambiguity where you can clarify that when you see the face or hear the voice. I mean, I think what's really cool about this and what ended up being both the opportunity, but also the biggest challenge for Affectiva, you know, when we spun out of MIT, was that there are so many applications of this technology. And so we tried to focus, but it was always this challenge of like, oh my God, like there are so many cool applications. And so some that I think are really powerful, one is the automotive industry where we ended up selling to SmartEye and they're very focused on
Starting point is 00:51:28 bringing driver monitoring solutions to the world. And so this idea of understanding driver distraction and if you're texting while driving, well, we can tell that using computer vision, right? Look at your eye, your head movement, and if you have a phone in your hand, drowsiness, intoxication even. We've started doing a lot of research to detect that using cameras, optical sensors. So automotive is one area. We can do advanced safety solutions. We can look at, like, is there a child seat in the car and an infant in it?
Starting point is 00:52:03 And often, not often, but about 50 kids or so get forgotten in the car every year and they die of heat. So that's very fixable. We can fix that. Mental health is another space that I'm really fascinated by. We know that there are facial and vocal biomarkers of mental health disease, depression, anxiety, stress, even Parkinson's. So imagine if we, every time we hop in front of our computer with people's opt-in, of course, we can kind of articulate your baseline using machine learning. We know your baseline. And then if you start deviating from it, we can flag that to you or a loved one or a psychiatrist. So I think there's a lot of opportunity, but we're missing the scale. That actually kind of makes me think about how connected
Starting point is 00:52:53 to creativity so many of our guests have been as we've kind of been going through this year in review. And so let's share a clip now from Will.i.am. He talked about technology and how it changed the music industry in a super, super interesting way. Yeah. So imagine this is 1970 and you're a drummer from a band that's pretty popular. And you're a drummer that's coming up and you have aspirations as a drummer. And then there was a drum machine, but you weren't really threatened by the drum machine because it sounded like, didn't really sound like drums. And then the 80 a drum machine, but you weren't really threatened by the drum machine because it sounded like... It didn't really sound like drums.
Starting point is 00:53:28 And then the 80s come around and the LIN-900 started sounding a little bit more realistic, but it was stiff and robotic. And then Prince really made some pretty awesome songs with the LIN-900. and then the Akai MPC-60 drum machine came and now you could sample live drums and put the swing on it and it kind of sounds realistic. And then 2000 comes around and you're like, yo, fuck these drum machines, bro.
Starting point is 00:53:58 All the songs on the radio are fucking drum machines. And the 70s was all human beings. Tight. You gotta be super precise. In the 70s, it was all human beings. Tight. You had to be super precise. And the drum machine ate up a lot of the freaking drum time for drummers on the radio. And since then, live drums on the radio and on streaming, you don't hear no live drums anywhere on the radio. But what happened was
Starting point is 00:54:24 the drummer, a lot of them, became the producer. Because they were like, you know what, if this is the case, I'm going to learn this machine and I'm going to produce. And the role of the drummer, they made more money. Because the drummer never got publishing anyways. Because what's the publishing on drums? So the drummer in the band always got the shitty end of the stick when it came to ownership of the song because how what part did you write i wrote this everybody says boom boom no i said boom yep everybody says boom
Starting point is 00:55:01 so when it came to the the how participated, the drummer was always last. And that sucks. But the drum machine and producing on computers really empowered the drummer. The same is for music, the whole entire package now. It's just drums. It's guitars, it's just drums. It's guitars, it's bass lines, it's freaking chord progressions, ensembles,
Starting point is 00:55:30 like orchestral. Everything of music is now going through what the drummer went through in the 70s to the 2000s. Now the guy with the idea, the girl with the idea, the person with the idea, now they don't need the whole entire studio.
Starting point is 00:55:47 They don't need a whole entire band. They just need their idea. And the machine will supercharge them the way the drum machine and the DAW supercharge the drummer. That's the optimistic point of it all. Then there's like a bunch of negative stuff uh that i don't even want to entertain because i think we what we should be doing with ai is not just to augment yesterday yes it's going to uh you know render how we used to do things kind of obsolete or create new ways of doing things that undermine like how we used to get paid.
Starting point is 00:56:29 There's going to be new ways that we get paid. A song is going to be a lot deeper right now. But MTV said, hey, now don't just write the song. Do a video. That video used to be just promotion. If you wanted to go and sing and, you know, you're you're from LA, but you want to tour all of America. Other locations were like, well, send me, send me like a part of what a record contract was to be. And a record contract was based on the limitation of lacquer. And even though we had technology like mini discs and CDs, there were no more limitations.
Starting point is 00:57:22 Music is still composed as if there were limited space on a disc. A song is 3 minutes and 33 seconds. An album is 12 songs on it. Just because of the RPMs, 33 that went around the record and the tempos of every pop song was limited. So even with, you know, with this advanced technology that we have,
Starting point is 00:57:43 it's still basing it on limited. So that means you have to reimagine what a song is, just like they reimagined what a song was from the 1800s, when it was just theater and opera. When it came to the recording industry, they're like, okay, okay, we got to change a different song format here. You know, Muddy Waters, that song is too long. I need you to shorten it. Because when they would sing, the blues, they would sing,
Starting point is 00:58:04 there was no time limit. They would just go to a bar. You hear somebody sing and they sung for fucking hours. A song is like a reduction of the song sequence because of a limitation of the technology. So that means with AI and AI music, somebody has to reimagine what the fuck a song is. It's a discussion. It's a place for you to put your memories, a place for the song to alter based on your mood. Like, I got a feeling it's not just for what's the version of I got a feeling when you're pissed.
Starting point is 00:58:39 I just wrote the one that you're when you're optimistic. Where's the love is like when something goes wrong. What about when something is right? And in their songs like Boom Boom Pow, they're about nothing. Hey, what's Boom Boom Pow about? I don't know. Nothing. It's describing the song. This song is promoting the song that I wrote. What song? This one right here that I'm singing. Which is a crazy concept of songwriting.
Starting point is 00:59:11 Like, what are you going to write about? I don't really know what I'm going to write about this song I'm writing. But you have to change AI's opportunity to reimagine what a song is. To reimagine what it means in your life, to reimagine what it means to the world where it's discussion-based. We need to inspire the creative community to use AI not just for entertainment, but to reimagine the world. Tomorrow's industries are what? Tomorrow's jobs are what? If AI is going to replace jobs,
Starting point is 00:59:45 it's going to create new jobs. So shouldn't the creative community also be tasked to imagine what tomorrow is with it? Or are we just going to freaking pretend that we're going to make songs forever? No, that ain't the case. You ain't going to be making songs forever for business like we used to. You're going to be making experiences with it. You're going to be scoring moments in different ways of scoring moments, but it isn't going to look anything like yesterday. No. All right. I really am seeing a theme here, kind of seeing the intersection of creativity and innovation and technology and how all of these things influence one another.
Starting point is 01:00:24 Yeah. Even Toby Lutka, who is one of my absolute favorite engineers, in addition to being the incredible entrepreneur and founder of Shopify, and I were talking about some of these ideas about the change that's coming and how change influences the nature of craft and creativity. We had a really fascinating conversation where we reflected on this, which you all are going to get to hear a little bit of right now. The core of a craft is like conversion something into something, right? Like every craft that exists is sort of like that particular process. It's like people who convert wood into furniture are carpenters. And one thing which is really inspired about these things
Starting point is 01:01:08 is that they really is the identity of a carpenter that they are carpenter. Their identity is actually that they are a craftsperson and that the particular craft that they've pursued and they've become a meister in, it's usually just their aptitude. But it's a very common type of person, which is also really, really useful,
Starting point is 01:01:33 because therefore some crafts exist for a moment and then they don't exist anymore and they're no longer needed. Like the blacksmiths kind of figured out what to do next. And so I think all of this is actually really worth sort of understanding. And because, you know, like I do sometimes talk to, you know, junior engineers
Starting point is 01:01:53 who describe themselves as, I'm a React front-end developer. And I'm like, wow, you just put two little bits into your identity, but you really don't need to because I think you are actually much more than that, because you can solve problems extremely well.
Starting point is 01:02:10 The rallying cry is like, people were able to make the things that they really want with what they got. That's sort of at the core of it, and I love that. So our last guest of the year is really looking towards the future. David Kirtley and his company Helion are working to build the world's first nuclear fusion power plant, which as the words come out of my mouth, sounds incredibly audacious. But if you really think about the biggest challenges facing us as a species, having access to not just a clean energy supply, but abundance of energy, like where we can have way more power than we have now available to us to help us to, you know, fulfill and achieve our biggest ambitions is super, super important. I completely agree. And I have to say, audacious or not,
Starting point is 01:03:10 and I think audacious is a great word for it, listening to you and David talk got me so excited about what the potential in the future could be if these things can come to fruition. So let's close out now with your conversation with David. But I think the thing that people really miss, and you probably have a better perspective on this than I do, is that starting somewhere in the 1970s, we just stopped using energy at the rate that we had been using it before. And, and like, it's actually a staggering thing to think about, you know, not not just like, what happens if we could take the energy that we're consuming right now, and it's like more sustainable, but like, what happens if like energy becomes sustainable and cheap and abundant enough where you could use 100 times more 1000 times more 10,000 times more of it than you're using right now. What's then possible? So like the talk, talk a little bit about like why, why energy matters. Yeah. I think about this a lot actually. And from two perspectives, one is just recently we're starting to use more electricity. We're upticking for the first time in a long time, where in the 1970s and 80s, we plateaued in a lot of ways in terms of our energy use.
Starting point is 01:04:33 I think that's totally right. But recently, electric transportation, probably computation and AI kicks into that, too. and then looking at ways to solve climate change by spending electricity, then goes in and it's starting to increase our demand for electricity. The other thing I tie to in this is standard of living directly ties with access to electricity, low cost electricity. So you look at different parts of the world and standard of living, and you can say like, okay, great. They have more electricity access.
Starting point is 01:05:03 They have more standard of living, however you want to define that define that but i asked the same question what happens if we had 10 times does that mean our standard of living would be 10 times what does that mean would we have access to cleaner access to clean water desalination is the classic there's a there's a trigger at the one to two cent per kilowatt hour where if you can have electricity at that cost, then now you can desalinate water through electrolysis and other methods directly. Clean water is now cheaper than it was to actually like pull it out of a river and purify it. And so suddenly, you know, you enable some of those things which are clear. But I think through, you know, if you had, you know, the computational access where you can have large-scale servers at everybody's house, you got to get maybe the server price down. But now you can actually do really interesting things on the computation and the cooling around that. One of the things that maybe people don't appreciate or think about clearly enough is maybe everything good that has ever happened in the history of humankind is humans discovering new sources of energy and being able to put that energy to work, solving problems that benefit humans.
Starting point is 01:06:22 And so in a sense, like you actually want to be able to consume more energy, you don't want the consumption of energy to be a bad thing, because nominally, consuming more energy means you're doing more of those useful things for humanity. You know, like electrolysis, like, I mean, one of the things here in the state of California is like, we have parts of the state where you have abundant water, and you have parts of the state where you have abundant water and you have parts of the state where you have no water. And one of the reasons that California is habitable is we spend an enormous amount of energy pumping water from places where it's abundant to places where it's scarce. And I think you're going to have to do more of that in the future with climate change.
Starting point is 01:07:03 So you really do want a world where you have cheap, abundant energy, which is why I think the problem you're working on is, like, I think artificial intelligence is a pretty important problem. That's the thing I spend most of my time working on. But I think your problem is more important than my problem. And my problem is dependent on your problem. Yeah, at some scale, your problem, our two problems work together. Yeah, I don't know that we know, frankly, what happens if you
Starting point is 01:07:33 have more energy, more low cost electricity, particularly ones, it has to be low cost. That's really important. If it just costs a lot more, and you have more of it, it doesn't actually help help the situation where the essentially effective cost of burning wood to burning coal to vision power and then to renewables that are some of the renewables when you have access to good sunlight, solar power can be really low cost. You have these stage gates for humanity that you unlock. And so I don't know that we know the answer to that, but I'm excited to find out. That's for sure. That was a whirlwind tour of our incredible 2023
Starting point is 01:08:12 guests on Behind the Tech this year. You can check out those full episodes on your favorite podcast platform and on YouTube. So be sure to check those out. And if you have anything that you would like to share with us, you can email us anytime at behindthetech at microsoft.com. Thank you for tuning in and we will see you in 2024. See you next time.
