Microsoft Research Podcast - 113 - An interview with Microsoft President Brad Smith

Episode Date: April 1, 2020

Brad Smith is the President of Microsoft and leads a team of more than 1,400 employees in 56 countries. He plays a key role in spearheading the company’s work on critical issues involving the intersection of technology and society. In his spare time, he’s also an author! We were fortunate to catch up with Brad who, late on a Friday afternoon, sat down with me in the booth to talk about his new book, Tools and Weapons: The Promise and the Peril of the Digital Age, and revealed the top ten tech policy issues he believes will shape our own century’s roaring 20s. He also gave us a peek inside the life of a person the New York Times has described as a “de facto ambassador for the technology industry at large” – himself! https://www.microsoft.com/research

Transcript
Starting point is 00:00:00 Fundamentally, what we are talking about is endowing machines with the power to make decisions that previously could only be made by humanity. And we have to ask ourselves, what kind of decisions do we want machines to make? If we have any aspiration of these decisions reflecting the best of humanity, we better focus on responsibility and all of the pieces of it. You're listening to the Microsoft Research Podcast, a show that brings you closer to the cutting edge of technology research and the scientists behind it. I'm your host, Gretchen Huizinga.
Starting point is 00:00:44 Brad Smith is the president of Microsoft and leads a team of more than 1,400 employees in 56 countries. He plays a key role in spearheading the company's work on critical issues involving the intersection of technology and society. In his spare time, he's also an author. We were fortunate to catch up with Brad, who, late on a Friday afternoon, sat down with me in the booth to talk about his new book, Tools and Weapons: The Promise and the Peril of the Digital Age, and revealed the top 10 tech policy issues he believes will shape our own century's roaring 20s. He also gave us a peek inside the life of a person the New York Times has described as a de facto ambassador for the technology industry at large, himself. That and much more on this episode of the Microsoft Research Podcast. Brad Smith, welcome to the podcast. Thank you. Nice to be here.
Starting point is 00:01:41 You're an unusual guest for us in the booth. As president of Microsoft, you oversee a lot of stuff and you wear a lot of hats. So let's kick things off by talking about what gets Brad Smith up in the morning. What does a day in the life of the president of Microsoft look like? I think what gets me up, frankly, is the opportunity to sit down and work hand in hand, or at least arm in arm, with researchers, with engineers, with people focused on computer science and data, and what it all means for the world. Because that's really, in many ways, my job. It's the intersection, if you will, between engineering and the impact of data and technology on the world today, the issues, the challenges that all this creates.
Starting point is 00:02:25 I, you know, spend a lot of time representing Microsoft externally. I spend a lot of time working on our big initiatives internally. I like to say that if there's an intersection, and there is, between engineering and the liberal arts, I'm the liberal arts side of the intersection, but I'm right smack in the middle of it every day. I want to go there for a second because we're looking at universities around the country that have been responding to the uptick in STEM majors and the downtick in humanities majors. And they're responding financially. They're closing some departments and they're consolidating some. Speak for a second about the importance of the liberal arts and humanities road coming into this intersection. I think the thing that people are missing today is that more than ever, technology is a multidisciplinary sport.
Starting point is 00:03:19 This is an industry that was largely built by engineers, researchers, and developers and the like, and I grew up in it. I've been at Microsoft for more than 26 years. But if you look at where technology is going, I think everyone who majors in computer science or data science needs to take a dose of other courses in the liberal arts. I think everybody who studies in the liberal arts absolutely needs some exposure to computer science, to data science, to statistics and the like. But what we really need to recognize is the teams that are going to do the best work, who are going to solve the world's greatest problems using technology, are almost always going to be multidisciplinary teams, people who've come from different functions and different backgrounds. Well, a big chunk of what we're going to talk about today is on the topic of artificial intelligence or AI, and we have a lot of ground to cover. But before we get into
Starting point is 00:04:15 the weeds, I want to start at a higher level and look at AI through the lens of responsibility. I think we all realize the power of AI, and many have begun to talk about things like ethical AI and trusted AI, but you've chosen the word responsible. Why? I think it's important to have a word that encompasses more of what we're really talking about. Ethics play a fundamentally important role. There are things that I think go beyond ethics to some degree that are grounded in the rule of law, in the recognition of human rights, an element of societal responsibility. Fundamentally, what we are talking about is endowing machines with the power to make decisions that previously could only be made by humanity. And we have to ask ourselves, what kind of decisions do we want machines to make? If we have any aspiration of these decisions reflecting the best of humanity, we better focus on responsibility and all of the pieces of it.
Starting point is 00:05:19 Well, on that note, you and your colleague, Carol Ann Browne, who's Microsoft's Senior Director of External Relations and Executive Communications, have a new book out called Tools and Weapons. Just the title is fantastic. And it's evocative of the idea that every new technology comes as a package deal. It's both a blessing and a curse. So tell us what inspired you to write this book at this time. I think two things inspired us to write it. One is the ubiquitous nature of digital technology in the world today.
Starting point is 00:05:52 It really has become the fabric of our lives, our homes, our communities, our societies. It is in some ways at the foundation of every opportunity to make progress. Technology is also part of every challenge that every community is facing. That really speaks to the tool and the weapon that technology has become. And we really felt that it was important to reach a broader audience to bring these issues to life. These issues are too important to be left to people who work in tech companies. By definition, they're affecting everyone. And I think it's to some degree incumbent upon us who are closer to it to help make the issues, the facts, if you will, more accessible to more people. In your work at Microsoft and in Tools and Weapons, you outline six core principles
Starting point is 00:06:43 that you suggest will guide us into this next decade, and they provide the underpinning of responsible AI, which we've just alluded to. So give us a brief overview of the principles and why they're important, but also how you see them playing out in what I'll call an AI 5G quantum computing cloud scale era. Well, first, we at Microsoft did develop and publish our six ethical principles in a way that's sort of remarkable to me. This was only two years ago that we did it. This was a joint effort of really people in Microsoft research led by Harry Shum and Eric Horvitz and people in the part of the company that I lead to work together. The six principles really cover, first, fairness or the avoidance of bias,
Starting point is 00:07:28 the need to protect privacy and security, the need to ensure that artificial intelligence is safe and reliable, the need to ensure that it's inclusive, I will say, for all people, and perhaps with a special eye towards the billion people on the planet who have some sort of disability. That adds up to four. Those four principles really sit on two others that are foundational for all of them. One is transparency, the notion that people can't understand or have confidence in the fulfillment of these principles unless there is a level of transparency. And then there is the principle that I think is the bedrock of them all, accountability. The notion that machines must remain accountable to people. The principle that the people who create this technology must remain accountable to society as a whole. That adds up to the six. And what I think is interesting in part is that this set of principles or other principles like them are really spreading around the world.
Starting point is 00:08:30 I think to some degree, Microsoft's principles influenced others. Certainly to some degree, other people's work influenced us. But mostly, and I think it's encouraging, people are tending to think in fairly similar ways and you see a consensus emerging more or less almost organically. That's encouraging. How do you think, how do you wrap your brain around the fact that while you and others can say these are the things we're aiming for, you've got all these other players and actors in the world that may or may not be as eager to follow those as you. Well, I think that really points to two very important dimensions. I'll just call it the state of responsible AI in the world today. The first is even those of us who embrace these principles have to recognize that being able to articulate them is not sufficient to operationalize them. And so the biggest challenge, whether you're talking about Microsoft or any institution in
Starting point is 00:09:31 the world today, is really to figure out how to take its commitment to principles and turn them into something that is real every day. And that requires going from principles to policies. You need to implement these policies in a series of standards, things like research or development guidelines. You need to put in place training programs for employees. You have to have the capability to measure and monitor whether they're being pursued. You need compliance systems. You need to build all of that.
Starting point is 00:10:00 And we need to do it in a case like Microsoft, literally at a global scale. And I don't think anyone should underestimate just the magnitude of that challenge. And then, by the way, you have the second challenge. What do you do about people who say, that's very nice, I don't care? I'm not going to be principled, or I'm not going to sign on for that principle. I'm going to use artificial intelligence in ways that are going to do societal damage. And I think this is where public policy and the law kicks in. Ultimately, the only way to ensure that everyone is ethical or is accountable for some ethical standard is to take the ethical principles that we want to apply universally and enact them into law. Every year, you and your team, while we're on the topic of lists, identify 10 top tech issues that you predict will be important for the coming year. And when it's a beautiful year like 2020,
Starting point is 00:10:59 for the coming decade, as you've said in your book, Tools and Weapons, technical innovation isn't going to slow down, so the pace of the work around it has to speed up. Give us an overview of the list you've got this year for the decade of our own roaring 20s, as it were, and your thoughts on how people doing the technical work, as well as the people doing the other work, might help address them and do so at the speed of technology. We really found it helpful to create our top 10 list this year. This is something that Carol Ann Brown and I have done for a few years in a row. And having then written the book and been out talking with people about the book,
Starting point is 00:11:44 we took the conversations and frankly, everything we were hearing from other people, took a step back and said, well, it's the 2020s. Let's just not focus on 10 issues for the year. Let's think about 10 issues for the decade. And they tended to fall into, I would say, four buckets. The first, an issue all of its own, but a bucket completely on its own, is sustainability. Just because we see climate as such an important issue and it's going to reshape everything, including technology. Second, we have issues of fundamental importance around trust, around privacy, security, digital safety, responsible AI. Third, we see
Starting point is 00:12:20 huge issues around geopolitics, whether it's the relationship between the United States and China or the focus on digital sovereignty, especially in countries in Europe. And finally, there's really the role of technology in inequality. We talk a lot about income inequality. You see technology playing into that, especially in the context of internet inequality. Some people have broadband, some don't. Skills or educational inequality, especially access to digital skills. Housing inequality in cities like Seattle or San Francisco, where the tech sector is fueling a rise in housing prices. So when you take, you know, the future of the planet, our ability to trust technology, the geopolitics of technology, and technology-fueled inequality, it's going to be quite a decade. The roaring
Starting point is 00:13:15 20s may be pretty roaring, I think, is one way to think about it. You know, you're a lawyer, and the thing that seems to be lagging the most in my mind, and I may not be alone, is that the law hasn't caught up to technology. What kinds of things are happening in the sort of political and legal structures around, we've seen GDPR in Europe and some of the other sort of thinking forward, what's happening elsewise in this arena? Well, the basic thesis of our book is that tech companies need to step up and do more and governments need to start moving faster. We are starting to see governments move faster, probably first and foremost in, I'll say, Brussels and Beijing.
Starting point is 00:14:16 Those are the two places where regulation tends to move the fastest. We're seeing it in other places. I think it'll be fascinating to see what unfolds in London now that the United Kingdom is really its own regulatory power, if you will. We will see more momentum in Washington, D.C. Already we're seeing it at the state level in the United States. We're seeing California be a leader in the United States around privacy. So I think it's very clear that by the end of this decade, technology is going to be more regulated than it is today. And that will be good, and that will create challenges for all of us who work with it.
Starting point is 00:14:51 Well, and the fact that it has to, I mean, you've got things that people would say, we don't even know what to do with this in a court, right? One of the points we've made is that in so many respects, digital technology has gone unregulated for probably a longer period of time than any important technology in the period of time, say, since the 1850s. Right. Compared to the automobile or airplanes, for example, everything that resulted from the combustion engine. We saw more regulation. Or just think about the world in which we live. Foods, drugs, you know, cars today, they're all regulated by health and safety standards, and yet digital technology is not. And yeah, I think it's overdue. You know, it doesn't mean that regulators should be thoughtless
Starting point is 00:15:43 or uninformed or fail to think about balance, but we do need a regulatory floor, and I think it's right to recognize that. Right. And even the things you mention, these are all things that have imminent harm potential if something goes wrong. And I think we're just starting to figure out that there's potential imminent harms with these technologies. I think that is true. And I think that by 2030, in so many ways, an automobile is going to be a computer on wheels. An airplane is going to be a computer with wings. But fundamentally, computers, digital technology, AI will raise many of these issues, even if they're in a box that's standing still. Well, one of the biggest fears that people have about AI, aside from sensational predictions in the popular press, is a grouping of topics that you've mentioned, privacy, safety, and
Starting point is 00:16:33 security in an AI world. We've talked a bit about the what of these concerns, but I want you to talk a little bit about the what now. Well, I think the first question for anybody who works in the technology field as a researcher, a developer, or designer is actually to think hard about what these issues mean for the products that people want to create. What does it mean to have privacy by design, to have digital safety by design, to have responsible AI by design, to have cybersecurity by design. All of these are design fields that have started to really take off,
Starting point is 00:17:09 and in many respects, they're maturing rapidly. In many respects, I think those of us who are connected with the creation or the research advances in the technology are absolutely in the best position to bring innovation to the protection of people that will be essential. And then if you look beyond that, all of us are users of technology. We're all consumers. Increasingly, there are many features in popular products, consumer products, business services, and the like that do protect privacy. Certainly they protect security. And the question is whether as consumers we want to use them. And for all of us who care about these causes, I think there is some real
Starting point is 00:17:51 benefit to using them and frankly helping to give a boost for the kind of usage that will help drive improvements. Right. Interestingly, and I've had some other researchers in the booth who've talked about these privacy and security and safety issues, a lot of technology is binary. You either want to use the app and so you agree to everything, or you say no and sorry you can't use the app. So is there any move towards controls on the part of consumers and users in technology to say, hey, it's not just binary, you can have this about me, but you can't have that? I think the answer is no and yes. No, I mean, some services are binary, but increasingly, you look at an app on a phone and you think about something like the location service. There's three choices. You can never use the location service. You can always have the location
Starting point is 00:18:42 service on even when the app itself is not running, or you can say the location service can locate me, but only when I'm using the app. And the first thing I would say is if you want to protect your privacy, you can go to that middle level and only have the location service know where you are when you actually want the app to do something for you. But I would then actually step back and look much more broadly. There's a lot to what you say in suggesting that we don't have as much choice as consumers as we might like. So what do we do? I've had vibrant debates in Silicon Valley where some in the tech sector have said, look, the fact that people are not turning away from this app or another means that people fundamentally don't care about privacy. I believe they do care, but people want to continue to use these services.
Starting point is 00:19:33 And where you see them manifesting their opinion is actually the public opinion that is increasingly shaping the views of government officials. The fact that California passed a sweeping privacy law after it had enough signatures to go on the ballot, after the polling showed it would be passed overwhelmingly, I believe says people do care, they want to have their privacy protected, and they want to be able to use the service. They want both. In your book and elsewhere, you also talk about the positive things that we're seeing as a result of advances in technology. And one of the best things about AI is its ability to democratize and improve areas like medicine and accessibility and the environment. So just in 2020 so far, it's been a busy January for you, Brad. You've led two big announcements for the company. One is on Microsoft's carbon negative by 2030 initiative. Yes.
Starting point is 00:20:29 To say it right. And another is the launch of AI for Health with Peter Lee from Microsoft Research here. Both are part of your AI for Good program. So tell us a little bit more about these announcements and why they're important to Microsoft's larger mission in developing technology. They were both really important and, in my view, exciting steps for Microsoft to take. Our carbon announcement, I think, is not just important to Microsoft. I hope it is something that can be part of an ongoing broader movement that we're clearly seeing every day, that is sweeping around the world,
Starting point is 00:21:06 moving across the business community, and really mobilizing companies to do more to address carbon and climate issues. It took a huge amount of work to bring together every part of Microsoft to really make that announcement possible. And it took a lot of iteration to sort of get to a point where we could have the ambition that was as high as I felt we needed, but also the rigor of a plan that would give us confidence that the goals could be met. It speaks powerfully to the role of digital technology in part, because we have these huge goals, as you mentioned, to be carbon negative by 2030, to in effect go back in time and remove by 2050 all the carbon that Microsoft has emitted since its founding in 1975. And part of this goes to the heart of more renewable energy for our data centers,
Starting point is 00:22:01 more efficiency for our data centers, a variety of other steps where digital technology, digital transformation will just be fundamental to not just Microsoft's own direct carbon reductions, but also across our supply chain, our value chain. So digital technology is, I think, a foundational tool for helping to address the world's climate needs. And at the same time that we hopefully have a planet that is habitable in the right kind of way, we can also spread better health for the human population. And this is where the AI for Health initiative that really Peter Lee and then John Cahan from the data science side have been at the heart of leading. And there are so many areas where it's now clear that data and artificial intelligence can help lead to breakthroughs.
Starting point is 00:22:55 Breakthroughs in helping us find cures for diseases, helping us understand the distribution of, if you will, health among different populations, helping us bring better health to broader populations. AI is, in a sense, at the heart of everything in the world today. So it makes sense that as we've been expanding our own AI for good efforts, we now have five pillars. We started with AI for Earth. We went to AI for accessibility, AI for humanitarian action, AI for cultural heritage, and now AI for health. It is exciting to see how many different problems AI can help us address. I think what it really points to, and I think it's an interesting aspect of all of this is, again, the multidisciplinary nature of technology. So much, I believe, of the cutting edge of research is not just within a field, but the AI for Earth
Starting point is 00:23:56 work is a great example of this. At Microsoft, we have a team that consists of computer scientists and data scientists and environmental scientists. And you can take the first two and add in a third discipline from a broad list of disciplines. And if you can get people working together, you can probably do only one in the AI game. It's at the forefront of every major tech company and, more importantly, the forefront of many nation states now. As president of this company, I'd like to know how you position Microsoft in this very large arena and how you view the company's role in the AI world. What's Microsoft's vision in terms of leadership in AI, both inside the company and outside? There are two things that come together that I think are critically important.
Starting point is 00:24:58 The first is Microsoft's grounding for all of us who work here in our mission. You know, it really is a mission to empower other people, other organizations all around the world to use technology, including AI, to achieve more. Now, what that means, put in that context, is a couple of key things. One is our mission really is universal. I mean, we're trying to create technology that people can use around the world to better themselves and their communities. One of the things that means is that we want to democratize technology. We want to democratize access to it. I don't think that any of us should want a future where the secrets or the wealth of AI resides just in a couple of countries. Or companies. Or companies, absolutely. I think we should think of it more like electricity.
Starting point is 00:25:53 Electricity has spread around the world and a country benefited from it mostly based on how quickly it adopted it and spread it to its rural communities and the like. That's what we should want of AI. But there is a second dimension that is also to some degree at odds with the notion of providing this technology to anyone who wants it to do with it whatever they choose. It goes back to these principles. And I would argue that those principles are even implicit in our mission. You can't empower people if you can't protect them, if you can't keep them safe. So there are certain use cases that we won't allow for our technology. At times, it means there are certain countries where we won't be comfortable providing the full range of services. And this is a more
Starting point is 00:26:46 complicated world. It is in some ways vastly more complicated than the world of producing Microsoft Word and letting anybody use it, knowing that somebody would create a work that would get the Nobel Prize in literature and someone else would write something truly horrible. But we created the tool, and we were not responsible for whether somebody turned it into a weapon, if you will, because we couldn't control that. But in a world where AI runs as a service in a data center from the cloud, you can impose more controls. And I think that's one of the reasons that governments and the public is expecting more of tech companies. They expect us to do more because we can and should. So along those lines, you've said that Microsoft isn't planning to deliver AI in a big box,
Starting point is 00:27:39 but rather deliver the building blocks of AI so anyone can build AI systems. Obviously, with some caveats there, since we're sitting here in the heart of Microsoft research, I want to get your take on what those building blocks are and the role of research in delivering them. Well, I think it's a really great question, and I see it not just at a place like Microsoft for search, but I've also served as a trustee at Princeton University for a number of years. And I would say two things. One is you see in computer science departments or you see in other departments that are really at the foundation for data science, certain ongoing opportunities for advances at the basic research level. And these are, in many ways, fields that people here at MSR and elsewhere have been heavily involved for not just years, but decades.
Starting point is 00:28:33 Things like computer vision, things like speech recognition, almost anything relating to machine learning. So you have a lot of these fields that are just moving forward very quickly. But at the same time, I think so much of the most important work is actually very multidisciplinary. Certainly at a place like Princeton, I have the opportunity to work and see some of the issues in the environmental field again, or microbiology. I see issues that we're working on, Microsoft and Princeton together, around so-called programmable biology. And I think that is such a defining part of the future. It's why I'm always excited about the fact that at Microsoft, we have a lot of people who have PhDs in computer science or data science, and we have a growing
Starting point is 00:29:26 number of people who have PhDs in other fields, and then we work to bring them together. And the same thing is happening at universities. Well, Brad, we've reached the part of the podcast where I always ask the guests to get real and answer what could possibly go wrong. A good part of your professional career has been dealing with things that go wrong in a court of law. And you're a veteran at the what keeps you up at night question. So as a leader of one of the most well-known tech companies on the planet, you have to consider every single day the potential downsides of every technology that your company is putting out there. So what keeps Brad Smith up at night? And how does he mobilize a company like Microsoft to
Starting point is 00:30:05 help him sleep better? I think fundamentally the thing that I worry about the most is the weaponization of the technology that we create. It can be weaponized in very specific scenarios, say something like facial recognition, to stop people from peacefully assembling in a city square. It can be weaponized because of the risks of bias by on a scale that has been well imagined. It was written about 70 years ago in the book, 1984, but now it can become a reality. I think the most natural thing for any creative company to do is to just keep creating more products and keep selling them to anyone who will buy them. And yet, if you want to be principled, if you want to do good, if you want to be responsible, you have to be able to say no. No, that is not something we
Starting point is 00:31:20 want to create. No, that is not something we want to sell for that particular use to that particular user. And it takes an enormous amount of discipline, self-discipline and business process to ensure that an organization, especially one operating at a global scale, will avoid falling into those traps. That's one of the things that keeps me up at night, wanting to make sure that we at this company don't fall prey to that kind of problem. You know, the researchers that answer this question can rarely go into those weeds. They're making the things.
Starting point is 00:32:02 A person like you can. Upstream is where the company and or the leadership decides how we're going to be as a company. One of the things that gives me great hope and encouragement is that I find that our employees do care about it and want us to do the right thing. And I've been so encouraged, even typically when I've run into an account team that might have been working for months to sell something and then they're told they can't, but they really do get it. But it does require that we all remember that we have to stay constantly focused on this. You can say you're principled, but if at the end of the day you'll
Starting point is 00:32:43 do every deal that can be done, then the only principle you're really uphold, but if at the end of the day you'll do every deal that can be done, then the only principle you're really upholding is a principle that you'll do every deal that can be done. And it ends up swallowing everything else. Brad, you have small-town Midwestern roots and a decidedly non-technical background. Give us a brief history of Brad Smith. How did your early life shape who and what you are today? And how did you gravitate from history to high tech? Well, I was really fortunate. I like to joke that I grew up in a middle-income family in the middle of the country with the last name Smith, the most common name in the middle of the phone book, almost literally. But out of all of that, I came out of Wisconsin, was really lucky to go to Princeton and, you know, work my way and get scholarships on my way through college.
Starting point is 00:33:36 And that was one of the places that introduced me to technology and technology policy issues. While I was a student, by my junior year, I had literally graduated from delivering newspapers in the morning and serving food in the cafeteria in the evening to having a job working for the university's director of government affairs. I was just a student assistant. It was nothing terribly grand, but the issues that we got to work on were fundamentally science and technology policy issues, things like federal support for basic research, things like the federal government's support for plasma physics fusion research, where Princeton did and still does have a national laboratory. So that really awakened my interest in this intersection between technology and policy.
Starting point is 00:34:25 And then a few years later, there was this new thing coming out on the market called a personal computer. And as somebody who was going through law school, somebody who had to do a lot of writing, I looked at this and I got quite excited, both because of sort of the technical technology gadget side. But also I looked at it and said, oh, but I can write faster and better if I have this, and then play games as well. And it turned out that all that was true. So, how did you end up working for the company that makes personal computers? Well, in a sense, it all was a sort of continuous journey. I bought that first personal computer. My wife and I were both law students, loved it so much that then my first job after law school was working in the federal courthouse for a federal judge in Manhattan. And so I literally took the equivalent of 10% of my
Starting point is 00:35:18 annual salary and bought a new, improved personal computer, took it into the courthouse where there had not been and there were not PCs, and then applied for a job in the law firm in Washington, D.C. And when I got the offer, I said I would only accept it if they would give me a PC on my desk. Happily, they said yes. It was such an unusual request for someone to make at that time that everybody in this large law firm of about 250 lawyers said, there's this weird kid on the eighth floor who seems to know something about computers. And so I had an opportunity arise to start to do legal work for Microsoft. I loved it so much. When they asked me to join the company in 1993, I said, yes, it was supposed to be a two-year leave of absence. I had just become a partner at the law firm.
Starting point is 00:36:09 And that was more than 26 years ago. And here you are now, president of the mothership. It's something. Right? Yes. Well, this has been fantastic, Brad. At the end of every podcast, I ask my guests to share some insight or wisdom with our listeners. And usually they're seasoned researchers at MSR speaking to some version of their grad school self.
Starting point is 00:36:30 But you're in a unique position to offer advice from a different perspective. So what would you say to our audience, many of whom are the very people who will shape the technology that will shape our world for the decades to come? I would say three things. One, always push the edge of the envelope without quite busting the entire door down because that's when you end up fraying relationships and finding it more difficult to get things done. But push the edge of the envelope.
Starting point is 00:37:00 Have confidence in yourself and take those creative ideas within you and pursue them. The second thing I would say is balance that with a sense of humility. I actually think that the great superpower that we have in the Nadella years here at Microsoft and something that I'm absolutely passionate about is what I'll call the power of humility. I like to joke across Microsoft, no one ever died of humility. But it really helps you stay curious. It helps you ask other people good questions. It encourages you to listen and not just talk
Starting point is 00:37:38 and stay focused on getting better. And finally, I would say, at the end of the day, it's great to be smart. It's great to be successful, but it's better to be honest, to have a sense of integrity. To me, the favorite story perhaps that Carol Ann and I tell in the book is one that involved me personally. And it was a story where we had stated publicly to our customers that we would sue the federal government if the government came asking for their data without, in this case, organizations being allowed to know. And when our litigators came and said we shouldn't pursue this case because we were likely
Starting point is 00:38:18 to lose and it was likely to be expensive and painful, I said, look, I'd rather be a loser than a liar. It's okay to lose. Everybody does sometimes, and then you bounce back. But if you lie, if you sacrifice your integrity, I do think you pay a price for that for a very long time. So be ambitious, be humble, be honest. It's a good recipe. It serves people well. I think that needs to be on a bumper sticker. I'll work on shortening it even more. Yeah. Brad Smith, thank you so much for joining us today.
Starting point is 00:38:55 It's been a real treat. Thank you. Thanks for having me. To learn more about the research behind the tools and the researchers who do it, visit Microsoft.com slash research.