Microsoft Research Podcast - 089 - Inside the Microsoft AI Residency Program with Dr. Brian Broll

Episode Date: September 11, 2019

In 2018, Microsoft launched the Microsoft AI Residency Program, a year-long, expanded research experience designed to give recent graduates in a variety of fields the opportunity to work alongside prominent researchers at MSR on cutting-edge AI technologies to solve real-world problems. Dr. Brian Broll was one of them. A newly minted PhD in Computer Science from Vanderbilt University, Dr. Broll was among the inaugural cohort of AI residents who spent a year working on machine learning in game environments and is on the pod to talk about it! Today, Dr. Broll gives us an overview of the work he did and the experience he had as a Microsoft AI Resident, talks about his passion for making complex concepts easier and more accessible to novices and young learners, and tells us how growing up on a dairy farm in rural Minnesota helped prepare him for a life in computer science solving some of the toughest problems in AI. https://www.microsoft.com/research

Transcript
Starting point is 00:00:01 My first project was focused on actually trying to get more human-like behavior in Minecraft, which was leveraging a scripted agent and very few human demonstrations. So given like 33 human demonstrations and an existing scripted agent, the question was, how can we incorporate some sort of learning into the agent? Not to necessarily make it perform better, but to make it more engaging and interesting and hopefully more human-like. You're listening to the Microsoft Research Podcast, a show that brings you closer to the cutting edge of technology research and the scientists behind it. I'm your host, Gretchen
Starting point is 00:00:34 Huizinga. In 2018, Microsoft launched the Microsoft AI Residency Program, a year-long expanded research experience designed to give recent graduates in a variety of fields the opportunity to work alongside prominent researchers at MSR on cutting-edge AI technologies to solve real-world problems. Dr. Brian Broll was one of them. A newly minted PhD in computer science from Vanderbilt University, Dr. Broll was among the inaugural cohort of AI residents who spent a year working on machine learning in game environments and is on the pod to talk about it.
Starting point is 00:01:13 Today, Dr. Broll gives us an overview of the work he did and the experience he had as a Microsoft AI resident, talks about his passion for making complex concepts easier and more accessible to novices and young learners, and tells us how growing up on a dairy farm in rural Minnesota helped prepare him for a life in computer science, solving some of the toughest problems in AI. That and much more on this episode of the Microsoft Research Podcast. Brian Broll, welcome to the podcast. Thanks, happy to be here.
Starting point is 00:01:53 So this one's going to be different. You're not actually in Microsoft Research proper. You're part of a really interesting new program. It's called Microsoft's AI Residency Program. And you're part of the very first cohort, right? So 2018, it started the first year. You're just finishing up? Before we begin our regularly scheduled programming, give us a bit of an overview about the residency program.
Starting point is 00:02:17 Tell us what it is, what's cool about it, and actually why you did it. Sure. So the residency program is a year-long fixed-term position where you come in and you work on two different six-month projects. And the applicants come from a variety of different backgrounds, a lot of them with a computer science background, but some even from like quantum physics, and some people with more of an NLP background, some people with backgrounds in ethics, psychology, and cognitive psych. So there's a variety of different backgrounds and a variety of different degree levels. And people come in and get to work on all sorts of different projects for, again, two six-month periods.
Starting point is 00:02:48 Right. It's the first instantiation of a residency. When I hear that, I think of medical residencies, right? You are a doctor, but a different kind. Yeah, yeah. I don't usually operate. We said try not to. Only on data. Yes, exactly. Exactly. I mean, there are some things that make a lot of sense about doing a residency. I would suspect that one of the benefits is certainly that if they have people here working on research projects with a lot of people at MSR and making a lot of these really great connections,
Starting point is 00:03:15 and then they move to some other product team or stick around at Microsoft, or even if they return back to academia to, in some cases, pursue a PhD, in other cases come on directly as faculty. It certainly facilitates a lot of these relationships and collaborations, either within, again, product teams and MSR or potentially within other academic institutions and MSR. Well, at the beginning of every podcast, I ask my guests, what gets you up in the morning? You're at more the beginning of your career than most of the people that I have in the booth. So I'm going to tweak the question a bit and ask you what's on your research roadmap. What kinds of questions excite you? And if you could encapsulate it succinctly, what's the big goal of your research? Yeah, that's a good question. In the past, I've done things with like educational visual programming environments and trying to make more
Starting point is 00:04:03 advanced and complex topics more easily accessible by younger users or young learners in some cases. And I've also done some work in like related spaces around like developer tools. And again, just trying to see how you can take advanced concepts and make them something that people can not just like interact with at a high level, but hopefully have a deep and rich engagement in so that they can really leverage it for whatever sort of task or problem they really care about. So that's definitely a recurring theme. And I have found that when I work on different projects, a lot of times I end up coming back to this question, like even if I'm not trying to target some like end user or developer or make something accessible, I always find myself wondering,
Starting point is 00:04:43 first of all, how could I solve whatever problem I'm working on? And then one step back, how could I get to that information that I need to be able to figure out or solve whatever problem I'm working on? And then, how could I do this easier? So then I often find myself falling into the same train of thought, even when I'm the only person
Starting point is 00:04:59 doing it, of like, I really want to be confident that things are working the way that I think they are. The questions come in a variety of different forms, depending upon specifically what I'm working on. But it can vary from how can I really better understand this problem so I can figure out a solution to like, how can I really be sure that what I think is happening is actually happening? And when I start going to those types of questions, I always find myself naturally wondering, like, is there a better way that I can, like, fundamentally approach this so that it's an easier task to solve or to be confident in my solution? We'll get to the laziness business later because I think that's an underrated quality for people.
Starting point is 00:05:35 Well, let's talk for a minute about the graduate work you did since you are a, what I would call, newly minted PhD and your work is really fresh. So give us an overview of the work you did to get your degree, and tell us a little of how that's informed what you're doing now. Sure. So I finished up my PhD a year ago at Vanderbilt, and I was working on making distributed computing concepts more accessible to kids. So basically we wanted to extend programming environments that are meant to have a really low threshold so that kids can get started programming really easily. So more specifically, I was focused on block-based
Starting point is 00:06:09 programming environments. So you might be familiar with Scratch or Snap. Now, my work was trying to empower students using these environments with the abilities to build distributed applications. So this would include potentially fun and hopefully more engaging things like chat applications or multiplayer games.
Starting point is 00:06:25 It's really the sky's the limit, hopefully. You can give them the tools where they can really express their creativity and learn something cool in the process. So I think it's cool when you can appeal to other demographics by making it much more grounded so it feels applicable and relevant, while also trying to make it more social by giving them
Starting point is 00:06:42 the power to build network-enabled applications so they can actually make these games multiplayer or make chat applications, and then start to reason about the natural things that often come up. I've been in classes before where I've introduced things like client-server architecture. One that comes to mind is I was working with a group of eighth graders, and I introduced
Starting point is 00:06:59 the client-server architecture as we made a chat application during class. And it's funny how really huge concepts come up super naturally. They were doing a distributed denial of service unintentionally, immediately. I mean, it makes sense. You have a classroom of students, and I set up a server,
Starting point is 00:07:15 and they can all see things. And naturally, you get some trolls in the audience who just want to find out that, well, they can use this loop that's just an infinite loop and just keep sending messages to the server? And then we have to step back and talk about, how do we detect spam? And how do we start filtering this stuff out?
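To make the fix concrete, here is a minimal sketch of the kind of per-client rate limiting a classroom chat server could use to blunt an infinite send loop; the class name, thresholds, and client IDs are illustrative assumptions rather than details from the actual classroom tool.

```python
import time
from collections import defaultdict, deque

# A minimal sketch of the per-client rate limiting a classroom chat server
# might use to blunt the "infinite send loop" trick. The class name, client
# IDs, and thresholds are illustrative assumptions, not from any real tool.

class RateLimiter:
    def __init__(self, max_messages=5, window_seconds=2.0):
        self.max_messages = max_messages
        self.window = window_seconds
        self.history = defaultdict(deque)  # client id -> recent send times

    def allow(self, client_id, now=None):
        now = time.monotonic() if now is None else now
        recent = self.history[client_id]
        # Drop timestamps that have aged out of the sliding window.
        while recent and now - recent[0] > self.window:
            recent.popleft()
        if len(recent) >= self.max_messages:
            return False  # client is flooding; drop or quarantine the message
        recent.append(now)
        return True

limiter = RateLimiter()
# Simulate a student's infinite loop: only the first few messages get through.
accepted = sum(limiter.allow("student-7", now=t * 0.01) for t in range(100))
print(accepted, "of 100 messages accepted")
```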
Starting point is 00:07:30 But the thing that I want to emphasize about this example is not necessarily that these kids were learning to DDoS, but that these concepts come up organically in a way that they can get their hands on it and then start reasoning about how to solve this problem and different techniques, and then hopefully then evaluating the different types of solutions that they might come up with. So did they even know that what they were doing had a name, an acronym?
Starting point is 00:07:50 No, no, they didn't. Did you tell them? Yes, I did, yeah. You just did DDoS. Yeah, it's funny. You have to be pretty flexible when you're doing things like this because the class can go a lot of different ways. Sure. I've taught eighth graders. I know. It's too funny. Well, how did that work play out? And do you find it has legs and you're building on it? Yeah. So that work is a continuing project at Vanderbilt.
Starting point is 00:08:13 There are a lot of different projects and efforts that are building on top of it. For example, a year ago, we had two one-week-long summer camps that were sponsored by NSA that were focusing on cybersecurity in the context of cyber-physical systems. So more concretely, they were given robots and then we started introducing them to programming because we couldn't assume prior programming experience. So we first tried to do some basic introduction to programming and then started introducing them to programming on robots. And then after that, started getting into cybersecurity questions. So essentially grounding this a little bit more, we had a classroom of students and we had like a collection of robots.
Starting point is 00:08:48 And unlike a lot of the existing robotics programming platforms, we didn't have a single computer physically connected or associated with a specific robot. It was more like we had a sandbox of students who could interact with a group of robots
Starting point is 00:09:04 and they were all assigned a specific one. But this means that kids will start trolling each other and trying to control each other's robots and things like that. And I think it's great because this means that in the way that we design and set up the environment, we can do it in a way that facilitates the, I guess, natural development of some of these sorts of concepts. So like encryption comes up pretty naturally then if we're like, well, people keep trying to control my robot, and I want it to drive. How are you going to stop that? Exactly. And then we start talking about code breaking.
Starting point is 00:09:30 And this adversarial nature of cybersecurity lends itself very nicely to a curriculum, too, in the sense that you can introduce one sort of very natural initial fix, and then how you can counter that, and then how you counter the counter. And you can continue developing along that route when you have students who are changing the encryption on their robot really frequently and trying to prevent replay attacks and all sorts of fun topics.
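As one concrete illustration, here is a hedged sketch of the kind of counter-plus-MAC scheme a class might arrive at to stop replay attacks: each command carries a sequence number and a keyed hash, and the robot refuses anything it has already executed. The shared key, wire format, and command names are invented for the example.

```python
import hmac
import hashlib

# A hedged sketch of one counter-plus-MAC fix for replay attacks: every
# command carries a sequence number and a keyed hash, and the robot rejects
# anything it has already seen. The shared key, wire format, and command
# names are invented for illustration.

SECRET = b"shared-classroom-key"

def sign(seq, command):
    msg = f"{seq}:{command}".encode()
    return hmac.new(SECRET, msg, hashlib.sha256).hexdigest()

class Robot:
    def __init__(self):
        self.last_seq = -1  # highest sequence number executed so far

    def handle(self, seq, command, tag):
        if not hmac.compare_digest(sign(seq, command), tag):
            return False  # forged or tampered command
        if seq <= self.last_seq:
            return False  # replayed command: already executed once
        self.last_seq = seq
        print("executing:", command)
        return True

robot = Robot()
packet = (1, "drive forward", sign(1, "drive forward"))
print(robot.handle(*packet))  # True: accepted and executed
print(robot.handle(*packet))  # False: the recorded packet is rejected
```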
Starting point is 00:09:48 Well, we talked a little bit about the residency program in general, but I want to talk about what you did while you were here. Let's plunge right into your year at the Microsoft AI Residency Program. Yep. So I worked with the reinforcement learning team, and I was working on machine learning in games. So rather than some of the machine learning where they're trying to essentially replace
Starting point is 00:10:21 a human with an AI, we were focused more on how we can leverage machine learning to be complementary to the game development process. So not necessarily just trying to use games as a sandbox to show that we can learn complex tasks, but asking how games can actually benefit from incorporating machine learning, either into the development process or into some aspect of the game that complements it later.
Starting point is 00:10:41 So I was working mostly with a couple of people on the RL team here in Redmond, but also collaborated a bit with a bunch of researchers out in Cambridge, who are also working on this. Cambridge UK. Yeah, Cambridge UK. Yes. We're working on a similar effort. So my first project was focused on actually trying to get more human-like behavior in Minecraft, which was leveraging a scripted agent and very few human demonstrations. So given like 33 human demonstrations and an existing scripted agent, the question was, how can we incorporate some sort of learning into the agent, not to necessarily make it perform better, but to make it more engaging and interesting and hopefully more human-like.
Starting point is 00:11:17 All right. How'd that go? We did find some pretty positive results. And we were able to create a hybrid agent that did demonstrate more of the behaviors that we thought were interesting and engaging from the human perspective, so like contextual decision making, as well as you saw similar high-level strategies exhibited by this hybrid agent, where it was able to learn the strategies just from the human demonstration. So those aspects of it were very positive. So there is a lot of flexibility in taking some of these
Starting point is 00:11:43 data-driven approaches, and we were hoping that we could be able to, when defining a new agent in this game, use scripting for the parts that make sense to write code for, and then use data for the parts that make sense to use demonstrations for. Right. There are other aspects or ways to view it that are less practical and more
Starting point is 00:11:59 trying to think about some of the fundamental challenges. And some of the fundamental challenges in this context would be that the scripted agent gave us a hierarchical policy where we could replace, in our context, the meta-controller. So basically, the part of the agent that was picking the high-level goals of the agent in this game. And you can assume that humans have a similar type
Starting point is 00:12:19 of implicit hierarchical policy. In other words, they're not just thinking whether or not they should move forward and left without any concern for a high-level plan. Right. They're thinking, oh, I should go and grab this thing or interact with this. Well, strategically.
Starting point is 00:12:31 Exactly. There's some sort of high-level strategy. Now, one of the challenges in this case was that we were just trying to learn the meta-controller via imitation from the human demonstrations. And the lower-level policy of the scripted agent was, well, of course, scripted. So that meant that that part wasn't able to learn at all.
Starting point is 00:12:49 So now the challenging part is that we know that the lower level policy, things like pathfinding and such, might not match with what the humans are doing. But we still want to be able to imitate, which makes it challenging because you don't want to try to imitate at a lower level then. You can't just look at different states of the game and just try to compare them between the hybrid agent, which
Starting point is 00:13:09 you're trying to, again, train the meta-controller, and the human demonstrations. Because given the same goal, they might have different ways of achieving that goal. So really, you want to try to essentially predict the human's intention and then try to incorporate the human's intention into the meta-controller. So it tries to predict what a human would try to do given the state that it's seen so far and then reuse the scripted part to actually achieve this intention or this subgoal that it thinks that the human would have. So it's a tricky problem.
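To ground the architecture being described, here is a toy sketch of one hybrid-agent step: a meta-controller learned from human demonstrations proposes the next subgoal, and the original scripted low-level policy carries it out. The state format, subgoal names, and both function bodies are invented stand-ins for the real Minecraft setup.

```python
import random

# A toy sketch of the hybrid-agent structure described here: a meta-controller
# learned by imitation from human demonstrations proposes the next subgoal, and
# the original scripted low-level policy executes it. The state format, subgoal
# names, and both function bodies are invented stand-ins for the real setup.

SUBGOALS = ["collect_item", "attack_enemy", "explore"]

def learned_meta_controller(state):
    # Stand-in for a network trained on human demonstrations: score each
    # candidate subgoal given the state and pick the most human-like one.
    scores = {goal: random.random() for goal in SUBGOALS}
    return max(scores, key=scores.get)

def scripted_policy(state, subgoal):
    # Stand-in for the hand-written controller, e.g. pathfinding directly
    # toward whatever entity realizes the chosen subgoal.
    return f"low-level actions pursuing {subgoal}"

def hybrid_agent_step(state):
    intention = learned_meta_controller(state)  # learned from humans
    return scripted_policy(state, intention)    # executed by the script

print(hybrid_agent_step({"nearby": ["zombie", "sword"]}))
```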
Starting point is 00:13:32 Just a little. Yes. Well, what kinds of research thinking does that inspire? I mean, you've got to then figure out, okay, how am I going to do this? How are we going to take what's natural up here and put it in the software? So one of the biggest questions, if we're thinking about trying to approach or tackle this question, is how can you actually find some sort of signal in the data that we can actually use to train the meta-controller. And in this case, it's not just some signal.
Starting point is 00:14:07 I guess it's really the right signal. So I gave the example already of if you had just tried to look at states, we could try to imagine just trying all sorts of different possibilities in the metacontroller and seeing which one gets closest to what the human was doing. But the problem there is that you start to encapsulate,
Starting point is 00:14:22 I guess, the things like pathfinding and stuff, which will just introduce noise, which isn't really what you're trying to measure. Because you could imagine that, now this is usually easier to explain with a whiteboard, but you could imagine that you see some human taking some arc to whatever its goal is.
Starting point is 00:14:38 And there's all sorts of different entities or subgoals that are possible on the screen. And you could imagine that you have some sort of scripted agent that just goes directly to its subgoal. Now, if you knew what the person's intention was, the expected behavior would be an arc in the human demonstration and a direct path in the resulting hybrid agent. But if you're not careful and you're only looking at the low-level states, you would try to force the hybrid agent to take an arc. And you might be able to do this by picking all sorts of garbage subgoals really quickly to make it move
Starting point is 00:15:09 directly a little bit to the right and then a little bit forward, then a little bit back to the left, and kind of create this arc shape. But that's really not at all what you want. And it'll be especially problematic when we start looking at different humans. It's not that all humans share the same pathfinding logic or low-level sub-policies,
Starting point is 00:15:25 so it'll get even more complicated and difficult to try to learn. So the type of thinking for this project really required you to step back and try to understand fundamentally what sort of approaches could be used to learn what we really care about here, and it isn't immediately obvious how to isolate the aspects of the human demonstrations that we care about in the context of this hybrid agent. And those parts got a little hairy. So another thing you did during the residency was extend the results of the work that you did at the outset. So what questions were left unanswered? What prompts did you sort of gather from what you learned? And how did you go about trying to answer
Starting point is 00:16:05 them with follow-up research? Sure, so I mean I think this is a pretty standard process in a lot of research where you want to try to make something work in a complex environment and you might make some compromises on the way, and then you want to step back and see how many of those you can fix or resolve or weaken or loosen. So one of the things that I've been working on during the course of the second project has been trying to, I guess, relax some of the requirements that we had from the first project. Now specifically, one of the ways
Starting point is 00:16:31 that we derived signal in the first project was to look at the human demonstrations and leverage a labeling function, a concept that's been used to drive weak supervision for large unlabeled data sets. So you might get these noisy labels and things like that. But hopefully, given enough data, you can still learn something meaningful in this context.
Starting point is 00:16:49 Now, for the first project, I wrote a labeling function and just took the most, I guess, direct approach to trying to get signal. Like in this case, I tried to encode the heuristic that regardless of whether it's a human or an AI, we can probably infer what its goal is based on what it's going to move most toward in the future. So I had a labeling function that would look ahead into the future and then look at what
Starting point is 00:17:07 the human moves most directly toward. And then it would say that this is probably a decent approximation of what their subgoal was. So we can do this for every state. And then even though there might be some noise in the labels, we can hopefully learn something meaningful enough that reasonably approximates what the human's intention was.
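Here is a minimal sketch of that look-ahead heuristic, assuming a 2D trajectory of player positions and a fixed set of candidate subgoal entities; the coordinates, horizon, and entity names are all made up for illustration.

```python
import math

# A minimal sketch of the look-ahead labeling function described above: for
# each state, look a few steps into the future and label the state with the
# candidate subgoal the player makes the most progress toward. The trajectory,
# horizon, and entity layout are all made up for illustration.

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def label_states(trajectory, entities, horizon=5):
    labels = []
    for t, pos in enumerate(trajectory):
        future = trajectory[min(t + horizon, len(trajectory) - 1)]
        # The entity whose distance shrinks the most becomes the noisy label.
        progress = {name: dist(pos, spot) - dist(future, spot)
                    for name, spot in entities.items()}
        labels.append(max(progress, key=progress.get))
    return labels

trajectory = [(0, 0), (1, 0), (2, 1), (3, 2), (4, 3), (5, 4)]
entities = {"tree": (6, 5), "zombie": (0, 6)}
print(label_states(trajectory, entities))  # mostly 'tree' for this path
```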
Starting point is 00:17:35 Sure. The thing that I didn't like about it was that it means our labeling function has to have a good heuristic and a good idea of what the human was going toward. And especially if we're going to apply this in other domains, it might be harder for any arbitrary state to give me a good approximation of what the human's trying to do now. And even though it doesn't have to be a perfect approximation, there are some domains where this could be really challenging. So what I've been trying to do during the second portion of the project was relax that to the point where, rather than provide an approximate signal or an approximate label for all states, see if we can just go through and, if there's a state we really are confident about, we label that one. In other words, this is really just trying to ground
Starting point is 00:17:59 the human's intentions in the context of Minecraft based on some interaction. So it's reasonable to say that if the player picks up some item, they were probably trying to get it. If they attacked a zombie or some sort of entity, that was probably their goal. So instead of worrying about over the course of five seconds while it's doing battle with a bunch of different enemies and interacting with things, like picking up weapons and attacking other entities in the game, rather than trying to predict the label at each point and trying to disambiguate when exactly or what exact frame does it start
Starting point is 00:18:28 moving from one enemy to another or things like that, or when exactly does it start to retreat or when is it just collecting itself to attack again, rather than trying to disambiguate the labels for those states using a labeling function so directly, we just tried to relax it to the point where when we see something that we're confident about, like, again, an interaction in the game, we'll just label those and then see if using this much more sparse set of labels, we can still get meaningful results for predicting
Starting point is 00:18:56 a human's intention. The idea here is that if we're able to do some sort of pre-training and bias the network on some related task, then maybe using these sparser labels, we can just kind of fine tune the original weights, which were, again, trained on some similar task. Like that could be predicting distances or just something
Starting point is 00:19:13 that's learning more of what the heuristic in the labeling function was encoding. We could instead just bias some of these suspicions that we have by training on related tasks, and then fine-tune on this smaller set of labels that we can trust with much higher confidence, because we actually see some sort of game interaction.
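As a rough sketch of that pretrain-then-fine-tune recipe, the toy PyTorch snippet below first fits a backbone on a dense auxiliary regression and then fine-tunes it on a sparse subset of high-confidence labels; every shape, task, and tensor here is a placeholder rather than the actual setup.

```python
import torch
import torch.nn as nn

# A minimal PyTorch sketch of the pretrain-then-fine-tune idea: train the
# backbone on a dense auxiliary signal (here, a distance-style regression,
# one of the related tasks mentioned), then fine-tune the same weights on a
# much sparser set of high-confidence intention labels. All shapes, data,
# and tasks are placeholders, not the actual Minecraft setup.

torch.manual_seed(0)
STATE_DIM, N_SUBGOALS, N_STATES = 16, 4, 512

backbone = nn.Sequential(nn.Linear(STATE_DIM, 64), nn.ReLU())
states = torch.randn(N_STATES, STATE_DIM)

# Phase 1: pretrain on a dense, automatically derivable signal.
aux_head = nn.Linear(64, 1)
aux_targets = torch.randn(N_STATES, 1)  # e.g. distance-to-entity targets
opt = torch.optim.Adam(list(backbone.parameters()) + list(aux_head.parameters()))
for _ in range(50):
    loss = nn.functional.mse_loss(aux_head(backbone(states)), aux_targets)
    opt.zero_grad(); loss.backward(); opt.step()

# Phase 2: fine-tune on sparse labels from unambiguous game interactions.
intent_head = nn.Linear(64, N_SUBGOALS)
confident = torch.rand(N_STATES) < 0.05      # only ~5% of states get labels
labels = torch.randint(0, N_SUBGOALS, (N_STATES,))
opt = torch.optim.Adam(list(backbone.parameters()) + list(intent_head.parameters()), lr=1e-4)
for _ in range(50):
    logits = intent_head(backbone(states[confident]))
    loss = nn.functional.cross_entropy(logits, labels[confident])
    opt.zero_grad(); loss.backward(); opt.step()
```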
Starting point is 00:19:34 Okay. Well, let's circle back to the laziness issue. In some ways, laziness is the great motivator for invention. And we talked about this as sort of a recurring theme in the work you're choosing to tackle, making complex things easier for people, especially for novices and non-experts, and more accessible in the process. So can you expand on that just a little bit, why it's important to you? Sure. I mean, I can never help noticing the irony when I find myself sitting at the computer and doing repeated tasks,
Starting point is 00:20:01 especially when a huge part of programming is about automation. Right. I mean, I guess a little bit more maybe philosophically, I don't really like when the tools that I'm working with get in the way of the problem I'm solving. Like, they should really be complementary and I shouldn't be spending time trying to, like, I don't know, wrangle or fight with the tools themselves. I guess that ideology kind of pushes me towards, like, or gives me very little patience for when I'm doing something or some repeated task manually over and over and
Starting point is 00:20:29 definitely pushes me towards finding some way to not only remove or reduce any repetition but also see if there's a way that the tools can really, hopefully, get out of the way so that the person can actually try to reason about and understand the problem that they're actually trying to solve. I don't want to confuse this with trying to find like a silver bullet. I've run into this a bit actually in grad school when people sometimes would debate about like visual programming versus like textual programming languages, or whether or not block-based programming is like a real programming language, which I don't know.
Starting point is 00:21:02 I mean, personally, I feel like there's syntax, there's semantics. It falls under the category of programming language, even though most of the time the point that they're making is really, do you use it for industrial grade applications? Right. Which, no. No, it's... The ones that I've used and the ones that I've written, I think their strength
Starting point is 00:21:16 is in making it easy to access some of these complex topics and concepts and be able to learn really valuable things, but I'm not recommending that we go and write performance critical applications in these languages. But I do think that it's funny when people kind of get this idea of it having to be one or the other, because I think really the best set of tools are ones that play well with each other. I guess what I should say is it's not like every carpenter starts with just like a hammer, you know, like, and then tries to build a house entirely with a hammer. You need a whole full set of tools. Exactly. You need a huge tool set and you need to make sure these all like complement
Starting point is 00:21:50 each other and work together nicely. Well, I can't let any podcast go by without asking what could possibly go wrong. And this is actually my thing, because I'm that person in eighth grade who would have tried to hack someone else's robot. Just because, you know, do we all use our powers for good? Some of us use them for evil. Just saying. Not full-on evil, but just, you know. Yeah, certainly.
Starting point is 00:22:26 Anyway, so is there anything about the work that you're doing that keeps you up at night? Any potential downside? And if so, how could you from the get-go build in safeguards to mitigate that? Yeah, so that's a really good question. I mean, a lot of things that I really liked about the work that I've done in the past, and this gets a little bit more into my, I don't know, maybe preferences and ideologies about, like, software and ecosystems and community and all that good stuff, but I really like when these sorts of tools and concepts
Starting point is 00:22:54 can be really accessible to people across the board. You know, like, in undergrad, I was a math major, and I got a teaching licensure, and my first teaching placement was at a school in need of assistance. So, like, a very large minority demographic. Some of them would come from like really rough childhoods and really unsafe home environments and things like that. One of the things that I really enjoy about a lot of the work that I did at Vandy and trying to make these more complex topics accessible and some of the things that I really like about open source software in general is this idea of basically being able to give people the resources or the abilities to be able to get involved and to learn a lot without any regard to a lot of the social issues that can get in the way.
Starting point is 00:23:37 Now, that being said, a lot of the work that I've been doing has been publicly available. It's free. If people want to use it, they can just use it. But this does go hand in hand with potential misuse. Now, in my case, since I'm mostly trying to empower people to be able to do things that they couldn't do before, it's not necessarily a social platform. So there are different types of risks and issues. But it is challenging. I mean, just because you teach people some of these like cybersecurity concepts, it doesn't really guarantee that they won't try to be on the hacker side rather than the white hat side.
Starting point is 00:24:07 I haven't really built in any safeguards right now. I'm concerned about taking away from the people who could really benefit from it and who already have enough challenges. So that makes it really tricky. I always hope that these types of things can be addressed with, ideally, just more of developing a positive culture and community and people buying into that. Because you hope that people aren't only doing the right thing just because it's not against the law. I'd rather people are motivated to help the community
Starting point is 00:24:40 and get involved. And when doing things like that, I really don't want to have anything get in the way. You know, those are the kinds of scenarios where I feel like I just want to pave the road as much and as far as possible, and then hope we can build a community around this and an ecosystem where people want to do good to the people around them. But I realize there'll be cases where people might fall through the cracks, and this might be a little bit more ideological, but that's what I shoot for or strive for. All right. Well, I always ask my guests to give us a little bit of their personal story.
Starting point is 00:25:09 And you've talked a little bit about what you've done prior to this with your doctoral work and a little bit about the residency program. But give us a little more context on your background, where you're from, and how you ended up in the Microsoft AI residency program. So sometimes it's funny looking back. Things can seem a little bit arbitrary. I mean, I grew up on a dairy farm in Minnesota. I wasn't sure if I wanted to go to college or not, but other people seemed to be and then there were wrestling coaches recruiting me. So that made it easy. Then I got an academic full ride, which also made it easier. So I decided to go
Starting point is 00:25:42 for my bachelor's in math. Then I was planning on going back and teaching high school math and coaching wrestling. And I had taken a couple of computer science classes. I took like the intro course and a data structures course and a programming languages course. And there'd been a few professors in undergrad who had asked like why I wasn't planning on going for like my PhD or going to grad school or anything like that. And I always said that I didn't really want to invest any more money in an education if I didn't see how it would pay out. So I was a bit more, I guess, pragmatic, perhaps given my background and a lack of exposure to those kinds of things before. But then my senior year, one of the professors was asking
Starting point is 00:26:16 me in more detail about why I didn't want to go and stuff. And then, when I mentioned that I didn't want to invest more money in an education if I didn't necessarily see the payout, she said that a lot of PhD programs are covered and you can get like a stipend on top of it. And then I figured I might as well apply. Yeah. So I had enjoyed doing stuff with computer science. So I thought I would apply to computer science PhD programs, even though I didn't major or minor in it. And then I actually heard about the Microsoft AI Residency while interviewing on site for the Google AI Residency.
Starting point is 00:26:47 Oh, really? And then came back and applied to this one. So sometimes it's funny looking back because sometimes the path to get here can seem pretty arbitrary. Non-intuitive. Yeah, definitely. But I'm excited to be here. Well, on the heels of that question, my new favorite one is the one I'm going to ask you next. What's one interesting thing about you that we couldn't find on a web search, maybe, that has influenced your career? So, I mean, I'd bring up wrestling, but when I was student teaching and running studies, the students liked to Google me and find pictures of me wrestling and then set it as their background or incorporate it into their game. So you can definitely find that on a web search,
Starting point is 00:27:28 but it's hard to really give enough credit to some of the experiences that you can have that really play a role in your career development. Like, although being a dairy farmer or growing up on a dairy farm doesn't really seem to be maybe most closely related to doing research in AI and machine learning, I think there are certainly a lot of different attitudes and perspectives that can be really positive. And I think that in some ways, it can keep me very pragmatic in the sense that I really like seeing some outcome or the benefit of what I'm working on. And I think in those ways, being able to work with kids and just trying to help give them all the
Starting point is 00:28:05 tools to be able to succeed, regardless of background, is something that can have some sort of positive impact. And I think that some of the experiences and the pragmatic nature growing up on a farm played a little bit of an influence in keeping me grounded that way. My husband's father used to say cows never go on vacation. That's true. My wife and I have been together for 12 years. Basically, I got a driver's license and a girlfriend. So like when we were dating, a lot of times it was hard for me to get out of chores and things like that. So we spent a lot of dates milking cows and feeding cows and doing all sorts of things like that. Brian, this has
Starting point is 00:28:42 been so much fun. As we close, you're much closer to the beginning of your career than many of the guests I've had on the show. So I want to put a little different spin on the "what advice would you give to emerging researchers" question, because you're kind of one. And I'll simply ask you, what's next for Dr. Brian Broll? Yeah, so I'm going to be a research scientist at Vanderbilt starting in September. And I'll be continuing some of the work, trying to increase the impact of some of the work
Starting point is 00:29:10 that I did during my PhD, as well as explore ways that we can try to make deep learning more accessible, especially to people in the natural sciences. So I'll be working on those immediate projects. But I'm definitely interested in seeing other ways that we can try to combine some of my experience here, getting more involved with AI and machine learning, with some of the perspectives and kind of driving ideas that I've had throughout grad school about trying to make these tools more powerful and accessible so that hopefully they can have a
Starting point is 00:29:40 bigger impact more broadly. Let me ask you a question I was going to ask before, but I really do want to hear what you think. What was your favorite thing about the residency program and what would you advise people that might be interested in ending up at Microsoft's AI residency program? So my favorite thing is probably the diversity of research projects that are out there and being able to work so closely with so many very impressive researchers in so many different areas. I think it's really great
Starting point is 00:30:11 to be able to have that sort of exposure, especially when you're trying to learn more about different aspects of AI and machine learning. It's really hard to replace being able to work so closely with so many different researchers here at MSR. Brian Broll, thank you for joining us today. It's been illuminating and delightful. Thanks. It's been a pleasure. To learn more about academic programs with Microsoft Research, visit microsoft.com slash research.
