Microsoft Research Podcast - 087 - HE compilers for Private AI and other game changers with Dr. Olli Saarikivi

Episode Date: August 28, 2019

As computing moves to the cloud, there is an increasing need for privacy in AI. In an ideal world, users would have the ability to compute on encrypted data without sacrificing performance. Enter Dr. Olli Saarikivi, a post-doctoral researcher in the RiSE group at MSR. He, along with a stellar group of cross-disciplinary colleagues, is bridging the gap with CHET, a compiler and runtime for homomorphic evaluation of tensor programs that keeps data private while making the complexities of homomorphic encryption schemes opaque to users. On today's podcast, Dr. Saarikivi tells us all about CHET, gives us an overview of some of his other projects, including Parasail, a novel approach to parallelizing seemingly sequential applications, and tells us how a series of unconventional educational experiences shaped his view of himself and his career as a researcher. https://www.microsoft.com/research

Transcript
Starting point is 00:00:00 When we looked at homomorphic encryption as a target for training, as we were doing in Parasail, we noticed that there's actually a lot of other lower-hanging fruit that we can do for homomorphic encryption. And instead of training, we started looking at inference. So how do you even evaluate a neural network model on top of homomorphic encryption, which is a thing you need to be able to do before you can actually do training. So what Chet is doing is it is building a compiler for homomorphic encryption that automates many of these concerns that we would otherwise have to deal with by hand. You're listening to the Microsoft Research Podcast, a show that brings you closer to
Starting point is 00:00:45 the cutting edge of technology research and the scientists behind it. I'm your host, Gretchen Huizenga. As computing moves to the cloud, there's an increasing need for privacy in AI. In an ideal world, users would have the ability to compute on encrypted data without sacrificing performance. Enter Dr. Olli Saarikivi, a postdoctoral researcher in the RISE group at MSR. He, along with a stellar group of cross-disciplinary colleagues, is bridging the gap with CHET, a compiler and runtime for homomorphic evaluation of tensor programs that keeps data private
Starting point is 00:01:25 while making the complexities of homomorphic encryption schemes opaque to users. On today's podcast, Dr. Saarikivi tells us all about CHET, gives us an overview of some of his other projects, including Parasail, a novel approach to parallelizing seemingly sequential applications, and tells us how a series of unconventional educational experiences shaped his view of himself and his career as a researcher. That and much more on this episode of the Microsoft Research Podcast. Olli Saarikivi, welcome to the podcast. Thank you. So you're a postdoc researcher in the RISE group, which is Research and Software
Starting point is 00:02:13 Engineering, and you're interested in, and I quote, distributing ML training with semantics preserving parallelization and advancing private AI with homomorphic encryption. Yes, and that was kind of the short story. I'm touching a lot of other things currently also, but yeah. If that's the short story, we're in trouble. There's a bunch of stuff in there that we'll unpack as we get into the specific ways that those are playing out in the research you're doing. But for now, I want to start kind of broad strokes. Tell us what big questions you're asking, what big problems you're trying to solve, what gets you up in the morning. So currently, all of my projects are in some way or the other about performance. And it's not just about looking at a specific application and figuring out what's the best way to get this to
Starting point is 00:03:01 run fast. It's about finding ways to make performance accessible to developers. And with that privacy-preserving work, we're looking at making homomorphic encryption, which is a technique for preserving privacy, more accessible while still giving developers good performance. Okay. So let's go in a little bit further on that, because I've had several of your colleagues in RISE on the podcast, some of my favorite people at MSR, and they're working on problems that I care a lot about in terms of testing and verification in software, areas where I know you've had a lot of experience as well. But your most recent work has
Starting point is 00:03:40 shifted to the thing you just mentioned, performance. And so I want to know what you mean by that and what prompted the pivot from your interest in testing and verification to performance. Yeah, it is a very broad term. So indeed, my background is in program analysis topics like symbolic execution, software verification, that kind of stuff. But it was actually when I came to Microsoft, we worked on this project for optimizing stream processing programs. And here it's actually turned out that the same techniques that we used for analyzing programs for safety also work for analyzing programs for applying optimizations. So performance really is about performance as it's unlocked by powerful compiler analysis. And I think these kinds of topics are becoming more and more important as the computing landscape
Starting point is 00:04:33 gets more and more heterogeneous. We're getting GPUs and FPGAs and all kinds of accelerators. And these are hard to use. And really, we need to start thinking about these problems in a very domain-specific way: what are the specific constraints of whatever we are compiling to? And that's something you can make more accessible to a user if you can provide good abstractions on top of it through powerful compiler optimizations. Right. The homomorphic encryption libraries that we're using are getting implementations on top of GPUs and FPGAs. On a very high level, it looks a lot like one of these accelerators. You get things like a very constrained programming model, weird performance constraints, and all of these kinds of low-level details that a typical developer has a hard time grappling with. And this is the part where
Starting point is 00:05:28 building a good compiler for it can help in the same way that having a good compiler helps target a GPU if you don't have to write the lowest level of code and you can use a bit of a higher level language. So that's kind of what we want to do for homomorphic encryption, which is kind of just another target in the landscape of heterogeneous computing. Let's talk about that developer right now. Every researcher has a reason, usually in the form of a group of people, for the work they're doing. So is that how you would define your target audience as developers? Who are or who do you think will be the main beneficiaries of the work you're doing? So for the work we're doing with homomorphic encryption, it is definitely
Starting point is 00:06:10 developers in some specific domain. Let's say you're a developer working for a bank and you want to increase your privacy by adopting homomorphic encryption. Now, the thing is that that developer probably will not be a cryptographer who's like intimately familiar with all the details of homomorphic encryption, but they have all of this domain specific knowledge for their own domain. And now we want to enable them to effectively use tools from homomorphic encryption in their own domain without burdening them with all of the crypto-specific details. So they can be using the tools but not being experts in the science behind the tools. Yeah. And that's really the aim of any compiler project. Like for traditional programming, it's that you don't want to force people to use assembly for their coding and instead use,
Starting point is 00:07:01 I don't know, C-sharp or something. Well, let's talk about stream processing for a minute because I want to land on a couple of other big projects that you're involved in that are really cool. But this area of efficient stream processing is something that you've done a lot of work in. Give us an overview of the high points of what you're doing in this area.
Starting point is 00:07:19 What are the technical underpinnings, motivation, rationale, and what do you hope the outcomes will be? So this is work that I did during my two internships at Microsoft Research before I became a postdoc. So again, the point here is to make performance accessible without having to kind of go in and do all the low-level details yourself. So the idea here is that in these stream processing applications, like let's say you want to parse a log file and then do some kind of processing on the things you've parsed out of it,
Starting point is 00:07:50 and maybe then do a query on top of that and then encode your data and then write it back to disk. So you have kind of like many stages when you process input into output. And a nice way to write these kinds of programs is to write them as separate stages. It can actually happen that that's not the best way to do it for performance. One reason is that if you write it as separate stages, you typically get some kind of buffering in between the stages. Sometimes buffering is good, but typically you would get kind of excessive buffering if you just write it in these small stages that you compose together. And another reason is that there are a lot of opportunities that you're leaving on the table. Let's say you have a very defensively coded component somewhere later on in your pipeline
Starting point is 00:08:33 that does all of these kind of checks on the input that it's properly formatted or whatever. But if you now compose it with some component that is actually correct and guaranteed to produce properly formatted data, then all of those defensive checks in this latter component are unnecessary. Like you should just remove them. But having the developer remove them by hand is, it's a lot of work and then your code becomes less modular. So what we do instead is that we actually compile all of these stages separately into this model of computation called symbolic transducers, which is very suited to representing stream computations. And the nice thing about this is that we have a fusion operator defined, which
Starting point is 00:09:18 allows us to combine many of these stages as symbolic transducers into one big symbolic transducer. It's basically a form of inlining kind of with some fancy solver-assisted optimizations happening inside there. And now that we have this one big transducer that's fused, and they do get big, like even though we have the solver helping, there is a blow-up. But now at this point, we can actually start applying these program analysis-based optimizations onto that. For example, looking at reachability of certain control states and start pruning and removing stuff from that symbolic transducer. And this allows us to implement
Starting point is 00:09:56 these optimizations that were not available when these stages were considered just separately. And then when we generate code for this, we can actually get some very efficient code that does these inter-stage optimizations and removes buffering and stuff like that. Okay. Who does this matter for the most? So the targets for this kind of thing is mainly when you are actually dealing with enough data that throughput matters.
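A minimal sketch of the fusion idea described here, assuming a hypothetical two-stage pipeline (the stage names and the defensive check are made up for illustration, and real fusion in this work happens on symbolic transducers, not Python functions): writing the stages separately is modular but buffers data between them and keeps a now-redundant check, while the fused version does a single pass and drops both.

```python
# Two stages written modularly: a parser and a defensively coded consumer.
def parse_records(lines):
    for line in lines:
        yield line.strip().split(",")

def sum_second_field(records):
    total = 0
    for rec in records:
        if len(rec) < 2:          # defensive check: needed when used in isolation
            continue
        total += int(rec[1])
    return total

# Composed naively: records are buffered into a list between the stages.
def pipeline_buffered(lines):
    return sum_second_field(list(parse_records(lines)))

# A fused version, the kind of code a fusing compiler could emit: one pass,
# no intermediate buffer, and the defensive length check is dropped because
# the parser used here is known to produce at least two fields for this input.
def pipeline_fused(lines):
    total = 0
    for line in lines:
        fields = line.strip().split(",")
        total += int(fields[1])
    return total

lines = ["a,1", "b,2", "c,3"]
assert pipeline_buffered(lines) == pipeline_fused(lines) == 6
```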
Starting point is 00:10:21 So typical things might be cloud query applications. So we were actually looking at an internal database system for integrating this into systems where you are already burning like a lot of computational power in like running queries against your system and you want to reduce that to a lower level to save money. Yeah, I was going to say it saves time and money.
Starting point is 00:10:42 It saves time and money. Well, time is money. Yes. In computation and other areas. So are there other areas where this comes out, like in the regex field? Yeah, so that is actually a direction we took this project. So we actually looked at doing regular expression matching using theory based on symbolic automata, which is actually important because regular expressions are not just over some small alphabet. If you're dealing just with ASCII, which is 128 characters, or extended ASCII, 256 characters, you're fine dealing with it concretely. But symbolic automata allow you
Starting point is 00:11:18 to deal with Unicode and larger alphabets, which is the reality in dealing with strings and doing pattern matching these days. And there's, again, all of this automata-theoretic algorithmics available for optimizing these symbolic automata. So you're able to get some very efficient regular expression matching routines out of that. And yeah, that was actually a very fruitful line of work. We're currently beating RE2, which is this well-known kind of default library that people go to for high-performance regular expression matching.
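To make the symbolic-automata idea a bit more concrete, here is a small illustrative sketch, far simpler than the real machinery: transitions carry predicates over characters instead of individual characters, so a single edge can cover all Unicode letters rather than enumerating them one by one. The pattern and the predicate set are made up for the example.

```python
# A tiny "symbolic" matcher for the pattern: one or more letters followed by one digit.
# Each transition is (predicate over a character) -> next state, so a single edge
# covers huge character classes (e.g., all Unicode letters) without enumerating them.

def is_letter(c):
    return c.isalpha()          # covers Latin, Cyrillic, CJK, ... with one predicate

def is_digit(c):
    return c.isdigit()

# States: 0 = start, 1 = seen letters, 2 = accept (seen the final digit).
TRANSITIONS = {
    0: [(is_letter, 1)],
    1: [(is_letter, 1), (is_digit, 2)],
    2: [],
}

def matches(s):
    state = 0
    for c in s:
        for pred, nxt in TRANSITIONS[state]:
            if pred(c):
                state = nxt
                break
        else:
            return False
    return state == 2

print(matches("abc7"))      # True
print(matches("ålborg9"))   # True: non-ASCII letters handled by the same edge
print(matches("abc"))       # False: no trailing digit
```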
Starting point is 00:12:03 Well, let's talk about another cool project you've been a part of called Parasail. And your agenda with the project is, and I quote again, high-performance ML and encrypted ML. What is Parasail and what are the research challenges? And how does it defy conventional wisdom? So it's actually a very interesting project. It's more of a meta project rather than like just one specific project. So Parasail is a line of research that is concerned with parallelizing seemingly sequential computations. And the idea behind that is that if you take a sequential computation,
Starting point is 00:12:42 which basically means that there's some state that evolves through like a sequence of computational steps. It seems like on the face of it hard to parallelize this because you have this sequential dependency with the state getting threaded through each of the steps of computation. But it actually turns out that you can do symbolic execution on each of these stages given a concrete input. And by doing symbolic execution such that you kind of assume that the starting state is unknown, you can do some pre-computation based on that input. And this allows you to parallelize a lot of the computation before you actually have to do this final sequential step of stringing together these individually executed steps. So why is that important? So there's a lot of data in the world.
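A toy sketch of the parallelization idea just described, under a simplifying assumption: the sequential computation is a small state machine, so each chunk's behavior can be summarized by brute force from every possible starting state (the real Parasail work uses symbolic execution rather than this enumeration). The chunks are processed in parallel without knowing their true starting state, and a cheap sequential pass at the end strings the summaries together.

```python
from concurrent.futures import ProcessPoolExecutor

# A seemingly sequential computation: run a small state machine over the input.
STATES = range(3)

def step(state, symbol):
    return (state * 2 + symbol) % 3

def _run(state, chunk):
    for sym in chunk:
        state = step(state, sym)
    return state

def summarize_chunk(chunk):
    # The parallel part: for every possible starting state, precompute where this
    # chunk would end up. This needs the chunk, but not the true starting state.
    return {s0: _run(s0, chunk) for s0 in STATES}

def run_parallel(data, n_chunks=4, start_state=0):
    size = max(1, len(data) // n_chunks)
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    with ProcessPoolExecutor() as pool:
        summaries = list(pool.map(summarize_chunk, chunks))
    # Cheap sequential stitching: thread the real state through the summaries.
    state = start_state
    for summary in summaries:
        state = summary[state]
    return state

if __name__ == "__main__":
    data = [1, 0, 1, 1, 0, 0, 1, 0] * 1000
    assert run_parallel(data) == _run(0, data)   # same answer as the sequential run
```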
Starting point is 00:13:33 For example, doing large aggregated queries over cloud-scaled databases, which have data split across terabytes and terabytes of data, that kind of thing. And then obviously machine learning. There are huge amounts of data available for machine learning and you want to parallelize these kinds of processes. So actually it's this first thing that I mentioned that I got into the Parasail project through. So we were actually looking at parallelizing streaming computations
Starting point is 00:14:02 represented as symbolic transducers, which is that other line of work that I was mentioning. But it would also be useful to parallelize that. And now the idea is to do symbolic execution on a symbolic transducer to do the parallelization. Now, if we actually look at the machine learning part, it's very different. The ideas are kind of the same on the very high level, but instead of symbolic execution, we're looking at a second-order optimization method, which has kind of the same flavor, but it's not the same tools. And this is why I find this project especially interesting. The meta level idea of this kind of parallelization has been very fruitful. And when we look at specific problems, for example, the stream
Starting point is 00:14:49 processing stuff or machine learning stuff, you end up with very different instantiations of the same idea. So we're kind of getting a lot out of this very simple idea just by applying it to different domains. And to be clear, it is a lot of work to apply it to a new domain, but it kind of sets the framework for the research. Yeah, to be clear, it's a lot of work in general, and then you've got new domains to fit it to, right? Yeah, yeah. Well, let's switch streams again. I've had a couple of guests on the podcast who specialize in homomorphic encryption, which we've talked about briefly, but I'm not going to assume that all our listeners know exactly what that is. So while it's not your core expertise, it is central to the project we're going to talk about next. Give us a quick remedial course on homomorphic encryption and how it works, including the difference between so-called regular homomorphic encryption and this flavor
Starting point is 00:15:40 that you're working with called fully homomorphic encryption. Yeah. So it actually turns out that many existing encryption schemes are slightly homomorphic with respect to some operations. So let's take as an example RSA, which is a commonly used encryption scheme. So it has the property that if you encrypt an integer A using RSA and you get a ciphertext for A, and then you encrypt an integer B also with RSA, so now you have two ciphertexts. So you can multiply these two ciphertexts together. So RSA has a special homomorphic property that if you multiply two ciphertexts together, you get a new ciphertext that is the encryption of what the multiplication of A and
Starting point is 00:16:26 B would have been. So what in effect you have done is that you have done computation on encrypted values. And the magical thing is that you didn't need the secret key to be able to do this compute. So homomorphic encryption is a form of encryption that allows you to do computation on encrypted data without having read access to the data. The thing is that if you just have multiplication, that's not very useful by itself. For example, if you want to evaluate a polynomial, you need both multiplication and addition. And that is actually the hard part for the cryptographers to arrive at. There's a lot of examples of encryption schemes that give you either addition or multiplication, but an encryption scheme that gives you both is a relatively new thing. So the first homomorphic encryption scheme that supported both operations
Starting point is 00:17:15 and could be called fully homomorphic was introduced 10 years ago. And the encryption schemes have come a long way since then. So now we have encryption schemes that support both addition and multiplication of encrypted integers. The thing is that it is still a bit slower than normal computation. But the great thing about it is that it gives you a trust model that really nothing else can. So with homomorphic encryption, you keep the secret key, you don't give it to anyone, and you only have to trust the math, basically. I want to do a little bit of a detour and bring the issue of privacy front and center, because good artificial intelligence requires data, and a great deal of data is stuff that we gather from what we do on the Internet or put in the cloud.
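A toy numeric illustration of the multiplicative homomorphism described above, using textbook RSA with tiny, insecure parameters and no padding, purely to show the arithmetic:

```python
# Textbook RSA with toy parameters: n = p*q, e public, d private.
p, q = 61, 53
n = p * q                            # 3233
e = 17
d = pow(e, -1, (p - 1) * (q - 1))    # modular inverse, Python 3.8+

def enc(m):  # encrypt with the public key
    return pow(m, e, n)

def dec(c):  # decrypt with the private key
    return pow(c, d, n)

a, b = 7, 12
ca, cb = enc(a), enc(b)

# Multiply the ciphertexts without ever seeing a, b, or the secret key...
c_prod = (ca * cb) % n

# ...and the result decrypts to the product of the plaintexts.
assert dec(c_prod) == (a * b) % n    # 84
```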
Starting point is 00:18:00 And without certain safeguards, like homomorphic encryption, generally things are not private, right? Yeah, that's true. So there's an urgent need in my mind, at least, for a new paradigm. And this is what people are calling private AI. What is private AI and how can it help us? So private AI is actually a very broad term. And it is very broad because there are lots of very different kinds of privacy concerns. So if we take, for example, homomorphic encryption, what homomorphic encryption allows you to do is
Starting point is 00:18:32 you can make some parts of your data encrypted. So as a privacy concern, it addresses the concern of your data being leaked as you hand it off to someone else. So if you encrypt it with homomorphic encryption instead and you can do your AI on the encrypted data, then you've kind of plugged that one hole. But now there's other kinds of privacy concerns. For example, let's say someone uses your data as a part of their training set for training their machine learning model. Now, even if your data doesn't get leaked, it gets somehow integrated into like a part of the model that's getting trained because that's what training means. It has to learn something about your data. But you wouldn't want someone to be able to fully reconstruct your data just by looking at that model. And
Starting point is 00:19:22 this is a form of privacy that is addressed by something called differential privacy. And as a technique, this is completely orthogonal to homomorphic encryption, and it addresses a concern that is very orthogonal to what homomorphic encryption can address. So I think for private AI, the goal should be to, first off, look at what kind of privacy problems there are in AI and provide education to users. Because the problem is that users don't even realize what kinds of privacy problems there can be. And now, if you've identified the problems, then obviously, we should also find the solutions for them. We have to fix it. But the education aspect is actually very important for private AI,
Starting point is 00:20:09 making people realize what are the actual implications of giving up their data or using it in training. Okay. Well, let's talk about Chet now, not the guy who works in men's suits at Nordstrom, but the, and I quote, optimizing compiler for fully homomorphic neural network inferencing. Again, a mouthful. I love it. And alternately, you call it an
Starting point is 00:20:31 optimizing compiler for practical and interactive private AI. It's a tool stack for cloud inference on private data. Give us a virtual or 3D view of CHET and unpack the stack for us? Yeah. So this is actually a project that got started with this Parasail research that I was talking about previously. So it actually turns out that doing parallelization helps with homomorphic encryption because homomorphic encryption behaves better with parallel computations than serial computations. So this is how we got into working with homomorphic encryption in the first place. Then when we looked at homomorphic encryption as a target for training, as we were doing in Parasail, we noticed that there's actually
Starting point is 00:21:16 a lot of other lower hanging fruit that we can do for homomorphic encryption. And instead of training, we started looking at inference. So how do you even evaluate a neural network model on top of homomorphic encryption, which is a thing you need to be able to do before you can actually do training. So what Chet is doing is it is building a compiler for homomorphic encryption that automates many of these concerns that we would otherwise have to deal with by hand. For example, it selects encryption parameters automatically based on the program you want to run. And now it turns out that neural network inferencing is actually something that maps well onto the capabilities of homomorphic encryption. So it's a very attractive
Starting point is 00:22:03 application to look at. Okay, so let's back up just a little bit and say, does CHET stand for something? A compiler for homomorphic encryption? Yes. So the T is coming from tensors. So it's a compiler for homomorphic evaluation of tensor programs, which is kind of like a more programming-language-flavored term for neural networks. So I've seen posters and decks and explanations, and it's super technical. It sounds easy when you say automating it. There's a lot of work that goes into getting it so that it's automated and someone who doesn't have the expertise you do could use it? So there is a lot of work, but I feel that the work is actually in understanding the problem.
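As a concrete illustration of why neural-network inference maps well onto what homomorphic encryption offers, here is a hypothetical, plaintext-only sketch of the kind of tensor program a compiler like CHET targets: only additions and multiplications, with a polynomial activation (squaring) standing in for ReLU, since comparisons are not directly available under homomorphic encryption. The layer sizes and weights are made up.

```python
import numpy as np

rng = np.random.default_rng(0)

# A tiny two-layer network using only + and *, the operations an HE scheme provides.
W1 = rng.normal(size=(4, 8))
b1 = rng.normal(size=8)
W2 = rng.normal(size=(8, 3))
b2 = rng.normal(size=3)

def square_activation(z):
    return z * z          # polynomial activation: HE-friendly, unlike ReLU/max

def infer(x):
    h = square_activation(x @ W1 + b1)   # dense layer + polynomial activation
    return h @ W2 + b2                   # output layer, left linear

print(infer(rng.normal(size=4)))
```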
Starting point is 00:22:51 Actually, in CHET, most of the techniques are rather traditional compiler techniques. I think the magic comes in finding the right things to automate, finding the right abstraction to expose to the user, and then just implementing it. But there's a lot of low-hanging fruit in this space all along the tool stack, and this is what we are addressing: figuring out what the problems are and then just taking in rather standard techniques to address them. Now, I'm glossing over details; there are also hard problems in this space. But so far, the first things we've addressed are kind of simple things that no one had really looked at before. And I think the reason that this project has been very fruitful is that it's a collaboration
Starting point is 00:23:35 between the right people. So we are collaborating with the homomorphic encryption group who have the knowledge of homomorphic encryption and can explain the ideas to us and what needs to be done and what is kind of difficult to do. And then if we need to solve some problem, they can explain the mechanics of it. And we are coming in as programming languages, compilers, people, and we have the know-how on how to build programming language tool stacks on top of computational targets. And we can just view homomorphic encryption as a software CPU. It's an emulator for a CPU with a very limited instruction set that just happens to provide security if you use it. And we can just target that as any other
Starting point is 00:24:20 compiler would. How do you feel when you are working with other people with different expertise and you start to get what they're doing? I'm sensing, like you said, this project informed this project and, you know, it turns out that this helps that. And I see this flow of research and discovery, but it's pulling from different disciplines and maybe applying a lens to a problem that you didn't before, and then it expands your own horizons. Yes, definitely. It has been a lot of learning. So just as an anecdote, when I came to Microsoft Research for my postdoc, I had no experience with machine learning or homomorphic encryption previously. I started out with a project, CHET, that was doing machine learning on top of homomorphic encryption.
Starting point is 00:25:14 So it was combining two application areas, both on the compiler front and the backend, that I had no experience with. But the key part is that I had knowledge about programming language techniques there in the middle, and that's allowed me to be effective in this area. So there was kind of a relative advantage for me working on this problem, but it was also a lot of learning. And now do you feel like you know just enough about those other areas to be dangerous? Yes, I would say so. I would only be dangerous at a cocktail party where I could say homomorphic encryption and some people would think I'm way smarter than I am. Yeah, saying those words at a cocktail party would do it. Well, we've reached the part of the podcast where we discuss the potential outcomes of your research, both intended and unintended, and talk about what could possibly go wrong. Is there anything about your work that, quote unquote, keeps you up at night? Anything that concerns you that we ought to be thinking about or that we hope you're thinking about? Yes. So most of my projects are rather safe. I cannot
Starting point is 00:26:28 see how they would go horribly wrong. But obviously, the homomorphic encryption part could go like horribly wrong. Since we are building a compiler for targeting homomorphic encryption, it places a lot of responsibility on us to actually get it right. Now, homomorphic encryption is a thing that is relatively easy to use in a safe way, meaning that most of the mistakes you make will not actually leak user data. They'll make it return garbage, which is kind of a nice property. But there's a few things that we still need to be careful with. For example, we need to be careful to select correct encryption parameters. So that is something we've put a lot of thought into. And there's also like a defense in depth
Starting point is 00:27:12 kind of perspective here. So CHET selects encryption parameters and then SEAL, which is Microsoft's homomorphic encryption library, checks that those have been selected correctly. So there's kind of two stages in that. Another thing that is important in this space is open sourcing. So the SEAL encryption library is actually currently open source. It's available on GitHub. And we are in the process of open-sourcing CHET. And really in this kind of space where people are really concerned about their privacy, these kinds of projects really need to be open source. There's no other way about it. It really just buys trust. If someone is going to use it, they should be able to see what it does.
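A schematic, made-up model of the parameter-selection concern mentioned here; it is not the real CKKS/BFV mathematics or the SEAL API, just an illustration of the two-stage, defense-in-depth idea: the compiler picks parameters deep enough for the circuit's multiplicative depth, and the library independently re-checks that choice.

```python
# Made-up numbers purely to illustrate matching parameters to a circuit.

def multiplicative_depth(ops):
    """Depth of a straight-line circuit given as a list of 'add'/'mul' ops."""
    return sum(1 for op in ops if op == "mul")

def compiler_select_parameters(ops, security_level=128):
    depth = multiplicative_depth(ops)
    # Pretend rule: bigger depth needs a bigger polynomial modulus degree.
    poly_degree = 2048
    while poly_degree < 1024 * (depth + 1):
        poly_degree *= 2
    return {"poly_degree": poly_degree, "levels": depth + 1,
            "security": security_level}

def library_validate_parameters(params, ops):
    # Defense in depth: the library re-checks what the compiler chose.
    if params["levels"] < multiplicative_depth(ops) + 1:
        raise ValueError("parameters too small for this circuit")
    return True

circuit = ["mul", "add", "mul", "add"]       # e.g., a small polynomial evaluation
params = compiler_select_parameters(circuit)
library_validate_parameters(params, circuit)
print(params)
```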
Starting point is 00:27:55 Well, tell us a little bit about your personal story or your journey from Finland to Redmond. What got a young Olli Saarikivi in Helsinki interested in computer research and how did he end up at Microsoft Research? So I actually started out studying physics. So I did a year of that and noticed that it was not for me. I think the main reason was that I wasn't actually very interested in the day-to-day of doing physics, but I was interested in the day-to-day of doing computer science, which is programming, and I enjoyed the process of it. So that was really the early driving factor in going deeper into computer science. Then I worked with a professor, Keijo Heljanko, who had an extensive background in model checking, and that's how I
Starting point is 00:28:42 got into this testing and software verification stuff. And in the field of testing and verification, Microsoft Research is the best in the world. It has like a significant selection of the best people in the world, specifically in the RISE group in this field. So you couldn't not be aware of Microsoft Research when working in this area. And I think the biggest thing that struck me about the papers I was reading coming out of Microsoft Research was the access to important problems. So inside Microsoft, there's obviously lots of software getting written. So the field of program analysis just has a lot of material to work on. And that is so important. So working at a company like Microsoft is really an advantage. So that's why
Starting point is 00:29:31 I wanted to get into Microsoft Research. And now having gotten into it, it's amazing. You work with such great people. I love it. So what was your educational background? Where did you go to school and how did you, you know, wander across the pond as it were? So that's actually an interesting question. I started out school in India, which I hadn't mentioned before. No, you didn't. And the school in India was an American school, the American Embassy School, which is a very interesting school. It's basically like a piece of America just transplanted into the middle of New Delhi. And on the first day going to school there, I did not know a word of English. So I was just kind of thrown into the deep end, basically. But that gave me a great background in English. And I think that has actually helped me a lot as a researcher, because you have to communicate so much. Research
Starting point is 00:30:30 is like all about communication. So having a strong grasp of the language has helped me. So I'm intrigued now. How did you go from Finland to India to an American school? Was there some family connection? Yeah, so it was actually my father who was an engineer slash manager at Nokia and he got sent on an assignment to India to set up the business there. So that was only two years or so. Then I came back to Finland. And went to university there? Yeah. So all of my university schooling has been in Finland. Including your PhD? Yes. And then the internships were what brought you here?
Starting point is 00:31:08 Yeah. So I managed to get an internship during my PhD studies, and apparently it went well. So I managed to get another, and then they finally hired me on as a postdoc. Well, actually, the question I was going to ask you was about what one interesting trait has helped you in your career as a researcher, and that little tidbit about an American school in India from a Finnish guy is pretty close to the funniest thing I've heard. But is there anything else that you have, like some tidbit that we couldn't find about you on a web search that has actually influenced your career as a researcher? Yeah, I actually went to boarding school for my high school, but it was a strange kind of boarding school. It was a STEM-focused boarding school in the middle of the Finnish countryside, like no civilization for kilometers around it. Takes in 20 students a year and everyone's a nerd, basically. And that was the first place where I realized that there's a lot of people who are a lot smarter than me. So I was
Starting point is 00:32:13 like mediocre in that school, which I think was an important formative experience. As a researcher, it teaches like intellectual humility, which I think is a good trait to have. Yes, it is. So 20 kids? 20 kids a year. Yeah. And how were you selected? There was a selection process. Yeah, you had to take a test, basically like high school level math. I think they also select people a lot due to personality.
Starting point is 00:32:42 Like they need to find 20 kids that can actually live in a boarding school, going home only every second week. So personal character and, you know, get-along-ability. People who fit into the group well. So would you say prior to that, that you thought you were the smartest kid on the planet? Yeah. So in the grades before that, there was like maybe one kid who kind of rivaled me. So I think that's a typical experience for many people going into research early on. And I think it's important to get quickly to the stage where you're not the smartest person in the room. As we close, I want to give you the last word.
Starting point is 00:33:21 So here's your chance to say anything you want by way of advice or inspiration or maybe wisdom or warning to our listeners, many of whom are somewhere close to where you are on your career path, maybe a little bit behind. What would you say to emerging researchers who might be interested in following your footsteps? Well, as you've heard now, I've touched many topics in my research, but I think one thing that has helped me in each of these topics is working on a project that someone actually cares about, that there is some user that you can picture in your mind that you can sympathize with and you can like think of what their problems are. It changes the research question from what tricky problem could I solve into more of like what tricky problem do I need to solve for this specific application that I'm looking at. And it just makes going deep into the research problem so much easier because you can just imagine your problem and look at what the problems to solve are. And it also makes writing papers easier. Your motivation section comes naturally when you have real motivation. So I would really encourage people to try to find a problem to work on that is actually well motivated.

Starting point is 00:34:44 It is work to do that. You could just take the first hard problem that comes along, but not all hard problems are actually useful problems to solve. Olli Saarikivi, thank you so much for coming in. Thank you very much. To learn more about Dr. Olli Saarikivi and how researchers are inviting efficiency to the privacy party, visit microsoft.com slash research.
