Software Misadventures - From High School Suspension to US Chief Data Scientist | DJ Patil

Episode Date: March 5, 2024

Known for coining the term “Data Scientist”, DJ is a renowned technologist with a diverse background spanning academia, industry, and government. Having led product teams at companies like RelateIQ and LinkedIn, DJ was appointed by President Obama to be the first U.S. Chief Data Scientist, where his efforts led to the establishment of nearly 40 Chief Data Officer roles across the Federal government, new health care programs, as well as new criminal justice reforms.

We discuss:
- “Dream in years, plan in months, evaluate in weeks, ship daily”
- High school misadventures that shaped DJ’s world view
- Under-hyped opportunities in AI
- Building with the customer vs. “if you build it, they will come”
- Do we need more regulations on AI?
- Much more.

Segments:
[0:01:48] Picking locks in high school.
[0:07:15] How can we make it easier for others to take a risk on us?
[0:11:29] How do you decide whom to take a chance on?
[0:14:24] The 70-20-10 framework for choosing what to work on.
[0:17:49] "No rules, only guidelines."
[0:24:09] Developing personal ethics.
[0:30:52] Building with the customer versus "if you build it, they will come."
[0:34:51] "Dream in years, plan in months, evaluate in weeks, ship daily."
[0:43:56] Ideas should be considered in terms of momentum.
[0:46:11] Under-hyped trends in AI?
[0:51:53] How does AI need to evolve to operate in fields that require very low margins of error?
[0:56:09] Concerning advances that lack sufficient guardrails?
[0:58:55] Do we need more regulations on AI?
[1:02:48] "Failure is the only option."

Show Notes:
DJ Patil on LinkedIn: https://www.linkedin.com/in/dpatil/
The card that DJ carried in his notebook: https://twitter.com/DJ44/status/819316928623902720
DJ’s interview series with thought leaders in Data Science: https://www.linkedin.com/learning/data-impact-with-dj-patil/data-science-how-did-we-get-here

Stay in touch:
👋 Make Ronak’s day by leaving us a review and let us know who we should talk to next!
hello@softwaremisadventures.com

Transcript
Starting point is 00:00:00 Ideas I think about in terms of momentum. What is momentum? Momentum definitionally is mass times velocity, right? Mass you can think of as people or dollars. Velocity is a vector, so which way is it pointed? And how fast? That's the scalar. And so you get kind of three things that you get to play with there. And so I'm like, can I add more people to this? Can I add more dollars to this? Can I change the direction? Can I see how that scalar's gone? Where is it so far? And then that gives me the ability to say, this is the range of where the momentum can go. Welcome to the Software Misadventures podcast. We are your hosts, Ronak and Guang. As engineers, we are interested in not just the technologies, but the people and the stories behind them.
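DJ's "ideas as momentum" framing from the cold open can be sketched as a toy model. This is our illustration, not something from the episode; the `Idea` class, its fields, and the units are all hypothetical, chosen only to show the three levers he names: add mass (people or dollars), change the speed (the scalar magnitude), or change the direction.

```python
# Toy sketch of "ideas as momentum": momentum = mass * velocity,
# where mass is the people/dollars behind an idea and velocity has
# a direction plus a scalar magnitude you can adjust separately.
from dataclasses import dataclass

@dataclass
class Idea:
    mass: float     # people + dollars behind the idea, arbitrary units
    speed: float    # scalar magnitude: progress per unit time
    direction: str  # which way the effort is pointed

    def momentum(self) -> float:
        return self.mass * self.speed

# The three levers: add mass, change speed, or redirect.
idea = Idea(mass=4.0, speed=2.0, direction="consumer product")
idea.mass += 2.0                # add two more people
idea.direction = "enterprise"   # point the effort a different way
print(idea.momentum())          # 12.0
```

The point of the framing is that each lever can be reasoned about independently: the same team (mass) pointed in a new direction changes where the momentum goes without changing its size.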
Starting point is 00:00:52 So on this show, we try to scratch our own itch by sitting down with engineers, founders, and investors to chat about their path, lessons they have learned, and of course, the misadventures along the way. Hi everyone, it's Guang here. In this episode, we're honored to chat with DJ Patil, known for coining the term data scientist and being appointed by President Obama to be the first U.S. chief data scientist. In this conversation, we explore how high school misadventures led to frameworks for decision making that DJ has leveraged throughout his career, learnings from his time at the White House, and of course, his thoughts on what's going on in AI.
Starting point is 00:01:31 Without further ado, let's get into the conversation. DJ, super excited to have you with us today. Welcome to the show. Thank you for having me. Glad to be here. So you are incredibly accomplished today. What we wanted to do is rewind the clock a little bit and go back to the early days, specifically high school. We saw some of your commencement speeches, and you
Starting point is 00:01:50 mentioned that you were not a straight-A student, and at some point, you were suspended from a class. I think it was an English class for perhaps throwing a stink bomb, and then something happened. Confirmed. And kicked out of my algebra class. And that happened to... read my rights. Yeah. So, tell us more about that time. I just want to point out, for the record, I am really good at hot-wiring a car and picking locks. Wait, how did... I think the statute of limitations is over. But yeah. Where did you learn that? Well, I learned to pick locks.
Starting point is 00:02:30 You know, this is in a pre-internet era. There were these books around about lock picking. And lock picking is a puzzle. It's a problem-solving thing. And I actually got introduced to it early on by my grandfather. You know, there used to be those bicycle locks with the tumbler system. They're kind of cheap, and they have three or four tumblers. And I learned through him, the kind of classic: if you could listen, you could rotate it
Starting point is 00:03:00 and you could hear it click, and you could figure that out. And so I'd gotten pretty good at that, and realized, you know, you could just really have fun with people by moving the bikes around or switching locks, just because you could have fun with it. And so then that graduated to, actually, how do you pick locks on a door? And it just became a problem-solving skill kind of thing. I wasn't really into doing damage or anything really dangerous or problematic, but
Starting point is 00:03:34 it was a fun thing. So much so that one time in high school, one of the teachers had found themselves locked out of the room, and I had actually made my own lockpicks, because there was no internet where you could just go buy lockpicks. And I opened the door to get into the room and help them out. So it wasn't a thing that was crazy hidden. It was a known thing: DJ does these weird things. And even the car stuff, it was just a problem thing for me. I had this creativity, I was trying to do stuff, and I just didn't fit into the traditional school system and the regular educational model. And so I needed some way to be able to express those things. And so I was building a lot of science experiments of my own.
Starting point is 00:04:26 I was doing all this stuff. But I didn't really have someone to coach me through that. And so, you know, I just had to learn stuff on my own. And that's kind of where a lot of these shenanigans came from. Is that where you met Mr. Knapp? You had some, you know, brush with the law in the form of school administrators. Yeah, so one of the commencement speeches that I was so fortunate to give was at Berkeley, to the information school, and I got to tell the story about the person who actually suspended me
Starting point is 00:04:58 from high school, and his name was Bill Knapp, or is Bill Knapp. And the special thing about Mr. Knapp was he suspended me, but then he also put me in this elite program later on that was, you know, sort of the special leaders of the school, who were asked to help run a ropes course, which is like an obstacle course in the trees, to help kids and other people who were having a lot of trouble in their home life or academic life. And so how does a person who is one moment kicking your ass, by definition, you know, then turn around and select you and change your life? And so what does it look like if we all took a shot on somebody? When we think about it, here's a person who has screwed up, and then you turn around and you say, you know what, they could be something special, and you give them a second chance. And so if you think about all the people who took a risk on us, and we ask, what if we just tried to take a risk on somebody else, how profound would that be? Like, what would that actually look like if we all took a risk on somebody else
Starting point is 00:06:18 to do this? Like, you both went to the Insight Program. Jake, who was starting the Insight Program, you know, he found a way to track me down and was sort of harassing me nonstop until I got involved. And I was taking a shot on Jake. Jake was taking a shot on you. You all are taking shots on other people to help support them. And I think it's this very special thing of what makes our system work: we invest in people. We are willing to invest in the success of others. And when we lend our credibility to that, something special might happen. And so to bring this back to that story of Mr. Knapp, and you can find all these commencement speeches on
Starting point is 00:06:58 LinkedIn and the internet, is I got to say thank you to Mr. Knapp. He was in the audience. And how rare is it that we get to say thank you to somebody that literally turned the trajectory of my life around? So one thing we were really curious about is, on the other side, the receiving end, how do you go about making it easier for others to take that risk on you? Right? Because in your case, I can totally see it being like, oh, you have this kid, right, who's very creative, obviously very talented, doing all these things. But then maybe he actually has pretty high morals, such that, right, like maybe, you know, he's carjacking
Starting point is 00:07:41 for shits and giggles, but you could do a... I wasn't carjacking, to be clear. I was just hot-wiring. There's a big difference in those things, just for the record. And everyone got their car back. I see, the statute of limitations maybe doesn't extend as far for... sorry, bad joke, bad joke.
Starting point is 00:08:02 Thank you, thank you, thank you. Right, but people who have the same skill set that you had, they could have done a lot more in terms of, right, like getting more money, or actually going out of their way to maybe hurt people, you know, or things like that. So, yeah, what's your thought on how to make it easier for others to take the risk on you? There's a couple of things I think that are wrapped up in that. First is, I think those of us that are in positions of privilege have the responsibility to look for talent, look for opportunities, and sort of say, oh, that person there could go do something special.
Starting point is 00:08:40 I'll tell you another quick one. You know, I was at this conference, sitting next to this younger-looking person, and I was like, oh, how are you doing? And I was just chatting with them. And, you know, LinkedIn was already a thing at this point. And I just kind of met this kid, and it turns out he's 16, and he's like, oh, I'm working on these math problems. I'm like, oh, tell me about your math problems. And, you know, he starts telling me about what he's working on. And then, you know, it's kind of an unconference.
Starting point is 00:09:10 So I'm like, well, why don't you have a session on hard math problems that are interesting? And I said, I'll come to that. That sounds cool. And so he's running this session as a 16-year-old on math problems and all this stuff. And we start to talk, and I'm like, hey,
Starting point is 00:09:28 that sounds like you're really smart. You could do a lot of cool stuff. And he's like, how do I do that? And so then we realized, well, why don't you come work with me for a bit? Why don't you do other stuff? You could intern at LinkedIn. Of course, this turns out to be Dylan Field,
Starting point is 00:09:44 who went on to start Figma. And, you know, it started with just asking somebody, why are you here? What's interesting to you? And then when he or she, they, are passionate about something, you kind of just say, tell me more, find out more. And then sort of say, wouldn't it be cool to maybe work on something together? Let them inspire you. I think, if anything, the people who I am perceived as taking a chance on are actually giving much more to me in many ways.
Starting point is 00:10:19 I think teaching is one of these amazing things, and it's like being a parent. You get taught more than you feel like you're teaching. I feel like I've gotten the better end of the bargain a lot of times. And so now, on the side of how people can find this, because people ask me all the time, how do you get a mentor? They say, will you be my mentor? I don't think mentorship quite works that way. I think mentorship works because you're like, hey, I've got a problem. I'd love to get your opinion. I'd love to get your take. I'd love to run an idea by you. And then as you do that more and more, something gets set up where it's more repetitive. And, you know, you sort of
Starting point is 00:10:59 check in with that person regularly. You kind of let them know where you're trying to go directionally in life. And you do have to be persistent. And you have to kind of just sometimes provide feedback and say, hey, here's what I'm working on, here's the things I'm thinking about. And it sort of develops over a long arc. And the people who've mentored me, whom I've been able to consider everything from father figures to friends, it's always been more organic than something where it's like, hey, here's a formal way we do things. In this case, when it comes to taking chances on people, I mean, Dylan Field is a great example here. Are there characteristics that, reflecting on it, you realize make you see some spark in a person? And sometimes you can't even put words to it. You could simply say, it's just intuition. I just feel like there is something here and you
Starting point is 00:11:47 take a risk. Is it more of that? Or do you realize that, well, there are some characteristics that people just have? Persistence could be one of them, which we'll talk about pretty soon, that you've come to realize that, yes, if you see that, you kind of feel more like, yep, this person needs a chance. Yeah. So, you know, despite getting kicked out of my math class, I do have my doctorate in mathematics, which is sort of one of the ironies of the world. And one of the beauties of being part of a math department is you see talent at a really young age. There are people who just have an extra gear, and you're just like, wow, that's an incredibly special thinker, such a special kind of person in how they look at the world. And so I'm looking for that, like somebody who's got just that.
Starting point is 00:12:38 Wow, that's a really clever way to look at the world, and just sort of see that there's something more to it. And I feel like, if anything, I've been incredibly lucky in my life to just be around so many of those people. And a lot of times some of that is seeing somebody who's passionate and just saying, how do I take the roadblocks out of their way? You know, Jay Kreps was the first engineer that I inherited when I got to LinkedIn. And he was kind of on the outs with a lot of the other teams, and had these kind of wacky ideas. Everyone's like, I don't know, Jay's kind of difficult to work with. And I was just like, oh no, it seems to me that's a guy who's gonna run
Starting point is 00:13:21 through a wall. Like, how do we do stuff with that? And then it was like, okay, how do I align him with the right problems? How do I get the right team around him? And then, you know, I convinced Chris Riccomini, who used to be an intern for me back at PayPal, to join LinkedIn, and partnered them together. And that became the beginning of, first, Voldemort, which is a key-value store, and then evolved when Neha Narkhede joined, and then also hiring Jun Rao. That kind of combo became like, oh, maybe we have a whole new way of thinking about things. And that really became this idea of Kafka, and that turned into Confluent. But my job was to really say, great, you've got this idea. I am going to remove every hurdle from your path. And I'm going to run lead blocker, to use that analogy, to just allow you to get the stuff done and to try to help make you successful. At the same time, though, right, like there are times where you're removing all these roadblocks, right, but they're not sort of getting to the point where you need them to be.
Starting point is 00:14:32 Do you think about it as almost like venture capital, like venture investing, where you've just got to make a lot of bets, right? If even one of them pays off, you have like a 10x, 100x engineer, and then, right, you can do so much that you don't have to feel bad about all the other time wasted, so to speak, on the other opportunities. How do you think about that? You know, I guess I actually come at it from the other side, which is more almost a research and development approach, of how do you pick
Starting point is 00:15:02 research problems? How do you run a lab? You know, I'm trained much more formally. Like, I used to be at Los Alamos, you know, which is famous for creating so many things, from GPS to obviously unlocking the secrets of the atom. And I look at it from that approach of a lab model. In the early days of LinkedIn, the way I ran the LinkedIn data team was much more like a lab. I was like, okay, you guys, these are the projects. These are what we're trying to go for. Because we didn't know exactly how to turn it into that product. At that point, there was sort of a 70-20-10 ratio: 70% of kind of taking the stuff where we knew we had some momentum and getting it further, 20%
Starting point is 00:15:46 a little bit more on the adventurous side, and then 10% on the dream kind of aspects. But I always look at it as a holistic thing. And I think under each person, there is a portfolio of things they're also working on. And so as things start to pop or things start to make sense across the net portfolio, I'm looking at how to put those pieces together into a comprehensive portfolio. And by all means, there have been plenty of failures that I've had, most of which are my direct responsibility. I've made colossal mistakes. But what I think we've done really well collectively is look at, okay, how do we make a little bit of progress? How do we test this out? How do we try this? How do we make sure that we get there? You know, a lot of times people look at all these
Starting point is 00:16:42 ideas and they're like, oh, it was an overnight success. People think Kafka just went from this early idea to this big win right away. They forget all the pain that was there. They forget how much negotiation I had to do with the operations team to even allow this in. You know, it's like Bruno and teams like that, and Henke, to get them to even be like, okay, give me this area, this is a safe zone, here's how we go. There was a lot of negotiation to allow that to happen. And then the team carried the rest of the way. They delivered, but it took a long arc. Figma took a long arc. It took a long time to develop. LinkedIn took a long time to become a thing. So it's very easy in retrospect to be like, yeah,
Starting point is 00:17:27 we nailed it. That was easy. And I'm like, all the good problems, they're good because they're hard. They're really hard. And so I love to work on things that are like that. I'm not really good at things that are easy. That's like a multi-year overnight success, right? Yeah. So taking a step back, about a decade ago, you posted this on LinkedIn, where you said the best advice you received was no rules, only guidelines. How did you, well, one, where did you get this advice?
Starting point is 00:18:02 And how do you think about applying it these days? Yeah, it actually developed in high school, because I was so frustrated. People were constantly telling me I couldn't do things, or I was stupid. There was constantly this pushback. And one of the things that I realized was, okay, these are guidelines. These aren't really rules, or they're stupid rules. I'll give you an example. And it's kind of like a version of what's going on with ChatGPT now. So in chemistry class, we had to type out all the lab assignments. And it was like, you take what was literally in this lab book and retype out all the things. And I thought, this is a colossal waste of time. Obviously they wanted to do it so we
Starting point is 00:18:49 weren't, you know, going to screw up the lab and hurt somebody. But I was like, really, type this out? So, you know, this was a time when optical character recognition, OCR, had just come about. And I was like, well, I'm just gonna OCR this thing, change a few words, and submit it. Because then I was done. Now what was I doing? Well, I was doing a ton of coding at the system level. You know, this was an early Apple lab that our school had, and I was directly hacking into the OS layer to do this coding stuff. So it wasn't like I was just out messing around. I was very focused. But I was like, this other thing's a waste of time. So I
Starting point is 00:19:37 did this, and then suddenly, you know, they're like, DJ's doing something, we don't know what he's doing, we don't like it, without ever asking, why is he doing it this way? What else is he doing? I think a person who was really excellent would have been like, well, this is stupid. He's doing this, he's cheating, because he needs to go spend his time doing something harder. Shouldn't we just let him do something harder? And so I actually ended up doing really badly in the class, but not because of me, because they just were trying to basically pin it on me. And I was like, well, I clearly know the material. I know all this stuff. And so I'm being penalized, and they're not going to let me graduate because of this.
Starting point is 00:20:23 Oh, wow. And so I was like, well, these are guidelines. These aren't really rules. And I was just persistent enough that the school administrator, bless her heart, was just like, this is stupid too. She's like, this is incredibly stupid. She's like, I'm just going to change your grade now. Oh, wow. She's like, you don't have a, you know, I think an F or a D.
Starting point is 00:20:44 She's like, you can have a C or a B minus or something. And then that allowed me to graduate. But so I think about those things. And I think about that, like entrepreneurship is many times the same way. These aren't rules. These are guidelines. Now, I do want to be very clear. There are also, along with this, there are ethics and there are morals.
Starting point is 00:21:06 And so just because you say there's no rules doesn't mean it gives you carte blanche to go harm people, do crazy stuff, commit fraud. You know, I think about vaping and some of the early investments in that, or some of the things that we've seen on other fronts. Those are some of the issue sets that are really problematic. So you have to have the ethics that combine with that sense. You have to have your true north to do this correctly. And there's one set of people that I really look up to as a model, for how they've actually conducted themselves in life and done things
Starting point is 00:21:51 that are creative. I think about Richard Feynman, you know, the great physicist, and sure, he was also very good at picking locks, but also at having fun and creating all this chaos. And one of my professors, who was my PhD advisor, Jim Yorke, he came up with all sorts of crazy, wacky ideas. And people were always pissed at him. You know, especially around, what should a student learn? And I remember going around, and they were so grumpy, because they were like, well, you know, students have to have this very formal learning. And he's like, there's formal learning and there's life learning. And like, what does it take to actually execute in life?
Starting point is 00:22:33 So he's like, how are you going to know how to code something? How are you going to be able to communicate? How are you going to do all these things that are, you know, 101 for getting promoted in a job? He was already thinking about that. And so I actually ran an experiment. I went around the whole department at the University of Maryland, and I asked as many professors as I could, what does a mathematician need to know? And I wrote all those answers down, and I was like, oh, okay.
Starting point is 00:22:56 So there's not a general consensus, but there's some agreement: I must know those things. There's a bunch of scattered things I should be familiar with. And then there's this sort of weird set where somebody is like, you really need to know this, but no one's telling you this. I was like, I've got to go get those things, like how to communicate, how to be able to write a research grant, all these other pieces. And I think that's true in business. If you went around to all the executives and asked, what should I really know? You'd get the same kind of assessment.
Starting point is 00:23:25 And those are the places that you should apply your directional intent, to get smarter and more effective. This is why, by the way, David Henke and I got in so much trouble and had so much fun together. Because we both had this mindset.
Starting point is 00:23:43 We were like, oh, that's what everyone's saying? We're just gonna go do this. This is how we're gonna roll. And I remember David and I would be like, you want to roll on this together? We're like, yeah, let's go do it. People are gonna be pissed at us. Oh yeah. And then we're like, let's go do it. I'm starting to see it now. I think it's really cool, your emphasis on this combination of both this no-rules sort of approach, but with strong ethics. Because obviously without it, bad things happen. But then how did you develop this sense of conduct? Morals and the ethics?
Starting point is 00:24:20 Sorry, yeah, yeah, yeah. Yeah, so I think, one, I was very fortunate to grow up in an environment where I had my grandparents around, and my grandparents, you know, saw World War Two, they saw the Depression. We were all immigrants, myself included, to this amazing country. And so there was a different value system that was instilled: what do we value, what is important, a sense of responsibility to communities, to self, to family, and betterment of ourselves. And so that stayed very true to me and grounding. I was exceptionally lucky in undergrad to get a solid dose of formal training in ethics. And one of the things that, you know,
Starting point is 00:25:16 both in classroom as well as through outside lecture series, I just got exposed to Karl Popper, you know, the philosopher of science, really all these philosophers. And it was a world where I was like, oh, this is left brain meets right brain in a different way. And so it really was foundational for me to understand, what does it mean, and why are we working on the problems we are? I was also very fortunate, you know, I was a theater equivalent, like we had these things called programs of
Starting point is 00:25:51 concentration at the University of California. And so one of mine was theater and the other was psych. And in theater, I took this class where I was like, I'm going to take things I know nothing about. So there was a class in there called Gay Chicano Theater. And I was like, okay, I don't have any friends who are gay. I don't know what this word Chicano means. And I really don't know anything about theater. So I'm going to take this class. And it opened my eyes so spectacularly, because I just didn't know or appreciate the problems in the LGBTQ+ community, as we call it now, and I did not understand
Starting point is 00:26:25 the value, the challenges, and all the things that have been faced by people of Hispanic origin who moved here to California, who are often referred to by the Chicano, Chicana, Latina, Latinx kind of movements. And what they go through, the farm workers, how important they are to our society, everything else. And that collectively opened my eyes in a way that I just couldn't have appreciated. And I think many times as technologists, we really suck at this. We really suck at understanding who our members are, who our customers are. Could be an enterprise, could be a consumer. But even in building Devoted Health, one of the most important things for me was to not be in front of the computer. It was to be in front of the human that we're
Starting point is 00:27:27 taking care of, being in their living room, not just, you know, in some clinical environment, understanding what they've walked through, what their life is like, what challenges they face. And this is where you realize a lot of our thinking as technologists is off the mark. Because in a lot of the homes I'd visit, everyone's like, smart wearables! And you're like, there's no internet here. People were talking about putting people on these ketogenic diets in one of the neighborhoods where we were taking care of people. And so I was on a ketogenic diet for 110 days in that neighborhood, while trying to be out there for a few days a week. This is in Florida. And I was like, there's zero chance, zero chance, you could put somebody on a
Starting point is 00:28:11 low-carb diet in that area, because of cultural aspects. And so if you're not living it, how are you supposed to really understand it? And this was even true for me in the government. You know, people were talking about criminal justice reform, but until I really sat down with people who had been incarcerated, I didn't understand or appreciate the problems. In this case, is it about everyone within technology, and even otherwise, developing the sense of what you're building and who you're building for, trying to put yourselves in their shoes, trying to understand where your quote-unquote customers are coming from? Would you say doing this outreach, spending more time with them, even for frontline engineers in a company, is something that they should
Starting point is 00:28:54 seek out and do, instead of just, I'm going to sit in front of my screen and write code? Yeah. So I think, and this is one of the values of the U.S. Digital Service, it's not building for, it's building with. We're building this together, we're coming along on the journey together. This is one of the most important lessons that President Obama taught me when we were building the Precision Medicine Initiative. This is the creation of the largest genomic repository, the largest collection of medical data, all brought together to cure diseases. And he said, you've got to have the people at the table who are going to be really impacted by this. And so, you know, I had different groups, like cancer associations and
Starting point is 00:29:35 different kind of chronic care groups, medical groups, others, data groups. He's like, where's the real people? And, you know, that's not a fun message to get when it's the president. You don't want to, your days are going to suck when you're getting chewed out by the president. Like, it's like, by definition, because everyone's like, ooh. But everyone else is like, you know, everyone's like, no one's going to be like, damn. And so, but, you know, like when I got into the place and we the place and I got to really meet with the people, I thought I was like, I get it. We're doing this wrong. We're doing this wrong. I got to reframe this. I need to really radically rethink how we're actually getting people involved because it's not
Starting point is 00:30:21 our data. It's their data in this case and we're putting it together we have a right responsibility of stewardship and this is what i think you know there is this classic tension people are probably thinking about is like well customer doesn't always know what there's needed but if you spend time with them and you're like really building with them you'll see it you'll get it so there's you'll see it really closely so there is another aspect of this like you said customer doesn't really know what they want. For example, I forget if this was Steve Jobs or who. If you ask the customer what they want, they wouldn't want the iPhone that they created at the time.
Starting point is 00:30:56 And the other thing that you hear a lot in Silicon Valley is, well, you build it and they will come. So how do you balance this where you kind of want to make sure that you're building the right thing for the right people, but at the same time, sometimes they aren't always sure what they need. Yeah. So the first is, I think people take that as like, you know, the iPhone comment and people don't know it. And they're just like, and then here's the magical thing that they want to go, that people are going to want. And you're like, there's no evidence for that. And so you need to actually build a pathway to get there. It wasn't like the iPhone just was there on day one. You know, they had obviously built and failed on the Newton.
Starting point is 00:31:36 They tried a whole bunch of other things. And the migration was there. Now, they took a lot of very bold choices. And they were like, here's our bet. Here's what we think. But that's a design process that they had earned the right to. And that's one of the things is like, a lot of times I think people are like, if you build it, they will come. No, they won't. That is just not true. And most of these products, if you look, you know, this is a classic, you go to the lobby of one of these places, they'll show you like, here's V1.
Starting point is 00:32:06 And you're like, oh, my gosh, that was terrible. Like, who would use that? And you realize like you iterated into it. You know, the social network sites of today versus what they were on day one, radically different. Even Twitter, like it was just like originally short code kind of pushes. So I think that's very similar to even basic engineering technology. You know, nobody needed like spectacular, sophisticated, you know, tools or concepts that we have today until we were like, yeah, that sucks. I don't have a way to move data around,
Starting point is 00:32:38 or I don't have a way to look this up, or I don't have a way to debug this or data observability or all these other pieces. And then you're like, yes, we need that. And then people are like, oh, that's the way to approach it. And people can debate that. There's very vicious debates about, is this the right way to approach that? And that's when you get to actually test it out and see which is the right approach. Sounds like a combination of fail fast and don't prematurely optimize. Yeah, it's a fail fast. And it's like what I'm trying to do in the creation or building something. It's always about velocity being first derivative positive, which is velocity.
Starting point is 00:33:15 Second derivative positive, which is acceleration. And so I remember this even in the early days of LinkedIn. Everyone was like, we're in like such like we're, like we're in like number six place or something. People were always like, we're not. And there was another competitor in Europe that people were like, look at how fast they're going. They're doing all this stuff. It's like, our job isn't to win on this lap.
Starting point is 00:33:37 Our job is to win on the 15th lap, maybe the 30th lap. But like, we are supposed to get faster and better every time. And we know that they're going to, we're going to, we're going to win. Muhammad Ali has, has a great saying, which also is like, I never fought, I never won a fight in the ring. I always won the fight in the preparation. Yeah, that's really well said. So talking about velocity, and also you mentioned White House, for folks who I think everyone would know, but for folks who might not, you were the first U.S. chief data scientist. And most importantly, not the last. Yes.
Starting point is 00:34:15 Because if you're the first and the last, then you screwed up. Well, you said the right trend there. We're definitely seeing more U.S. chiefs. And in fact, chief data scientist not just in the White House, but different federal offices across the country, which is a real win here. By law, there are chief data scientists or chief data officers in every federal administration office. And now by executive order, a combo of the AI person as well. Oh, didn't know that. That's pretty cool. So on your Twitter account, I think you have this note pinned right now. If folks who haven't looked at it, I would highly encourage to go read that. It's on a
Starting point is 00:34:51 White House notepad. It says, I'll just read the first lines that you have here, which is dream in years, plan in months, evaluate in weeks, ship daily. And then you have two other sections. I'm just going to read them. It's amazing. Prototype for 1x, build for 10x, engineer for 100x. And then you have what it's going to take to reduce the time in half and what do we need to double the impact. This is incredible. I've gone back and looked at this myself many times. I'm not doing anything close to what you were doing in the White House, but still it helps me think about problems. How did this come about? Yeah. So one, anybody can have the card because all everyone's card, it's, you know, stuff produced by the White House is, it's a people's house. So that's the card is, is everybody's. The way it came about, I was very fortunate
Starting point is 00:35:39 to give a commencement speech at University of California, San Diego about this. But the thing is, we were in a time in the White House, a very contentious time. We'd seen the murder of a number of innocent black youth, people of color. And there was a question of what's the policing that we would want in our neighborhoods? What's the policing we expect? We need to also understand that policing is a very dangerous job also. There are many police officers who are harmed and killed in the line of duty doing their job, trying to protect people. And so how do we balance this? How do we make this work? And so on our team, we had Lynn Overman, and actually this was Denise Ross, who was the second U.S. chief data scientist, Clarence Wardell, and a number of others had this idea of like, well, let's get a convening together of great people.
Starting point is 00:36:32 And Lynn Overman kind of helped steer this with a number of others. And they said, let's get the police chiefs, let's get the technologists, let's get civic activists all in a room, and we're going to lock them in a room with White House cupcakes until they come up with an idea. And when you're in a room, that's a very polarized, very intense meeting. And so how do you get it moving forward? And what I wasn't sure what to do and what to say when you get in front of these people who are looking at you trying to decide, like, do we trust you? How are we going to, can we work with you? And so I was trying to ask, like, how do we make some progress here? What's the lessons that we have had to actually make progress? And I think this is lessons that we've learned in building new initiatives, startups, new ideas. And those are the exact things that came to my mind.
Starting point is 00:37:20 Those three sort of snippets of blocks of like, i was starting from scratch what would i want to do and it's like every day we got to make some progress because days are gonna suck there are some days are gonna suck but if we just get one thing down we get one something we ship something every day we're gonna be going long we're gonna do something amazing but we've got to be dreaming in those years so we got to got to have true north. Then we've got to ask ourselves, okay, what's the right, like, let's not get hung up on big timelines. Let's prototype a lot. And then let's build for, you know, let's start experimenting, get a little bit bigger, and then finally engineer it to 100x. And then there's a hack.
Starting point is 00:38:01 Like, if you ask, like, what's it take to double the impact while cutting the timeline in half? We're on log base two. And so everything you prioritize now is exponential. And so the things that are linear drop or they're below the line. They're not interesting. And everything that's going to be that are going to have the big impact, those are the things you're going to go for. And so like if you think about it, we have an amazing country. We have a huge country.
Starting point is 00:38:32 We're tiny relative to many of the other countries out there. So if you're going to get to several hundred million people and you start with 10,000 people, we got to be on an exponential trajectory real fast. And so how do we start taking those pieces? And how do we make this a we collectively? Because at the end, when we left the administration, these programs covered 94 million Americans, starting from zero. And, you know, a lot of times people attribute this to the White House. It's all the people across the country who came together to do this. People who decided this is how I want my community to work, how I could do it. And what we ended up being is a catalyst, a glue.
Starting point is 00:39:14 And then saying, here's what we collectively can dream for. This is what we should expect from our social contracts with each other. And let's push to be better. And then there's a bunch of things, just like the lab model's just i need to help get congress on board i need to get state legislatures on board i need to get all these other i need to get society on board around certain ideas then we can go after these so many things how did you get like these people like you said you had to gain their trust right it's very hard to sell people we say hey we're going to be making incremental updates right without sort of painting that picture.
Starting point is 00:39:45 How did you go about doing that? Honestly, it's just getting people to come and show up together with an openness. And if you can get people to show up with a degree of openness to sit together and say, look, there's a huge divide. Are there things that we could get there that we could do together? Are there potential things to get there? There's surprising amounts we all agree on. We all agree that our kids should be able to go to school without being bullied. We should agree that kids should come home safe.
Starting point is 00:40:19 We should have them have food. Now, how do we accomplish those things is another thing, and that there's a lot of disagreement there. So what we said is, okay, if we're first is, we know that technology can have leverage. What if we just agreed, like there's a certain number of data sets that we could put out there on the web for everybody to look at, to compare, like maybe we'll discover some great ridiculous insights. And that's what, that's how data, the data-driven initiative did, sorry, police data initiative started. And then Morf also added that police data, data-driven justice initiative as well. But one of the things,
Starting point is 00:40:58 the other things that we do is something that we call scout and scale. Great ideas don't have to just start from, you know, your own brainstorming for a whiteboard. You look across the entire landscape of whatever you're working on, healthcare, education, technology, whatever it is. Somebody's tried something. Somebody's done something that's pretty spectacular. And then you kind of look at it and you go, huh, that's pretty cool. And you see somebody else who's done something slightly different. You go, that's pretty cool too. What would it take to scale that much more efficiently? Is there what are the common threads? What are the things we could do that could collectively lift things that can make that happen? third USCTO. This is a great framing for how to look at stuff and say, okay, this is how we do it. I'll give you an example of this concretely, early LinkedIn. This guy showed up to a friend
Starting point is 00:41:51 of Jonathan Goldman's, hedge fund manager shows up, kind of 2008 happened, downturn. He's coming to hang out with lunch for us, but he's always on these crazy fasting diets. And he's like, yeah, well, I'm trying to just, just you know record these videos for for math teaching math and you know just doing all this kind of stuff and i was like that's pretty cool and we were just talking through and he's just kind of doing all this stuff and i was like through some ideas he's kind of carrying it jonathan is helping him a bazillion more than i i i was um you know i was just give them a few thoughts only, Jonathan, and really was kind of doing the heavy lifting. That person is Saul Kahn, who then went on to create Kahn Academy.
Starting point is 00:42:31 And so it's just like, like, you're just like, oh, well, I saw some other people doing this at MIT and at X, I'd done a bunch of this work and, you know, trying to help rebuild the educational system in Iraq. And so it's kind of had some exposure to these issues. But you're like, oh, well, yeah, you should do this. Like, and then they take off with the idea. Like, how do you think about like spending, like allocating resources to do that? Like, is that someone like you kind of dedicate who is like very good at being able to kind of spot different trends? Or is that like something that you like, you know know have everybody have some kind of time allocation to uh to do i think it's all of our
Starting point is 00:43:10 jobs it's it's it's honestly all of our jobs to to do it denise ross who's that second uscto clarence wardell they were both originally uh presidential innovation fellows at the department of energy and they were supposed to be working on all this energy related things and somehow they found a way collectively over to me and they were like we got to work on this policing issues i was like great what do you think and they had like all these ideas and they're sharing what's learning and i was like awesome it's like how do i co-op them and steal them from the department of energy to work on this problem. Because they had, think about ideas I think about in terms of momentum. What is momentum? Momentum definitionally is mass times velocity, right? Mass you can think of as people or dollars. Velocity is a vector. So which way is it pointed? How far is the scaler and so you get kind of three things that
Starting point is 00:44:06 you get to play with there and so i'm like can i add more people to this can i add more dollars to this can i change the direction can i see how that scaler's gone where is it so far and then that gives me the ability to say this is the range of where the momentum could go. So if I'm starting from absolutely zero, absolutely zero, that's really hard. People think like, oh, Tesla started from absolutely zero. That's not true at all. Elon's not a founder of Tesla. There's a bunch of other people who are playing around with different ideas. People think like self-driving cars started from zero.
Starting point is 00:44:40 No, they forget the zero actually is a U.S. federal government. DARPA funded that. People think about the COVID actually is a u.s federal government darpa funded that people think about the covid vaccines rna vaccines funded by the federal government that is the spark and then the rest of the money the dollars of people that is funded that ecosystem comes together to as a in that analogy from a spark to nurture the flame And so I think it's on all of us to find and to foster that and to carry those ideas and then champion them into something novel or new. And sometimes that may be,
Starting point is 00:45:14 look, you can't do it inside your organization and it's time to go spin it out and do something. Like you guys decided to do this crazy podcast idea. Probably you're like, hey, well, I don't know we're doing this thing. And then you're like, hey, well, I don't know. We're doing this thing. And then you're like, well, maybe we should do it. And then finally you probably wrote something down.
Starting point is 00:45:32 And you're like, fine, we just got to send an invite and then figure out how to do it. And then it happens. And then you do another one. I lost the bet. That's how I ended up here. No, that's true. Right? But that's where fun happens. Staying in the lane doesn't help.
Starting point is 00:45:43 Totally agreed. There's no rules. There's no rules. There's no rules, only guidelines. So talking about momentum, by the way, we have a ton more questions to ask about your time in the White House, but we're going to fast forward a little bit. We're talking about momentum. At this point, it would be,
Starting point is 00:46:00 we wouldn't do justice to the topic if we didn't talk about it. Like AI, for example, there's a lot of momentum in that space. And about a year back, I think you joined Great Point Ventures and you're doing more active investing. I mean, you've been investing even before, but now you're doing it a little more actively. So looking at the momentum that you see right now, you see a lot of toy applications on the AI side.
Starting point is 00:46:22 At least that's the perception I have as a user. I use Copilot for code, super helpful tool, probably one of the best tools that exists out there in terms of the application of it. But apart from that, you see a bunch of toy things. So while there's a lot of momentum, there is also a lot of hype. Curious to know from your perspective, what are the things that you see which are underhyped and you're excited about? Well, I think it's useful to first rewind a little bit further back. Like, you know, I got super lucky. My dad was a professor.
Starting point is 00:46:54 My uncle's a professor. My uncle's actually a professor, one of the early professors of AI. And so while I was being babysat by some of the original AI luminaries. And so my exposure to AI has been going on since, like, I don't know, since before I can even remember. And so I've been very privileged to have a front row seat to many of the early innovations all the way through. In fact, early days, we thought about Monica, actually. Monica Rigotti thought about saying,
Starting point is 00:47:24 hey, we should you know maybe name the team artificial intelligence at linkedin i was like no absolutely not i was like we are still in the winter we're not and then and for people who are interested if you go to the linkedin learning you can there's a whole series of of kind of equivalent podcast courses where i interview people who've been really on that journey through the arc of data that you can kind of equivalent podcast courses where I interview people who've been really on that journey through the arc of data that you can, including Monica Rigotti, Jonathan Goldman, some of the others we've met, we've talked about. But the part that for AI that's there that I think we've seen is like machine learning, all these other pieces come in.
Starting point is 00:48:01 When we were building out the first national strategy for AI, really led by the president, one of the key things that we realized in that process is that we had to figure out how to really craft that agenda for the country around what AI was going to look like. And what that looked like is, one, talking to lots of people across the country, seeing what fits, how do we put all this together, and then formulating it in a way that was actionable. And this is a time where the country was really focused on generalized artificial intelligence, and we needed to be more focused. And so like, if we look at that, what did we get right? What did we get wrong? The first thing that we got wrong is you have to think about this in terms of automation,
Starting point is 00:48:49 like the overall automation aspect and not just AI. Second thing is what we got wrong is originally we thought, oh, it's going to be the displacement of truck drivers and short cooks, like people flipping hamburgers. It turns out dexterity is hard. Dexterity is really hard. The self-driving, not easy. It's like all the stuff comes together really hard in the current existing frameworks of system. What I think we've learned now is we've seen with transformers, this is just the next phase, but there's several more increments that need to happen. And computer vision is showing stunning results and there's so much more. But what I think is where we think that some of the
Starting point is 00:49:29 greatest change I think is going to happen is in places what I'd call stupid, boring problems. And these are problems where you're just like, why is that? Why do I have to do this? Like, why can't this thing just be filled out? Why am I just handing this piece of paper to this person to fill out something and go here and here? And if you kind of look around your life, there's lots of stupid, boring problems. You know, you're like, why am I cut and pasting this? Why do I fill out this doctor's form 15 times with all this stuff? There's a bunch of stuff that should just be auto-populated, those type of things.
Starting point is 00:49:59 And so those stupid, boring problems are the ones where I think we'll see the biggest lift for AI. And then that's going to turn it into a place where we're also, when you get rid of stupid, boring problems, you're also, your safeguards for what you're doing ethically, morally, transparently are much more clear cut. Because everyone's like, yeah, get rid of that problem. I don't like that. And you have much more grace in solving that problem without causing harm. And then you can evolve to some of the bigger problems. I think right now what we have is AI is a lot of features. They feel like features rather than full-blown products.
Starting point is 00:50:34 And so we're not sure exactly what to do. Now, obviously, they're getting integrated more and more, and it'll be interwoven. But there's a lot more that I'm very excited about. And I would say, frankly, if I'm thinking about some of the areas that I'm particularly excited about is, I think, opportunities for health care, life sciences. Think about it for addressing issues around mental health, loneliness. I think about it for reducing medical errors, catching medical errors, other issues. Helping people understand and take control of their own health care and be a better advocate. I think about it in terms of educational opportunities. I think
Starting point is 00:51:11 about it in terms of finding potential new scientific breakthroughs. We just discovered an increasing number of organic or inorganic compounds, like crystalline compounds, by an order of magnitude than we've ever had before. What might that do for new materials and other exciting things? That's a phenomenal jump of something that could be exciting for us. So you mentioned like healthcare and life sciences. One of the things which is super important in these domains is precision and accuracy. I mean, for example, if we use AI for something like mental health, for example, guiding people how they can get better care. But if you see right now, a lot of these LLMs, for example, if we use AI for something like mental health, for example, guiding people how they can get better care. But if you see right now, a lot of these LLMs, for example, they hallucinate.
Starting point is 00:51:50 They are not necessarily accurate. So when it comes to practical applications in domains where the margin of error is really small and the cost of getting something wrong is really high, what version do you see of this technology evolving into to actually help in these domains? Because it wouldn't be the hallucination that we see today. Right. So I don't think we're ready to insert LLMs directly as the replacement for a human. I think of it as an augmentation. Take, for example, something called crisis. There's an organization called Crisis Text Line. If're you know in crisis thinking about
Starting point is 00:52:25 suicide or any serious issues you text 74741 you're going to get connected to a volunteer you know very fast and same if you text to the national suicide hotline number 988 you're going to get connected to actually crisis text line or the Trevor Project. These organizations, what they do then is they have a huge dilution of information coming at them. So it's not to use AI to say, hey, you're okay. That's not the point. What you're doing is you're using that technology to triage and prioritize. So sorting, organizing it. And so there's certain things, like if you walk across the Golden Gate Bridge and you see you need help signs, those are going to go directly to Crisis Text Line.
Starting point is 00:53:08 Now they know they have a special short code because they know that right away you're going to the top of the queue because you're on a bridge. That's important. But they also know things like if you use certain words in combination that they need to escalate this right away. That you need to say, hey, this person, like if they use the word Tylenol, Advil, there's a high probability they're actually already overdosed. If somebody's a veteran and they say they have a firearm in their home, they probably have the firearm out with them. There's a high probability of that. So you need to change your paradigm for how you organize. So we don't think of everything as a uniform distribution or some kind of traditional bell curve.
Starting point is 00:53:46 We think of we actually segments more intelligently. And that's where AI can really start to help us in the example is like, let us be smarter. Let's catch the edge cases. Let's be more effective at speed and time to help a person actually deliver the right care. Same thing on clinical notes for a patient. A physician, you're sitting in the room, sitting there in your underwear, and you're like, I'm cold, how long are they going to be here?
Starting point is 00:54:18 They're outside. You can hear them flipping through the chart, and you're like, are they reading actually the whole thing? You want it summarized a lot of times. Can you summarize it in a way that's useful? Summarization is going to be much lower, but also you can back it up with also references or other type things that say, hey, here's where it is in the chart
Starting point is 00:54:36 or other type of information. That's when I think we start to see the benefits of these systems. But do I think it's like ready for the clinical care setting? Not yet. Are we on a route to there? I think we're there. We're getting there. But like, for example,
Starting point is 00:54:51 identifying people who are potential for chronic kidney disease, we should totally be using that, those systems right now to identify it. For people who might have a higher rate of sepsis, should we use it to identify populations that might be more apt for of sepsis? Should we use it to identify populations that might be more apt for some kind of harm? Absolutely. Let's help them out. And at the same time, as we've seen, unfortunately, with certain governments, authoritarian governments,
Starting point is 00:55:16 is we need to figure out what, because that technology can be equally used to identify groups that you want to harm. And we need to figure out what those are. And that's not just here in the United States. That is an international issue of how do we actually make sure this technology is used in a way that works for us rather than against us. And us is all of us. So like with great technology comes great power and with great power comes great responsibility. So as you mentioned that this technology can be used to do a lot of good plus could be used in negative ways too. Right. What are some of the aspects that concern you
Starting point is 00:55:51 with the advances being made, but not having a whole lot of guardrails around it? So there's a couple. The first is, I think it comes down to us recognizing if you think about the people who are going to work, the greatest population that's going to work on AI, that's in front of us. greatest population that's going to work on AI, that's in front of us. The people who are going to work on data, that's in front of us. I think there's going to be over 900 students this year who are going to graduate the data science major from Berkeley.
Starting point is 00:56:17 And I guess I can say it here because it's now public, but I'm excited. I will actually, I have the privilege and the honor to give their commencement speech. And so I guess even Zoom here thinks it's a good thing. So double thumbs up, thank you. So, you know, part of this is, if we think about what do they need to make sure to shepherd this world in a way that we would want. Number one, I think we want every student to be trained with a good foundation of ethics. Two is like, the ethics need to be integrated into the curriculum.
Starting point is 00:56:53 It can't be some like side class that you're barely paying attention to. It needs to be integrated. Second is like, you think about security and you hear about your healthcare data constantly being stolen. It's unbelievable to me that we would teach somebody how to design a database today, but we wouldn't tell them, we wouldn't teach them. Like, you know, like when you
Starting point is 00:57:10 submit your test, like you submit your database homework to say like, here's my database. It doesn't actually try to do, you know, injection attacks or something else. Like, shouldn't it, like, shouldn't we be doing those things? Cause like it's, if we're not thinking about those basic security, it's basically it'd be like saying equivalent of saying let's build a design a class to build bridges and then not stress test it to see if it falls under certain conditions. loads. And so I think we need to have that as one of our predominant ways of really thinking about what do students really need to keep on the bright side of this. Next thing I think is there is you have to have diversity in teams. You cannot build in this day and age or for this world if you don't have people on the team that look like the rest of the world and have experiences from the rest of the world and i got to see this firsthand like u.s is a very big place many cultures many different aspects many lifestyles lots of things and when you look around that
Starting point is 00:58:16 morning staff meeting room and you're like trying to solve these hard problems you are grateful that there are people that come from so many different walks of life because you can't solve a problem for the country if you don't have representation from all the different type of populations that are out there. Makes sense. So you've spent time in technology for so many years and you've also spent time, I would say, in policy to an extent working in the White House. One of the debates that you see about AI is like, should this be more regulated as a domain? And then you see the other side of it, which is, well, if you regulate technology, you slow down the progress and this technology can do so much good. So why slow it down? What's your take on this? I think the question, you know, is appropriate is when does regulation required?
Starting point is 00:59:03 And I think there's a couple of ways to think about it. First, why does the FDA exist? The FDA exists because it turned out that for a long time it wasn't illegal to poison people with medicine, and there were these kind of snake oil salesmen producing all this stuff, and they poisoned a lot of people. And it was like, oh wait, you mean medicine supplies aren't safe? That's not cool; we're not okay with that. You think about why we have the USDA: it's because people produced meat that had E. coli, and there were salmonella outbreaks and all these other things, and you're like, that's not cool, we don't like that. And that's when you put regulation in. The Department of Transportation regulates
Starting point is 00:59:49 things because you're like hey you know we do want to have rules around our roads to make sure when i get on the road like the probability of an accident is not ridiculously high like i want those things so we often put these regulations into practice because somebody's done something bad. The reason the SEC and Treasury look the way they are is because giant market crashes like that caused a recession. And so what we're trying to what we're seeing now is the beginnings of how AI is transforming society. And we have to ask, are we okay with that? And what are those regulations in there? Regulations start to come in is if we don't do a good job being responsible with technology. And so far, you know, myself point this at myself and spend a lot
Starting point is 01:00:36 of time working on this: if you look at the technologies we've built, they haven't really benefited everybody. There have been a lot of harms here. A lot of health issues, mental health issues; girls on Instagram is one that's often highlighted. But there are so many more, including disinformation and misinformation. And so we've looked at that, and that's people using propaganda to foment massive distrust around the world, destabilize people's lifestyles, and cause direct harm to certain populations. And so what's our answer to that? If our answer to that is like, eh, sorry, we just built the technologies,
Starting point is 01:01:12 that's not really responsible. That's not being a good citizen. And that's when regulation has to step in. There's another side of regulation that's also important to address: you can create what's called regulatory capture. You can create regulatory states in such a way that only certain people can play, because they're big enough or they have certain rules. And then there's no chance for innovation. And that sucks too.
Starting point is 01:01:36 And so finding the right balance with a technology that is so rapidly changing is really important. I'll give a very specific example. Think about the human genome and some of the first test cases that were out there, where people were trying to patent the genome, and think about how different our world would be, with the potential of genomic therapy, if people had just been able to say, okay, it's a land grab, we patent this part of the genome, and you don't get to work on it unless you pay us an insane amount of money. A lot of the therapies that are starting
Starting point is 01:02:09 to emerge right now would be off the table. So we need to find the right balance of these things, because at the same time, right now, a lot of those drugs that are coming out are going to be prohibitively expensive. Well, we are reaching towards the end of the conversation. We would love to continue chatting for way more, but we'll try to wrap this up. And before we let you go, DJ, we have one last question. You've mentioned in one of your commencement speeches that failure is the only option. Is there a favorite failure of yours? It could be from school.
Starting point is 01:02:40 It could be from any of the jobs you've had, the White House, or even now. I have failed so many times. It's epic. I mean, I got suspended, which means I got caught, which means I screwed up, right? But it's what you learn from each of the failures; it's how fast you get back up. What I do allow myself when I fail is I give myself 15 minutes to cry. Fifteen minutes to cry, and then it's like, what am I going to do next? And people forget, you know, after I left LinkedIn, I had a colossal failure on a startup, a $41 million failure, a photo-sharing app startup that I got involved in, that I thought could be really spectacular. And it was a dismal failure.
Starting point is 01:03:24 But what I was really fortunate enough to do was I had worked with so many people and helped solve so many problems for other people. When I failed, most people said, you're dead in Silicon Valley. People were like, oh, you add value. You helped us on something. We will help you. We still see. And they helped me get back up. They helped me kind of take another shot at the thing. And so what I would really emphasize here is not about the actual failing. It's the process after the failing so that you don't not only repeat the mistakes, but that you're smarter so you can really then deliver something of even greater value and try something even harder that is going to potentially change the world in a far more dramatic manner. Well said.
Starting point is 01:04:14 That's a very good point to end on. Thank you so much. Thank you so much, DJ. That's a great place to end. Well, it's perfect because the computer is about to shut off. I've got a low battery. So you couldn't have timed it better. Perfect timing.
Starting point is 01:04:28 Well, DJ, thank you so much for being here with us today. Loved the conversation. Well, thank you so much again, DJ. All right, guys. Sounds good. Thanks. Bye. Hey, thank you so much for listening to the show. You can subscribe wherever you get your podcasts and learn more about us at softwaremisadventures.com.
Starting point is 01:04:51 You can also write to us at hello at softwaremisadventures.com. We would love to hear from you. Until next time, take care.
