No Priors: Artificial Intelligence | Technology | Startups - If DNA is Code, Can AI Help Write It? Scaling Cell Programming and Synthetic Biology, with Ginkgo Bioworks Co-founder and CEO Jason Kelly

Episode Date: September 28, 2023

Ginkgo Bioworks is using DNA as code to digitize the cell programming revolution. Ginkgo is using AI and synthetic biology to keep the next pandemic at bay and accelerate our production capabilities for medicine, food, and agriculture. Ginkgo's co-founder and CEO Jason Kelly joins hosts Sarah Guo and Elad Gil to discuss protein foundation models for bioengineering, learning from specialized data and evolution, what we need to prepare for a future pandemic, and more. Jason has served on Ginkgo's board of directors since the company's founding in 2008 and has been a director of CM Life Sciences II Inc. (Nasdaq: CMII), a special purpose acquisition company focused on the life sciences sector, since its initial public offering in February 2021. He holds a Ph.D. in Biological Engineering and a B.S. in Chemical Engineering and Biology from the Massachusetts Institute of Technology.

Show Links:
Jason Kelly - Co-founder & CEO of Ginkgo Bioworks | LinkedIn
Ginkgo Bioworks
The Plausibility of Life: Resolving Darwin's Dilemma

Sign up for new podcasts every week. Email feedback to show@no-priors.com
Follow us on Twitter: @NoPriorsPod | @Saranormous | @EladGil | @jrkelly

Show Notes:
(0:00:00) - The Difference Between Software Engineering and Biological Engineering
(0:06:51) - Abstractions and Infrastructure in Synthetic Bio
(0:09:23) - The Role of AI, Foundation Models that Speak Biology
(0:13:17) - AWS for Cell Engineering
(0:17:52) - Where Are the AI-Discovered Drugs? And Data at Ginkgo
(0:19:12) - Pandemic Response and Biosecurity in the Age of AI
(0:22:47) - The Likelihood of Existential AI Risk from Lone Actors Harnessing Viruses, and the Need for Defense-in-Depth
(0:31:47) - Will Progress in AI Be Biologically Inspired? And Evolution

Transcript
Starting point is 00:00:00 Biology is undergoing a digital revolution as we build developer tools and production infrastructure for synthetic biology. How will it change industries? How does it intersect with AI? And how do we rethink biosecurity? This week, Sarah and I are joined by Jason Kelly, co-founder and CEO of Ginkgo Bioworks, to discuss their goal of making cells as easy to work with as computers, their data strategy, and the tech keeping the next pandemic at bay, and in general what cell programming will do
Starting point is 00:00:30 for the future of food, medicine, and agriculture. Jason, thanks so much for joining us today. Yeah, thanks for having me on. So I think there's a lot of talk about synthetic biology and how biology and DNA and proteins are effectively just code and you can manipulate them in different ways now and things like that. I'd love to just get your view of both what Ginkgo does
Starting point is 00:00:47 as well as what does synthetic biology actually mean. So I think the founding idea of synthetic biology is that DNA is code, right? And inside of cells are A's, T's, C's, and G's, essentially on like a tape. And it is very, like, surprisingly analogous to zeros and ones, you know, inside memory in a computer. That's roughly where the similarities end, okay? Like, once you get to the next step of what the cell does with that code, we are in a totally different world. It is not virtual, is the first thing, right?
Starting point is 00:01:17 It is a physical thing. The code itself is literally physical, right? It is a polymer. And it is going to use that to make proteins, which are basically little pieces of nanotechnology, and they're all going to be bumping into each other, and it's all crazy. It's not physically isolated like you would imagine
Starting point is 00:01:32 with a semiconductor chip. It's not built by humans. So you have this really interesting thing where the hook is there for people in tech to engage with biology. But then once they get in, they're like, what the fuck? And so like, I'm happy to talk about those pieces,
Starting point is 00:01:47 but I think you're right. The core idea of synbio is that it runs on code. And then what can we bring over from programming into this world that actually sticks? And so I think what synthetic biology has been, and really since it got going, I met the founders of Ginkgo back when we met at MIT in 2002. That was like early days of synbio.
Starting point is 00:02:05 It's about 20 years now. It's basically engineers asking the question of what can they bring over into biology that's actually going to work. And some stuff has been left by the wayside and some things do work and the latest technology that's being tried now is AI. Can you walk us through what you actually think does transfer over and then where there are one or two unique challenges and then how does AI help to solve for some of those things? I'll tell you like a funny story, right? So one of the fellows I started the company with was this guy, Tom Knight, right? And Tom Knight started on the faculty at MIT in 1972, okay, right? Like mainframe computers, punch card computers.
Starting point is 00:02:37 He was a computer architect for a very famous minicomputer, which was like the size of a refrigerator, called the Lisp machine, okay, like Symbolics, he's one of the founders of that company, like old-school, classic, Steven Levy Hackers-in-the-book kind of guy, right? Mid-90s, he realizes this thing about DNA is code. And basically it's like, forget computers. I'm moving into programming DNA. He's still Tom, right? He's been teaching the semiconductor course
Starting point is 00:03:00 for 20 years at MIT at this point. Opens a wet lab in the MIT computer science building. Starts growing bacteria, freaking everybody out, right? And he puts up this flag, and he's like, hey, computer scientists, like DNA is code. If you're interested in this thing, like come over and try it out, right? Some of us came over and we're like, all right, cool. We got there, got our hands wet, and we're okay with it.
Starting point is 00:03:20 A lot of computer scientists, they get there. Tom's like, okay, here's the lab bench. Remember, this code is physical. So if you want to compile it, I'm going to have to teach you how to do molecular cloning. And here is a pipette. And you're going to sit at this bench and you're going to do these steps. Okay? And the person would do them.
Starting point is 00:03:36 And they'd get a result the next day. They're like, wow, that's really interesting. And then they'd do the same thing again the next day. And they would get a different result. And they'd be like, Tom, I just did the exact same thing. And I got two different results. Like, what's going on? And he was like, you're never going to know.
Starting point is 00:03:51 And it would just break their brains, right? Like, they'd be like, fuck this. I'm out of here. You're leaving a world in computer science of like pure logic, right? At the end of the day, if there's a bug, you can always run it to ground. And that's because A, these are systems that run like clocks, B, we design them. We design them, right? And so, like, you at the end of the day can go in and figure it out.
Starting point is 00:04:14 And in biology, it's like sometimes you can, right? And sometimes it's just a part of the biology that we just frankly don't understand well enough that's broken and like tough luck. And you got to like stomach that. And one of the things, my favorite things about AI is like, how does that neural network work? Nobody knows. Yeah, yeah. Like you're about to experience like the analysis of these neural nets is going to look like systems biology, right?
Starting point is 00:04:38 It's going to be like go in and like try to back figure out a thing that you didn't design, my friends. And so like that'll be your first taste of really feeling like a biological engineer, right? But like, why bother? Like, why work with these neural nets that, my God, you actually can't easily debug and understand why it's hallucinating and all this stuff. And the answer is, because they're powerful. It's worth it. And that's the same reason you want to do biological engineering. Like, even though it's unpredictable, even though it's going to be so frustrating, it's not going to do
Starting point is 00:05:07 what you want and blah, blah, blah. People are going to, like, it's because the substrate is incredible, right? It self-replicates. It self-assembles. We have nothing else like it in the physical world. So you want to work with it, even though it's hard. That, you know, that's what, that's what ultimately gets people passionate about this stuff. Yeah, it totally makes sense. And I think one can argue that neural networks are actually heading even more in that direction because as people build systems that can code themselves, we're going to end up with evolutionary systems that are completely non-designed.
Starting point is 00:05:33 Yeah. And I think then we're truly in the world of biology where you have evolution kicking in. And, you know, to your point, evolution is really messy, right? It's always optimizing for the utility of something versus the form of it. It reuses parts aggressively. It creates enormous redundancies in weird ways that you don't know that there's a perturbation here, and it propagates across, like, in weird ways. And so I think people are really underestimating what happens once we have self-evolving neural nets, which I think is coming
Starting point is 00:05:59 quite soon. Be still my heart. It's going to be great. It's so cool. I mean, that's so cool. It's so worth it. Yeah. I mean, it's neat because there's a magic to it, right? Like, again, people like different stuff. And, like, I get that. I think this would be, like, one of the big cultural divides. There's a certain kind of mind that really likes things to be predictable. There's some people who just like the magic, right? Like, some of what's cool about biology is how hard it is to understand. So how does Ginkgo go about harvesting all this, you know, this shift in biology in terms of the ability to manipulate these systems using molecular biology and molecular cloning
Starting point is 00:06:30 techniques and then software tooling and other things? Can you tell us a bit more about the company and where you focus and what you've done with it all today? So one of the ideas that we tried to bring over from computer science was abstraction, right? And what is abstraction? Well, in Tom's era of computing, in order to be a computer scientist, you had to be an electrical engineer. Because how do you program a computer if you don't know how a computer works? Now, obviously today, like, an eight-year-old is able to program a thing on their iPad by drawing boxes
Starting point is 00:06:58 around. It's like, what happened, right? Like, well, assembly language, operating system, programming language, graphical programming language. We built all these abstraction layers to split the disciplines of electrical engineering and computer science into their own paths, both of which had very long roads, right? And so one of the big things we did at Ginkgo, we started the company 15 years ago, which is like an unimaginably long time here, was to do that split from the get-go. So we have part of our infrastructure, we call it a foundry,
Starting point is 00:07:23 taking a page from semis, that is basically a group whose whole job is to automate and scale the lab work, okay, and move away from a system where that lab work is being done by hand by a scientist. And then the DNA programmers, who are really typically scientifically trained, Ph.D. biology types,
Starting point is 00:07:42 they order from that system to get their work done. That is actually very difficult to pull off. It's culturally difficult because, like, a scientist does not want somebody else to do their experiments if they're a good scientist. You know, like there's a whole long laundry list of why it's hard. Not to mention that when you first try to build the infrastructure, it sucks. Okay. And so the foundry team has been able to drive enormous scale economics in doing the lab work, which gives us the data of lots of different genetic designs that we've tested, which is exactly going to be useful for the AI stuff, but it's also just generally useful, right?
Starting point is 00:08:13 because you've got to try a lot of designs to get the cell to do the thing you want it to do. And so that's been probably like one of the biggest activities the last decade at Ginkgo, like, driving that scale. Jason, what's the right way to think about the abstraction between, like, your customers and then your DNA programmers? Like, what's the spec that gets passed over, or how should we understand the science they do versus you? So today, the way that it works is basically a customer of Ginkgo's would be like, you know, like a recent customer is Merck. Okay, like Merck, Novo Nordisk, Biogen on the pharma side, Bayer, Syngenta, Corteva, the biggest ag companies in the world, all customers, then a lot of startups, right? And the way they interface with us is that we basically are agreeing on a spec. We're like, okay, here's what I would like
Starting point is 00:08:55 the cell to do. They tell us. And we agree on like a timeline to develop it. And we're kind of like a prop software development shop. Like we're going to like make it for the customer and then license it to them. And they'll take it, and they'll own it to go develop their product. And in exchange for that, we'll get a royalty and we'll also get some payment along the way. All right? That's the business model. Today, their scientists don't use our infrastructure. My scientists, our scientists here at Ginkgo, they use the infrastructure and they have this interface with the customer about hitting goals. That's mostly a technical limitation. I think it would be very cool ultimately to have scientists at all these companies
Starting point is 00:09:31 accessing our infrastructure directly. It's just too early. That's the problem. And then how does the AI come into the picture? When and how did you start using it, and has this current wave of AI impacted you, or how much do diffusion models, LLMs, et cetera, matter relative to what you're doing? So the short answer is, like, we do a lot of protein engineering. So, you know, you want to program a bacterium, right? Okay. So a bacterium has a three-million-letter genome. And, like, a customer has asked you to express a protein. And remember, proteins are basically like the little pieces of nanotechnology inside the cell that bump into each other and, like, do all the things. Like, you're sitting there. You're like a big giant bag of
Starting point is 00:10:09 proteins, right? And so, and so, like, they want to make a lot of this protein because it's going to go into cold-water laundry detergent. Okay, so people don't realize this, but, like, the reason cold-water laundry detergent doesn't need hot water is because there's enzymes in there, okay, proteins. And so they want to make a lot of this protein. And by the way, if they could make it more active, like break up dirt faster, like whatever reaction it is catalyzing. So if you remember chemistry class, like a catalyst makes a certain chemical reaction happen faster than if you don't have the catalyst. Okay, so this enzyme is a catalyst, and I want to make it also just better. So I want to improve the quality of the enzyme, and I want to make a ton of it. All right,
Starting point is 00:10:46 that's the spec. And so how do we do that today? Well, we would have, for example, a host strain that's really good at producing a lot of protein to begin with. Okay? So think of that more like an existing software library. So that's one form of leverage from existing data assets. It's literally like a hard physical asset, an actual microbe with a genome that I engineered in a project previously that is useful for your project. So now, now we're starting from a good place. We're already starting to make a lot of protein. But you want to make it more active. You want to make more of the catalysis. Okay, how do you do that? Well, remember, that protein is encoded in DNA, and the sequence of DNA determines effectively everything about that protein,
Starting point is 00:11:28 but in this case, what you care about is how good of a catalyst it is. And so you go in there and you have certain tools to try to model the protein, this, that, and the other thing. And you make some choices, along with the software tools, and you say, I want to try these 1,000 designs of the DNA in the lab and see how they do. All right. And you then get that data back on how they perform. You use that to update these design tools you have and you do it again. And that's what we've been doing at Ginkgo for a long time, including with neural nets and all this stuff, the latest and greatest, you know, all that. But just on that, like, data asset, I think the new idea, on the back of that, and we just announced a deal with Google a couple weeks ago, the new idea is,
Starting point is 00:12:12 can I make a foundation model that will be additive to what I've previously been doing just with the data I was getting on enzymes? And so now I have a foundation model that really is not specific to catalysis or anything else. It like speaks protein, you know, right? Like, just like GPT4 speaks English, right? That's what we're going to try out with Google. That's what we think is like a really new idea. And people, you know, there's people obviously work on it. And obviously Google themselves
Starting point is 00:12:39 with Alpha Fold was like one of the first generations of this. But we see a lot of ways to make it better and make it bigger and all the things. And so we'll see. And we'll see how it goes. Let me just also give you like one other thing
Starting point is 00:12:50 why I think bio is particularly interesting for folks that are interested in AI in general. So this whole idea of like a foundation model plus fine-tuning with specialized data, right? Like, all the people that pay attention to AI understand this idea, right? So let's just take one of the categories of English, like, legal, right?
Starting point is 00:13:06 Like, LexisNexis, they have all this data. We're going to fine-tune GPT. All the stuff, okay. That thing has to compete with, like, a lawyer at Ropes and Gray, right? And a lawyer at Ropes and Gray has trained for 15 years, you know, being taught by other humans how to do law. They are writing contracts that were designed to be understood by human brains. They work the way we think.
Starting point is 00:13:27 They're writing that contract in English, a language that has co-evolved with our brains, you know, just language in general co-evolved with our brains, to also give us leverage from how our brains work. And we're asking this computer brain, a neural net, to compete with us on our turf. But it's a pretty high bar that it's got to compete with. Now let's go over into biology. I remind you, it runs on code. Sequential letters feel a lot like language. It ain't our language.
Starting point is 00:13:56 Right. We did not invent it. We did not understand it. We did not speak it. We did not read it. And so I feel like these computer brains are going to kick our ass a lot faster in this domain than they do in English, right? Like, you know, like I think the applications in AI are all going to be like, replace the intern, not the partner at Ropes and Gray, at best for a while.
Starting point is 00:14:22 Whereas over in bio, it should be, it could quickly become the best. If you're looking to understand, like, where AI is really going to flip the script and not be like kind of a low-level, like, Christensen-style disruption, which I think is sort of what's happening in English, but rather be like a splitting of the atom, it's bio. When people talk about protein folding related models, you know, and to your point, there's things like AlphaFold and there's a few new companies that have been set up to basically focus on protein folding models because of the breakthroughs in AI, they kind of divide into a few markets, right?
Starting point is 00:14:54 There's sort of the pharma market, which is better design of pharmaceuticals and biologics that are used as drugs. And then there's more of the industrial, the catalyst, the ag, those sorts of things. I'm sort of curious, like, how do you think about those relative markets just in terms of sheer market size? Because if you look at the cost of developing a drug, it's like $1.5 billion per drug, but very little of it actually goes into the underlying molecule that's being used, relatively; most of it is clinical trials.
Starting point is 00:15:20 But for ag or for catalysts or for other things, a lot of it could actually go into the molecule. So I'm just sort of curious how you view those as relative markets for this kind of stuff. So Ginkgo has like an AWS-style business model, right? Like, you need compute. I don't really care, if you're AWS, whether it's medicine or a startup or video streaming, like, rock and roll, right? So we have an attitude that like we're supporting cell engineering wherever it is.
Starting point is 00:15:42 It is definitely different by market, like pretty substantially, right? Both the assets you need, the enterprise sales, everything is different. And the biotech, you know, it's not a small industry. But it depends, like, which side of the house you're looking at. If it's fees, it's probably a little more in the more valuable markets, where you can get more research fees. But royalty is a different game, right? Like, certain things go to market a lot faster.
Starting point is 00:16:05 That's how I see it. But the real problem is, like, there aren't any platform services. So, like, the other thing that's just wild to people in the tech industry is, like, where's the fast platforms? Like, where's all the horizontal stuff? Where's the operating systems? And it's like nowhere. Like, like, vertical integration, vertical integration, vertical integration.
Starting point is 00:16:21 Merck, Pfizer, Bayer, Syngenta. Like, every one of these companies is, like, its own tech stack top to bottom, you know, and the closest thing they do to anything vertical is buy equipment from the same people. But like, it's really fascinating, like totally different industry structure. And so that surprises people, right? Jason, what's the most rational way to understand that? Because you look at that as a non-bio person and you're just, yeah, it doesn't make any sense, right? So the rational way is the work being done cross-product is too dissimilar to support common platforms. I don't agree with this. Obviously, my entire business model is predicated on that statement
Starting point is 00:16:59 being false. But that is the reason. You're like, oh, well, because the platform you build for customer A will never read on customer B. And so now you're just bad, right? Like, you're having to build a new platform for every customer. You're not getting, you know, leverage. You get none of the reasons that, like, operating systems and data centers work. Like, why do data centers work? Because compute's really generic, right? And, like, you can use software to make it different, but the hardware underneath is all common. And now we're seeing a little edge case difference here, right? Like, actually, the CPUs aren't that useful for the AI.
Starting point is 00:17:30 Now everyone's freaking out about the GPUs. So we're having, like, an instance of, like, hardware variability. But the argument would be that type of hardware variability you're seeing is sort of like per company, okay, right? Like, you know, or at least per modality in pharma, which modality is like a fancy word for type of drug, right? Like, gene therapy is going to have very different stuff than, you know, whatever. And there's truth to that, right?
Starting point is 00:17:49 Like, it's not a false statement. It's just a question of degree. And we happen to believe that when it comes to the engineering of organisms, that it is in parts common. But plenty of people think we're wrong. Yeah, it seems like if you just go back to this analogy of like human-designed computation, where you're building systems from the ground up that you can understand. But you're all discovering the same set of systems with common building blocks and the need for data analysis. It would shock me if that would not eventually be true, and it's like a temporal, cultural figment of these companies.
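A minimal sketch of the design-test-learn loop Jason described a few minutes earlier: propose a batch of DNA designs, measure them in the lab, use the results to update your design tools, and go again. Everything below is illustrative only, assuming a toy assay function and a simple preference-table "model"; none of the names or logic are Ginkgo's actual tooling.

```python
# Illustrative only: a toy version of the propose -> measure -> update loop.
# The "assay" stands in for a real lab measurement (e.g. enzyme activity),
# and the "model" is just a table of per-position residue scores.
import random

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def lab_assay(seq: str) -> float:
    """Stand-in for a real activity measurement of one protein variant."""
    target = "MKTAYIAKQR"  # pretend this sequence maximizes catalytic activity
    return sum(a == b for a, b in zip(seq, target)) / len(target)

def propose_batch(parent: str, batch_size: int, prefs: dict) -> list[str]:
    """Propose variants by mutating the parent, biased by what scored well before."""
    batch = []
    for _ in range(batch_size):
        seq = list(parent)
        pos = random.randrange(len(seq))
        weights = [prefs.get((pos, aa), 1.0) for aa in AMINO_ACIDS]
        seq[pos] = random.choices(AMINO_ACIDS, weights=weights)[0]
        batch.append("".join(seq))
    return batch

def run_campaign(parent: str, rounds: int = 5, batch_size: int = 1000) -> str:
    prefs: dict = {}  # the "design tool" that gets updated each round
    best_seq, best_score = parent, lab_assay(parent)
    for _ in range(rounds):
        designs = propose_batch(best_seq, batch_size, prefs)
        results = [(seq, lab_assay(seq)) for seq in designs]   # the lab data
        for seq, score in results:                             # update the model
            for pos, aa in enumerate(seq):
                prefs[(pos, aa)] = prefs.get((pos, aa), 1.0) + score
        top_seq, top_score = max(results, key=lambda r: r[1])
        if top_score > best_score:
            best_seq, best_score = top_seq, top_score
    return best_seq

if __name__ == "__main__":
    print(run_campaign("MKTAYIAKQA"))
```

The point of the sketch is just the shape of the loop: the measurement data generated each round is what improves the next round's designs, which is why the scale of the foundry, and the data asset it throws off, matters.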
Starting point is 00:18:24 Okay, maybe one more general question about how to think about AI in pharma writ large. Why do you think we haven't seen AI-discovered drugs yet? Because people have been talking about it for a long time. Will we, and will we see it soon? Well, first off, I would say, like, the "people have been talking about it for a long time" part, it's sort of like saying three years ago, like, why haven't we seen good natural language processing in AI? People have been talking about it forever, right? So I think there is an element of, like, you need the breakthrough, right? Has the neural net been big enough? The big limitation in bio is the
Starting point is 00:18:56 availability of data to train these things, right? And so you have this tough situation where, like, everyone who's doing these models is training on the same data, right? And so one of our advantages at Ginkgo is we just have a ton of data. That's a real gap. I think partially it's like, have people gone big enough for it to have happened yet? And I think now people are trying, like, we're going to try. Recursion is another great example. And like, yeah, it might still not be big enough, or more likely it's not enough data, right? And like, there's nothing stopping you from making a giant neural net at this point. You know, like, the tech industry is going to commoditize that infrastructure. But like, you might not have enough data to give it to solve
Starting point is 00:19:31 the problem you're asking, Sarah, right? Where does Ginkgo's data come from? Is it like your own experimental data? Yeah, we have a 300,000-square-foot robotic lab that we built in the last 10 years. And so we generate that. And we do it in service of our customer projects. We can do our own data generation. But yeah, that's where it comes from. I want to talk a little bit about one area that Ginkgo has been an expert in, which is infectious disease, right? Can you talk a little bit about the work you guys do here? And I think a question everybody cares about is like, are we prepared for another global pandemic? What has changed since COVID-19? Yeah. I think like the reality is infectious disease is really scary and bad, right? But like the big, the big lesson of COVID
Starting point is 00:20:12 is that modern health care systems and our current infrastructure do not render us immune, even in the developed world, from pandemic-scale infectious disease, period. We can't just allow ourselves to not have defenses
Starting point is 00:20:28 against things that are like society killers that are known to exist. You know, right? Like, this is not like, like, you know, a fanciful idea. Like, it freaking happened two years ago, right? You know, and so the, so what should we build, right? And the answer is, like, a lot of different things, right?
Starting point is 00:20:44 Like, we should build rapid vaccine response, which is really good through Operation Warp Speed and kind of what we figured out with mRNA vaccines, and just like, I got a target, I got vaccines for the entire country in three months. Not every version of this thing is vaccinatable. The other one we've been big believers in is like monitoring, like radar. I grew up in Florida, right? Like, we have radar systems that warn us about hurricanes. Okay?
Starting point is 00:21:05 My co-founder, Tom Knight, obviously a lot older than me, was explaining to me that when he was a kid, they would get three hours' warning for a hurricane. Currently, we find out about hurricane COVID after it has landed in New York City a week ago. Okay? Like, unacceptable, right? So one of the things we're doing is, like, with the CDC, we run programs, we collect wastewater from inbound airplanes,
Starting point is 00:21:29 and we sequence the DNA, and we look for pathogens. We monitor variants and all this, both for flu and COVID, and I can add other things to that list. We have a similar program, actually, at the Doha airport in Qatar, we've got a program in Ukraine. And think of these like bio radar stations. And that gets you a baseline. Because you also want to look for anomalies, right? So like that whole thing has been missing. And so we think that's like where you start, and then you want rapid response, where you basically patch. Think like cybersecurity. Like it should
Starting point is 00:21:58 feel like that's your answer, right? Like, cybersecurity, I think, is a bit the mental model for what the future of infectious disease response looks like: persistent monitoring, rapid response, kill it. And the beautiful thing is, and it's scary and beautiful. Remember, these things replicate. So if you can snuff it out at the beginning, you win. Like, speed will matter, right? I'm chair of a national security commission down in DC on emerging biotech. And, like, the DOD just put out their biosecurity, it's basically a biodefense posture review. And like, the DOD maintains like millisecond preparedness in this country, right? Like, I know that seems crazy, but like that's kind of like how you ought to treat these things. Yeah, you actually saw that with SARS, right?
Starting point is 00:22:37 SARS, they basically snuffed out the original form, but then it leaked four times in the first two years after it was cultured in a lab. It kept leaking from what eventually became the Wuhan Institute of Virology when they moved it from Beijing to Wuhan. And it was really rapid response in terms of shutting down SARS outbreaks that really helped prevent it from spreading. And so I think to your point, there's good precedent in terms of trying to prevent spread and having it be effective, assuming that the coefficients on the disease spread are reasonable. Yeah. Certain ones are going to be harder than others, right? But like, you don't even need any precedent. You just need logic. If you could get it early, you win, right? Now, there's a question of how hard is that? And, like,
Starting point is 00:23:13 COVID was a lot harder than MERS, right? Because for a variety of reasons. But there's probably a level of tooling that could even have stopped, that could even, like, snuff a COVID since we had nothing when that happened. Sure. I guess one of the things that people in the AI safety community bring up quite a bit is that one of the big risks that are associated with the use of AI and LLMs and these foundation models for biology is that there's some risk of a lone actor somewhere deciding to build a virus that is infectious and deadly and can sort of run through the population rapidly. How much of a risk do you think that really is? The idea that we know how to like exactly like design for that sort of thing is low. You could try something. It's not like, oh, I know for sure.
Starting point is 00:23:53 It's just someone's waiting to do it. Remember, you need data. It's hard to accumulate that. Yeah, it's easy for me to accumulate data on enzymatic catalysis. It is a little bit hard to accumulate data on case fatality, right? And it's very hard to do because the argument I always hear from the safety community is, oh, the lone actor will, of course, be somebody who isn't that well-versed in biology anyhow, because the people who are well-versed in biology are unlikely to do these types of attacks. So it's kind of this really weird needle that's threaded in the community to try and make arguments that, to your point, seem to not really hold up relative to the reality of
Starting point is 00:24:24 what's needed to actually pull something like that off, at least today. I would basically agree with that today. The only thing I would say, though, is, like, we are unacceptably exposed to these things. Like, we would not tolerate, like, in our computers, our human defenses against viruses. Like, we would not allow our computers to be as exposed to viruses as we allow ourselves to be. Okay? We're talking about, like, technological solution, things in the background, right? And there's an entire edifice handling that, right?
Starting point is 00:24:55 Including, like, detection, letting other nodes know all around the world, all this stuff, like, but all happening in the background. That's what this should feel like if it's done properly. So whether you're worried about a lone actor or not, you don't even have to worry about that. Nature's going to toss it out at us again. We should have it ready for that. Yeah, I think you laid out like a really rational program around pandemic response. And I think most of the people lay out things that are, I feel in some cases, actually
Starting point is 00:25:17 subtractive. So I think your points on global monitoring make a ton of sense. Your points on having rapid response, vaccine generation make a ton of sense. So I think those are like really smart, grounded approaches. It's kind of interesting because one of my big lessons from COVID, when I looked at the biosafety levels that were actually enacted at some of these labs, and you're collecting large masses of bat viruses, right? And I remember when I used to work in a lab at MIT, I'd be working with different viruses and different agents and things like that for gene
Starting point is 00:25:41 therapy purposes. And you look at the biosafety level and there'd be somebody in a hood and they kind of rub their shirt and they'd walk out the door. And, you know, one of the things that I almost get comfort in is I'm like, wow, there's been so few actual lab leaks over time relative to the poor behavior in labs themselves, that it's really hard to actually have something jump into humans. Well, I mean, look, Elad, I mean, to some degree, we are the evolutionary product of being able to defend against that stuff, right? Like, you know, like, if it was easy, we, like, we wouldn't have made it, right? Like, you know, like, there's other species that, like, didn't get our, I mean, didn't get the immune system we got. And what you have in every organism on the
Starting point is 00:26:19 planet is the integration of four billion years of incident solar radiation on the entire planet. They've been trained over evolutionary time. Yeah. A time that we have a really hard time comprehending how much energy that is, because of the time scale. Yeah. One of the things that I think is really unique about Ginkgo is some of the decisions you've made as you've built a startup. So for example, I believe you have super voting shares for all the employees as, like, a public company. Could you tell us a little bit more about ideas like that that you've enacted and how you thought about them? It's really cool stuff.
Starting point is 00:26:46 Yeah. We're kind of a weird bug, right? Like we started out of grad school in 2008, right? So like straight out of school, totally common in tech. Not at all common in biotech. Okay. Right. So we were like unfundable, we couldn't raise money. It was 2008. And we weren't developing a drug. And so like biotech people don't back that. We were developing like a platform for programming cells. And the tech people wouldn't back it because like, what are you kidding me? A wet lab. And so we were like not fundable. We did five years of government grants. DARPA, ARPA-E, NSF, SBIR. So we were the first biotech to do YC, because Sam had just taken over from Paul and he wanted to do nuclear, biotech, all these things. Right. I do think, like, the entrepreneurial energy, right, is common. So even if you're a hard tech company, I don't think there's like a different, you don't need a
Starting point is 00:27:28 different entrepreneurial training than what, like, YC has perfected, you know, right? Like, that's a good thing. So I think there you're okay. I think we were, like, finding out too late that things like social networks and large-scale tech platforms also have enormous real-world consequences, but they were sort of getting a pass because they were in the world of bits. And people are like, yeah, bits, you know, right? Like, whatever.
Starting point is 00:27:47 But once you're talking about a drug, you know, you're putting a thing in a kid's body. You know, like, it's like a medicine, you know, like, there's just stuff that's like, you know, you can't fuck around with it, right? And so we're building a powerful platform at Ginkgo. So one of the questions was, who should control it? Like, how do you make these decisions about, like, who can use it? Like, platform ethics stuff. All the stuff that now is being talked about in AI, because AI is finally making people be like a little bit like, oh, bits, you know, right? Like, maybe it is scary. And so, like, it's just a little more at the front for those of us that have already been hanging out in atoms world, right? And this is why people, like, I think there's like a hard time between pharma and tech in terms of cultural, like, overlap, because, like, the tech people feel like the therapeutics people are, like, losers and slow because they're lame and not ambitious, but these guys have clinical trials where people die, right? And so, like, it's being at the coalface of actually building things that really inflect on people in the world in a way that's not second order, like information technology, creates a different kind of culture,
Starting point is 00:28:47 okay? And so who should control a platform that, if we're right and it's successful, will powerfully read on people's lives, right? One answer is like the founders, right? That's Facebook, right? Like, Mark's got super voting and his kids do, or some insanity. The other option is capital markets, right? You know, BlackRock, you know, right? Fidelity. Like, that's just every normal corporation that isn't founder-controlled in Silicon Valley, where, like, the voting shares are a majority held by arm's-length capital. And they, if the CEO is not doing what they want, they bring in a different board and fire the person. But like, it comes down to: the control is, whoever has the share votes,
Starting point is 00:29:24 once this is a public company, ultimately hires and fires the CEO, and ultimately then sets the control of the platform. Okay? And so at Ginkgo, we took the Silicon Valley idea of founder super voting shares
Starting point is 00:29:36 and extended it to the entire employee base. So it's not just us. Anyone who's at the company, and it goes away if you leave, gets 10x voting for their B shares versus the A shares, which have 1x voting. And so the way the math works
Starting point is 00:29:49 is that if the employees own more than 9.1% collectively, effectively you multiply that by 10, and it outvotes the remainder. And so the theory was, who should control the platform? Humans. Okay, not divorced capital, because that's not their priority. They're kind of like, my job is to get a high return. At the end of the day, it's your job, company leadership, to, like, decide how to do it. But in reality, what that means is, for company leadership, primacy is the return. Okay. And so that seems eh. And then what we've decided, as a persistent thing, is the employees, okay, the workers, because they,
Starting point is 00:30:22 are humans. And they actually work there and they go home to their families on Thanksgiving and have to explain why they work at this company and are proud of it. And that maybe, long term, this is a theory, is a good group to give governance to. Yeah. It's a really cool approach. And I think a lot of the early sort of super voting share stuff was pioneered in the media world, right? So the New York Times, the same family still controls it, you know, 100 years later, because of super voting shares in the family. And that's why it's the New York Times. Sure. The only reason that place is the New York Times is because, like, humans have control, not capital. So I guess one potential question about the model, because I think it's a really smart and
Starting point is 00:31:02 unique model, is sometimes CEOs have to do really unpopular things. And if you don't have the founder authority and you come in and you do something really unpopular, maybe you do a big RIF and the people who are left are really upset about it, but you really have to do it for the business to survive. Or you make tough choices that may be at odds with the employee base. How does that impact governance later, where, to some extent, you could argue the motivation? If you're not really answering to your board, you're answering to your employees purely, you may fall into more dynamics around popularity contests or trying to appease people around tough decisions to make. And as a founder,
Starting point is 00:31:38 Agree completely. Yep, I don't dispute any of that. So, and the answer is share voting. Okay? So it's not like one person, one vote. Sure. So how do you accumulate more shares at a company? Work there longer, okay, to build the value, like work there when it was cheaper and grow the value. So there will be a weighting against that. It's all theory, I don't know, I don't know, but like what I'm imagining is you'll have, you know, employees who own a lot and are like, yeah, that's a hard decision, but it's right for the organization, right? And like, I don't think that that's out of school. I think you can see that happen. But we'll see. I don't know. We're trying it.
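As a quick check on the 9.1% figure mentioned above: employee class B shares carry 10 votes each and class A shares carry 1, so the employee bloc outvotes everyone else once it holds just over 1/11 of the shares. A tiny illustrative calculation; the function name is made up, and the share-class mechanics are only as described in the conversation:

```python
# Employee (class B) shares carry 10 votes each; all other (class A) shares carry 1.
# The employee bloc wins once its ownership fraction x satisfies 10*x > 1 - x,
# i.e. x > 1/11, roughly 9.1% -- the figure Jason cites.
def employees_control(employee_share: float, multiplier: int = 10) -> bool:
    """True if employee votes exceed all other votes combined."""
    return employee_share * multiplier > (1.0 - employee_share)

threshold = 1 / (10 + 1)
print(round(threshold * 100, 1))   # 9.1
print(employees_control(0.091))    # True  (0.91 votes vs 0.909)
print(employees_control(0.090))    # False (0.90 votes vs 0.910)
```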
Starting point is 00:32:20 Yeah, that's a very exciting experiment. That's cool. But it originates from the platform governance. That's like why we're actually doing it. I think some of these other reasons are interesting, and I like them because I'm, like, generally into worker ownership. But I think the real point is, like, at the end of the day, someone has to have platform governance. Jason, you've expressed like this awe around the result of humans, you know, in four billion years of evolution.
Starting point is 00:32:42 We're incredibly energy efficient versus neural networks. Everybody knows that. You now have the very largest labs talking about spending literally a trillion dollars in compute over the next decade. If you think of that as maybe half energy, and then you have to make assumptions about energy prices. But now you're talking trillions of kilowatt hours. And maybe you're off by magnitude. But like, where's that money going to come from becomes a big question?
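A back-of-envelope version of the numbers Sarah is gesturing at here; the only figures taken from the conversation are the trillion dollars and the "maybe half energy" split, and the electricity price is an added assumption:

```python
# Rough illustration only: turn "a trillion dollars of compute, maybe half energy"
# into kilowatt-hours under an assumed average electricity price.
compute_spend_usd = 1e12      # ~$1T of compute spend over the next decade
energy_fraction = 0.5         # assume roughly half of that goes to energy
price_per_kwh = 0.10          # assumed average price, $/kWh (not from the transcript)

kwh = compute_spend_usd * energy_fraction / price_per_kwh
print(f"{kwh:.1e} kWh")       # ~5.0e+12, i.e. trillions of kilowatt-hours
```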
Starting point is 00:33:04 These things have to get more efficient. If you compare that to, like, humans, like maybe we spend 5,000 kilowatt hours before we learn to read, right? Do you think we get more biological inspiration in AI? Or do you have any point of view on the intersection of this from an architecture perspective? I don't have a great intuition Other than I think, you know, the other option is we just do giant brain in a vat And we throw that up against GPT4
Starting point is 00:33:27 Other than I think, you know, the other option is we just do a giant brain in a vat and we throw that up against GPT-4. You know, like, why are we just limiting ourselves to a brain that fits in our head? Why don't we grow a room-sized brain and just go straight biological? Yeah, have you thought about that, Elad? I mean, I think that what's cool about neural nets is that, like, brains basically allowed computer scientists to, like, escape their world
Starting point is 00:33:47 of, like, logic, like back to the beginning of our conversation. It was an excuse for them to basically build a piece of software that was going to, like, they weren't going to understand how it worked. There's probably a lot more things like that, because the community that builds software wants to understand it, because that's the kind of people that historically have been good at building software, right? So, like, open your minds, right? Like, I'm sure the neural net is not the best architecture, right? But like, you know, and I know people are working on it, but like really go in different directions, right? Like, do something crazier. You're raising a really key point, which I think was back to part of the conversation
Starting point is 00:34:17 around evolution as the driver for all sorts of optimizations that you don't expect. And if you are a rational person and you look at a biological system, right, there's the gene that can be coded, that you can produce RNA in either direction and it produces two different proteins and why would you ever do that? And one of them is actually duplication of this thing that got repositioned for catalysis for this other thing. And so it's really messy, weird systems that evolved. And I think the second you have self-replicating systems where you have code writing its own code, and you start going down that evolutionary path, you should have hyper-optimization for energetics and for all sorts of other things, because it's just going to be
Starting point is 00:34:55 part of the utility function that gets selected for by that system. And I think that's when you get out of the realm of, you know, hand-picked design and logic and laying it down, and just like an explosion of stuff, right? It's kind of like the Cambrian explosion happened for a reason. And that reason was evolution and resources, right? Yeah. There's a very good book, if you like to nerd out on this particular line of stuff, called The Plausibility of Life. It's awesome. And I personally think instead of learning from brains, I agree with you, you want to learn from evolution. Like, because evolution itself evolved things to become more evolvable, right? Like the example they give in the book is really cool. I'll just say it. It's like the skeletal system. So your skeletal system, like you can have a person
Starting point is 00:35:35 you've seen, like, that can have like a sixth finger. Like that happens sometimes, right? Have you noticed it's not just like a bone jutting out of their hand? It's like wrapped in skin and nerves. Well, that's because your skin and nerves are adaptive to bones, and that, as you can imagine, for exploring body plans, is a much more efficient way to explore the space of body plans. So if you jut out a bone, maybe it's going to be better, but it's definitely not going to be better if it's not wrapped in skin, okay? And so, like, however it happened, there was like this layering in evolution where, like, the system created skin and nerves to be adaptive, and the exploratory part of it is the bones. And again, I'll emphasize four billion years.
Starting point is 00:36:15 It's a long time, a lot of energy that has been spent. I know it was a random walk, but like evolution figured a lot of cool stuff out. And I think that is like totally untapped. For any of the scientists exploring alternative architectures out there, if you're like going to do any sort of crazy mixture-of-experts routing, like, wrap the skin around the bones is the advice we have from Jason. Thank you so much for doing this. This is really fun.
Starting point is 00:36:37 Yeah, super fun. Thanks, Sarah. Thanks, Elad. Find us on Twitter at @NoPriorsPod. Subscribe to our YouTube channel if you want to see our faces, follow the show on Apple Podcasts, Spotify, or wherever you listen. That way you get a new episode every week. And sign up for emails or find transcripts for every episode at no-priors.com.
