Dwarkesh Podcast - Sam Bankman-Fried - Crypto, Altruism, and Leadership
Episode Date: July 5, 2022

I flew to the Bahamas to interview Sam Bankman-Fried, the CEO of FTX! He talks about FTX's plan to infiltrate traditional finance, giving $100m this year to AI + pandemic risk, scaling slowly + hiring A-players, and much more.

Watch on YouTube. Listen on Apple Podcasts, Spotify, or any other podcast platform.

Episode website + Transcript here.

Timestamps

(00:18) - How inefficient is the world?
(01:11) - Choosing a career
(04:15) - The difficulty of being a founder
(06:21) - Is effective altruism too narrowminded?
(09:57) - Political giving
(12:55) - FTX Future Fund
(16:41) - Adverse selection in philanthropy
(18:06) - Correlation between different causes
(22:15) - Great founders do difficult things
(25:51) - Pitcher fatigue and the importance of focus
(28:30) - How SBF identifies talent
(31:09) - Why scaling too fast kills companies
(33:51) - The future of crypto
(35:46) - Risk, efficiency, and human discretion in derivatives
(41:00) - Jane Street vs FTX
(41:56) - Conflict of interest between broker and exchange
(42:59) - Bahamas and Charter Cities
(43:47) - SBF's RAM-skewed mind

Unfortunately, audio quality abruptly drops from 17:50-19:15.

Transcript

Dwarkesh Patel 0:09
Today on The Lunar Science Society Podcast, I have the pleasure of interviewing Sam Bankman-Fried, CEO of FTX. Thanks for coming on The Lunar Society.

Sam Bankman-Fried 0:17
Thanks for having me.

How inefficient is the world?

Dwarkesh Patel 0:18
Alright, first question. Does the consecutive success of FTX and Alameda suggest to you that the world has all kinds of low-hanging opportunities?
Or was that a property of the inefficiencies of crypto markets at one particular point in history?

Sam Bankman-Fried 0:31
I think it's more of the former; there are just a lot of inefficiencies.

Dwarkesh Patel 0:35
So then another part of the question is: if you had to restart earning to give again, what are the odds you become a billionaire, but you can't do it in crypto?

Sam Bankman-Fried 0:42
I think they're pretty decent. A lot of it depends on what I ended up choosing and how aggressive I end up deciding to be. There were a lot of safe and secure career paths before me that definitely would not have ended there. But if I dedicated myself to starting up some businesses, there would have been a pretty decent chance of it.

Choosing a career

Dwarkesh Patel 1:11
So that leads to the next question—which is that you've cited Will MacAskill's lunch with you while you were at MIT as being very important in deciding your career. He suggested you earn-to-give by going to a quant firm like Jane Street. In retrospect, given the success you've had as a founder, was that maybe bad advice? And maybe you should've been advised to start a startup or nonprofit?

Sam Bankman-Fried 1:31
I don't think it was literally the best possible advice, because this was in 2012. Starting a crypto exchange then would have been… I think it was definitely helpful advice. Relative to not having gotten advice at all, I think it helped quite a bit.

Dwarkesh Patel 1:50
Right. But then there's a broader question: are people like you who could become founders advised to take lower variance, lower risk careers that, in expected value, are less valuable?

Sam Bankman-Fried 2:02
Yeah, I think that's probably true. I think people are advised too strongly to go down safe career paths. But I think it's worth noting that there's a big difference between what makes sense altruistically and personally for this.
To the extent you're just thinking of personal criteria, that's going to argue heavily in favor of a safer career path, because you have much more quickly declining marginal utility of money than the world does. So this kind of path is specifically for altruistically-minded people.

The other thing is that when you think about advising people, I think people will often try and reference career advice that others got. "What were some of these outward-facing factors of success that you can see?" But often the answer has something to do with them and their family, friends, or something much more personal. When we talk with people about their careers, personal considerations and the advice of people close to them weigh very heavily on the decisions they end up making.

Dwarkesh Patel 3:17
I didn't realize that the personal considerations were as important in your case as the advice you got.

Sam Bankman-Fried 3:24
Oh, I don't think they were in my case. But it is true with many people that I talked to.

Dwarkesh Patel 3:29
Speaking of declining marginal consumption, I'm wondering if you think the implication of this is that over the long term, all the richest people in the world will be utilitarian philanthropists, because they don't have diminishing returns of consumption. They're risk-neutral.

Sam Bankman-Fried 3:40
I wouldn't say all will, but I think there probably is something in that direction. People who are looking at how they can help the world are going to end up being disproportionately represented amongst the most and maybe least successful.

The difficulty of being a founder

Dwarkesh Patel 3:54
Alright, let's talk about Effective Altruism. So in your interview with Tyler Cowen, you were asked, "What constrains the number of altruistically minded projects?" And you answered, "Probably someone who can start something."

Now, is this a property of the world in general? Or is this a property of EAs?
And if it's about EAs, then is there something about the movement that drives away people who could take leadership roles?

Sam Bankman-Fried 4:15
Oh, I think it's just the world in general. Even if you ignore altruistic projects and just look at profit-minded ones, we have lots of ideas for businesses that we think would probably do well, if they were run well, that we'd be excited to fund. And the missing ingredient quite frequently for them is the right person or team to take the lead on it. In general, starting something is brutal. It's brutal being a founder, and it requires a somewhat specific but extensive list of skills. Those things end up making founders highly in demand.

Dwarkesh Patel 4:56
What would it take to get more of those kinds of people to go into EA?

Sam Bankman-Fried 4:59
Part of it is probably just talking with them about, "Have you thought about what you can do for the world? Have you thought about how you can have an impact on the world? Have you thought about how you can maximize your impact on the world?" Many people would be excited about thinking critically and ambitiously about how they can help the world. So I think honestly, just engagement is one piece of this. And then even within people who are altruistically minded and thinking about what it would take for them to be founders, there are still things that you can do.

Some of this is about empowering people, and some of this is about normalizing the fact that when you start something, it might fail—and that's okay. Most startups, and especially very early-stage startups, should not be trying to maximize the chances of having at least a little bit of success. But that means you have to be okay with the personal fallout of failing, and we have to build a community that is okay with that. I don't think we have that right now; I think very few communities do.

Is effective altruism too narrowminded?

Dwarkesh Patel 6:21
Now, there are many good objections to utilitarianism, as you know.
You said yourself that we don't have a good account of infinite ethics—should we attribute substantial weight to the probability that utilitarianism is wrong? And how do you hedge for this moral uncertainty in your giving?

Sam Bankman-Fried 6:35
So I don't think it has a super large impact on my giving. Partially because you'd need to have a concrete proposal for what else you would do that would be different actions-wise—and I don't know that I've been compelled by many of those. I do think that there are a lot of things we don't understand right now. And one thing that you pointed to is infinite ethics. Another thing is that (I'm not sure this is moral uncertainty, this might be physical uncertainty) there are a lot of chains of reasoning people will go down that are somewhat contingent on our current understanding of the universe—which might not be right. And if you look at expected-value outcomes, those might not be right.

Say what you will about the size of the universe and what that implies, but some of the same people make arguments based on how big the universe is and also think the simulation hypothesis has decent probability. Very few people chain through, "What would that imply?" I don't think it's clear what any of this implies. If I had to say, "How have these considerations changed my thoughts on what to do?"

The honest answer is that they have changed it a little bit. And the direction that they pointed me in is things with moderately more robust impact. And what I mean by that is: one way that you can calculate the expected value of an action is, "Here's what's going to happen. Here are the two outcomes, and here are the probabilities of them." Another thing you can do is say—it's a little bit more hand-wavy—"How much better is this going to make the world? How much does it matter if the world is better in generic diffuse ways?" Typically, EA has been pretty skeptical of that second line of reasoning—and I think correctly.
When you see that deployed, it's often nonsense. Usually, when people are pretty hard to nail down on the specific reasoning of why they think that something might be good, it's because they haven't thought that hard about it or don't want to think that hard about it. The much better analyzed and vetted pathways are the ones we should be paying attention to.

That being said, I do think that sometimes EA gets too narrow-minded and specific about plotting out courses of impact. And this is one of the reasons why: people end up fixating on one particular understanding of the universe, of ethics, of how things are going to progress. But all of these things have some amount of uncertainty in them. And when you jostle them, some theories of impact behave somewhat robustly and some of them completely fall apart. I've become a bit more sympathetic to ones that are a little robust under thoughts about what the world ends up looking like.

Political giving

Dwarkesh Patel 9:57
In the May 2022 Oregon Congressional Election, you gave 12 million dollars to Carrick Flynn, whose campaign was ultimately unsuccessful. How have you updated your beliefs about the efficacy of political giving in the aftermath?

Sam Bankman-Fried 10:12
It was the first time that I gave on that scale in a race. And I did it because he was, of all the candidates in the cycle, the most outspoken on the need for more pandemic preparedness and prevention. He lost—such is life. In the end, there are some updates on the efficacy of various things. But I never thought that the odds were extremely high that he was going to win. It was always going to be an uncertain, close race. There's a limit to how much you can update from a one-time occurrence. If you thought the odds were 50-50, and it turns out to be close in one direction or another, there's a maximum of a factor-of-two update that you have on that.
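The factor-of-two cap follows from Bayes' rule: if your model said the race was 50-50, then the observed outcome had likelihood 0.5 under that model, so the Bayes factor against any rival hypothesis is at most 1/0.5 = 2. A quick sketch (the hypothesis names and the rival's probabilities are illustrative, not from the conversation):

```python
def posterior(prior, p_loss_if_true, p_loss_if_false):
    """Bayes' rule: P(hypothesis | candidate lost)."""
    num = prior * p_loss_if_true
    return num / (num + (1 - prior) * p_loss_if_false)

# Hypothesis: "this kind of giving works", which predicted a 50-50 race.
# Even against a rival hypothesis that predicted the loss with certainty,
# the posterior odds move by at most a factor of two:
prior = 0.5
post = posterior(prior, p_loss_if_true=0.5, p_loss_if_false=1.0)
# Prior odds were 1:1; posterior odds are 1:2, the largest shift possible
# when your own model assigned the outcome probability 0.5.
```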
There were a bunch of micro-updates on specific factors of the race, but on a high level, it didn't change my perspective on policy that much.

Dwarkesh Patel 11:23
But does it make you think there are diminishing or possibly negative marginal returns from one donor giving to a candidate? Because of the negative PR?

Sam Bankman-Fried 11:30
At some point, I think that's probably true.

Dwarkesh Patel 11:33
Continuing on the theme of politics, when is it more effective to give the marginal million dollars to a political campaign or institution to make some change at the government level (like putting in early detection)? And when is it more effective to fund it yourself?

Sam Bankman-Fried 11:47
It's a good question. It's not necessarily mutually exclusive. One thing worth looking at is the scale of the things that need to happen. How much are things like international cooperation important for it? When you look at pandemic prevention, we're talking tens of billions of dollars of scale necessary to start putting this infrastructure in place. So it's a pretty big-scale thing—which is hard to fund to that level individually. It's also something where we're going to need to have cooperation between different countries on, for example, what their surveillance for new pathogens looks like. And vaccine distribution: if some countries have a great distribution of vaccines and others don't, that's not good. It's both not fair and not equitable for the countries that get hit hardest. But also, in a global pandemic, it's going to spread. You need global coverage. That's another reason that government has to be involved, at least to some extent, in the efforts.

FTX Future Fund

Dwarkesh Patel 12:55
Let's talk about Future Fund. As you know, there are already many existing Effective Altruist organizations that do donations. What is the reason you thought there was more value in creating a new one? What's your edge?

Sam Bankman-Fried 13:06
There's value in having multiple organizations.
Every organization has its blind spots, and you can help cover those if you have a few. If OpenPhil didn't exist, maybe we would have created an organization that looks more like OpenPhil. They are covering a lot of what we're looking at—we're looking at overlapping, but not identical, things. I think having that diversity can be valuable. But pointing to the ways in which we intentionally designed ourselves to be a little bit different from existing donors:

One thing that I've been really happy about is the re-granting program. We have a number of people who are experts in various areas to whom we've basically donated pots that they can re-grant. What are the reasons that we think this is valuable? One thing is giving more stakeholders a chance to voice their opinions, because we can't possibly be listening to everyone in the world directly and integrating all those opinions to come up with a perfect set of answers. Distributing it and letting them act semi-autonomously can help with that. The other thing is that it helps with a large number of smaller grants. When you think about what an organization giving away $100 million in a year is thinking about: "If we divided that up into $25,000 grants, how many grants would that mean?" 4,000 grants to analyze, right? If we want to give real thought to each one of those, we can't do that.

But on the flip side, sometimes the smaller grants are the most impactful per dollar, and there are a lot of cases where someone really impressive has an exciting idea for a new foundation or a new organization that could do a lot of good for the world and needs $25,000 to get started. To rent out a small office, to be able to cover salaries for two employees for the first six months. Those are the kinds of cases where a pretty small grant can make a huge change in the development of what might ultimately become a really impactful organization.
But they're the kind of things that are really hard for our team to evaluate all of, just given the number of them—and the re-grantor program gives us a way to do that. Instead, we have 10, 50, or 100 re-grantors who are going out and finding a lot of those opportunities close to them; they can then identify those and direct those grants—and it gives us a much wider reach. It also biases it less towards people who we happen to know, which is good.

We don't want to just overfund everyone we know and underfund everyone that we don't. That's one initiative that I've been pretty excited about that we're going to keep doing. Another thing is that we've really tried to put a lot of emphasis on making the (application) process smooth and clean. There are pros and cons to this. But it drops the activation energy necessary for someone to decide to apply for a grant and fill out all of the forms. We've really tried to bring more people into the fold.

Adverse selection in philanthropy

Dwarkesh Patel 16:41
If you make it easy for people to fill out your application and generally fund things that other organizations wouldn't, how do you deal with the possibility of adverse selection in your philanthropic deal flow?

Sam Bankman-Fried 16:52
It's a really good question. The worry is that Bob down the street might see a great bookcase that he wants and wonder if he can get funding for this bookcase, as it's going to house a lot of knowledge. Knowledge is good, right? Obviously, we would detect that pretty quickly. The basic answer is that we still vet all of these. We do have oversight of them. We do a deep dive into all of the large ones, but also into samplings of the small ones. We do deep dives into randomly sampled subsets of them—which allows us to get a good statistical sense of whether we are facing significant adverse selection in them.
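The random-sampling audit described above can be sketched in a few lines: estimate the rate of problematic grants from a random subset and attach a confidence interval. This is a toy illustration; the sample size, the flagging function, and the 95% normal approximation are my assumptions, not the Future Fund's actual process:

```python
import math
import random

def audit_sample(grants, sample_size, is_problematic, seed=0):
    """Estimate the fraction of problematic grants from a random sample,
    with an approximate 95% confidence interval (normal approximation)."""
    sample = random.Random(seed).sample(grants, sample_size)
    flagged = sum(1 for g in sample if is_problematic(g))
    p_hat = flagged / sample_size
    half_width = 1.96 * math.sqrt(p_hat * (1 - p_hat) / sample_size)
    return p_hat, (max(0.0, p_hat - half_width), min(1.0, p_hat + half_width))
```

The point of the statistical framing is that a deep dive into a few hundred randomly chosen grants bounds the adverse-selection rate across all 4,000 fairly tightly, without reviewing every one.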
So far, we haven't seen obvious signs of it, but we're going to keep doing these analyses and see if anything worrying comes out of them. But that's a way to be able to have more trusted analyses for more scaled-up numbers of grants.

Correlation between different causes

Dwarkesh Patel 18:06
A long time ago, you wrote a blog post about how EA causes are multiplicative, instead of additive. Do you still find that's the case with most of the causes you care about? Or are there cases where some of the causes you care about are negatively multiplicative? An example might be economic growth and the speed at which AI takes off.

Sam Bankman-Fried 18:24
Yeah, I think it's getting more complicated. Specifically around AI, you have a lot of really complex factors that can point in the same direction or in opposite directions. Especially if what you think matters is something like the relative progress of AI safety research versus AI capabilities research, a lot of things are going to have the same impact on both of those, and thus a confusing impact on safety as a whole.

I do think it's more complicated now. It's not cleanly things just multiplying with each other. There are lots of cases where you see multiplicative behavior, but there are cases where you don't have that. The conclusion of this is: if you have multiplicative cases, you want to be funding each piece of it. But if you don't, then you want to try to identify the most impactful pieces and move those along. Our behavior should be different in those two scenarios.

Dwarkesh Patel 19:23
If you think of your philanthropy from a portfolio perspective, is correlation good or bad?

Sam Bankman-Fried 19:29
Expected value is expected value, right? Let's pretend that there is one person in Bangladesh and another one in Mexico. We have two interventions, both 50-50 on saving each of their lives. Suppose there's some new drug that we could release to combat a neglected disease.
This question is asking, "Are they correlated?" "Are these two drugs correlated in their efficacy?" And my basic argument is, "It doesn't matter, right?" If you think about it from each of their perspectives, the person in Mexico isn't saying, "I only want to be saved in the cases where the person in Bangladesh is or isn't saved." That's not relevant. They want to live.

The person in Bangladesh similarly wishes to live. You want to help both of them as much as you can. It's not super relevant whether there's alignment or anti-alignment between the cases where you get lucky and the ones where you don't.

Dwarkesh Patel 20:46
What's the most likely reason that Future Fund fails to live up to your expectations?

Sam Bankman-Fried 20:51
We get a little lame. We give to a lot of decent things, but all the cooler or more innovative things that we do don't seem to work very well. We end up giving the same way that everyone else is giving. We don't turn out to be effective at starting new things; we don't turn out to be effective at thinking of new causes or executing on them. Hopefully, we'll avoid that. But it's always a risk.

Dwarkesh Patel 21:21
Should I think of your charitable giving as a yearly contribution of a billion dollars? Or should I think of it as a $30 billion hedge against the possibility that there's going to be some existential risk that requires a large pool of liquid wealth?

Sam Bankman-Fried 21:36
It's a really good question; I'm not sure. We've given away about 100 million so far this year. We're going to keep doing that, because we think there are really important things to fund, and to start scaling up those systems. We notice opportunities as they come, and we have systems ready in place to give to them.
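The earlier point about correlated interventions is just linearity of expectation: E[A + B] = E[A] + E[B] whether or not A and B are correlated. A quick simulation of the two 50-50 interventions from the Bangladesh/Mexico example (the 50-50 probabilities come from the example; everything else here is illustrative):

```python
import random

def expected_lives_saved(correlated, p=0.5, trials=200_000, seed=0):
    """Average total lives saved across two 50-50 interventions,
    with outcomes either perfectly correlated or fully independent."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        if correlated:
            u = rng.random()
            saved_mexico = u < p
            saved_bangladesh = u < p      # same draw: perfectly correlated
        else:
            saved_mexico = rng.random() < p
            saved_bangladesh = rng.random() < p  # independent draws
        total += saved_mexico + saved_bangladesh
    return total / trials

# Both arrangements give an expected total of roughly 1.0 lives saved.
```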
But it's something we're really actively discussing internally—how concentrated versus diffuse we want that giving to be, and storing up for one very large opportunity versus a mixture of many.

Great founders do difficult things

Dwarkesh Patel 22:15
When you look at a proposal and think this project could be promising, but this is not the right person to lead it, what is the trait that's most often missing?

Sam Bankman-Fried 22:22
Super interesting. I am going to ignore the obvious answer, which is that the guy is not very good, and look at cases where it's someone pretty impressive but not the right fit for this. There are a few things. One of them is: how much are they going to want to deal with really messy s**t? This is a huge thing! When I was working at Jane Street, I had a great time there. One thing I didn't realize was valuable until I saw the alternative—if I decided that it is a good trade to buy one share of Apple stock on NASDAQ, there's a button to do that.

If you as a random citizen want to buy one share of Apple stock directly on an exchange, it'll cost you tens of millions of dollars a year to get set up. You have to get a physical colo(cation) in Secaucus, New Jersey, have market data agreements with these companies, think about the SIP and about the NBBO and whether you're even allowed to list on NASDAQ, and then build the technological infrastructure to do it. But all of that comes after you get a bank account.

Getting a bank account that's going to work in finance is really hard. I spent hundreds, if not thousands, of hours of my life trying to open bank accounts. One of the things at early Alameda that was really crucial to our ability to make money was having someone very senior spend hours per day in a physical bank branch, manually instructing wire transfers. If we didn't do that, we wouldn't have been able to do the trade.

When you start a company, there are enormous amounts of s**t that look like that.
Things that are dumb or annoying or broken or unfair, or not how the world should work. But that's how the world does work. The only way to be successful is to fight through that. If you're going to be like, "I'm the CEO, I don't do that stuff," then no one's going to do that at your company. It's not going to get done. You won't have a bank account and you won't be able to operate. One of the biggest traits that is incredibly important for a founder and for an early team at a company (but not important for everything in life) is a willingness to do a ton of grunt work if it's important for the company right then.

Viewing it not as "low prestige" or "too easy" for you, but as, "This is the important thing. This is a valuable thing to do. So it's what I'm going to do." That's one of the core traits. The other thing is asking: are they excited about this idea? Will they actually put their heart and soul into it? Or are they going to be not really into it and half-ass it? Those are two things that I really look for.

Pitcher fatigue and the importance of focus

Dwarkesh Patel 25:51
How have you used your insights about pitcher fatigue to allocate talent in your companies?

Sam Bankman-Fried 25:58
Haha. When it comes to pitchers in baseball, there's a lot of evidence that they get worse over the course of the game, partially because it's hard on the arm. But it's worth noting that the evidence seems to support the claim that it depends on the pitcher. In general, though, you're better off breaking up your outings. It's not just a function of how many innings they pitch that season, but also how many they've pitched extremely recently. If you could choose between someone throwing six innings every six days, or throwing three innings every three days, you should choose the latter. That's going to get the better pitching on average, and just as many innings out of them—and baseball has since moved very far in that direction.
The average number of pitches thrown by starting pitchers has gone down a lot over the last 5-10 years.

How do I use that in my company? There's a metaphor here, except this is with computer work instead of physical arm work. You don't have the same effect where your arm is getting sore, your muscles snap, and you need surgery if you pitch too hard for too long. That doesn't directly translate—but there's an equivalent of this with people getting tired and exhausted. On the other hand, context is a huge, huge piece of being effective. Having all the context in your mind of what's going on, what you're working on, and what the company is doing makes it easier to operate effectively. For instance, if you could have either two half-time employees or one full-time employee, you're way better off with one full-time employee, because they're going to have more context than either of the part-time employees would have—and thus be able to work way more efficiently.

In general, concentrated work is pretty valuable. If you keep breaking up your work, you're never going to do as great of work as if you truly dove into something.

How SBF identifies talent

Dwarkesh Patel 28:30
You've talked about how you weigh experience relatively little when you're deciding who to hire. But in a recent Twitter thread, you mentioned that being able to provide mentorship to all the people who you hire is one of the bottlenecks to you being able to scale. Is there a trade-off here where if you don't hire people for experience, you have to give them more mentorship and thus can't scale as fast?

Sam Bankman-Fried 28:51
It's a good question. To a surprising extent, we've found that the experience of the people that we hire has not had much correlation with how much mentorship they need. Much more important is how they think, how good they are at understanding new and different situations, and how hard they try to integrate how FTX works into their understanding of coding.
We have, by and large, found that other things are much better predictors of how much oversight and mentorship they're going to need.

Dwarkesh Patel 29:35
How do you assess that, short of hiring them for a month and then seeing how they did?

Sam Bankman-Fried 29:39
It's tough; I don't think we're perfect at it. But things that we look at are, "Do they understand quickly what the goal of a product is? How does that inform how they build it?" When you're looking at developers, we want people who can understand what FTX is, how it works, and thus what the right way to architect things would be for that, rather than treating it as an abstract engineering problem divorced from the ultimate product.

You can ask people, "Hey, here's a high-level customer experience or customer goal. How would you architect a system to create that?" That's one thing that we look for. Another is an eagerness to learn and adapt. It's not trivial to assess that, but you can do some amount of it by giving people novel scenarios and seeing how much they break versus how much they bend. That can be super valuable. We specifically search for developers who are willing to deal with messy scenarios rather than wanting a pristine world to work in. Our company is customer-facing and has to work with some third-party tooling. All those things mean that we have to interface with things that are messy—that's the way the world is.

Why scaling too fast kills companies

Dwarkesh Patel 31:09
Before you launched FTX, you gave detailed instructions to the existing exchanges about how to improve their systems, how to remove clawbacks, and so on. Looking back, they left billions of dollars of value on the table. Why didn't they just fix what you told them to fix?

Sam Bankman-Fried 31:22
My sense is that it's part of a larger phenomenon. One piece of this is that they didn't have a lot of market structure experts. They did not have the talent in-house to think really deeply about risk engines.
Also, there are cultural barriers between myself and some of them, which meant that they were less inclined than they otherwise would have been to take it very seriously. Ignoring those factors, there's something much bigger at play here. Many of these exchanges had hired a lot of people and gotten very large. You might think they were more capable of doing things with more horsepower. But in practice, most of the time that we see a company grow really fast, really quickly, and get really big in terms of people, it becomes an absolute mess.

Internally, there are huge diffusion-of-responsibility issues. No one's really taking charge. You can't figure out who's supposed to do what. In the end, nothing gets done. You actually start hitting negative marginal utility of employees pretty quickly. The more people you have, the less total you get done. That had happened to a number of them by the point when I sent them these proposals. Where did they go internally? Who knows. The Vice President of Exchange Risk Operations (but not the real one—the fake one operating under some department with an unclear goal and mission) had no idea what to do with it. Eventually, she passes it off to a random friend of hers who was the developer for the mobile app: "You're a computer person, is this right?" They likely said, "I don't know, I'm not a risk person," and that's how it died. I'm not saying that's literally what happened, but it sounds like something like that probably happened. It's not like they had people who took responsibility and thought, "Wow, this is scary. I should make sure that the best person in the company gets this," and passed it to the person who thinks about their risk modeling. I don't think that's what happened.

The future of crypto

Dwarkesh Patel 33:51
There are two ways of thinking about the impact of crypto on financial innovation. One is the crypto-maximalist view that crypto subsumes tradfi.
The other is that you're basically stress-testing some ideas in a volatile, fairly unregulated market that you're actually going to bring to tradfi, but this is not going to lead to some sort of decentralized utopia. Which of these models is more correct? Or is there a third model that you think is the correct one?

Sam Bankman-Fried 34:18
Who knows exactly what's going to happen? It's going to be path-dependent. If I had to guess, I would say that a lot of properties of what is happening in crypto today will make their way into tradfi to some extent. I think blockchain settlement has a lot of value and can clean up a lot of areas of traditional market structure. Composable applications are super valuable and are going to get more important over time. In some areas of this, it's not clear what's going to happen. When you think about how decentralized ecosystems and regulation intersect, it's a little TBD exactly where that ends up.

I don't want to state with extreme confidence exactly what will or won't happen. Stablecoins becoming an important settlement mechanism is pretty likely. Blockchains in general becoming a settlement and collateral clearing mechanism, and more assets getting tokenized, seem likely. There being programs written on blockchains that people can add to, and that can compose with each other, seems pretty likely to me. A lot of other areas of it could go either way.

Risk, efficiency, and human discretion in derivatives

Dwarkesh Patel 35:46
Let's talk about your proposal to the CFTC to replace Futures Commission Merchants with algorithmic real-time risk management. There's a worry that without human discretion, you have algorithms that will cause liquidation cascades when they were not necessary. Is there some role for human discretion in these kinds of situations?

Sam Bankman-Fried 36:06
There is! The way that traditional futures market structure works is you have a clearinghouse with a decent amount of manual discretion in it, connected to FCMs.
Some of those FCMs use human discretion, and some use automated risk-management algorithms with their clients; the smaller the client, the more automated it is. We're inverting that: at the center, you have an automated clearinghouse, and then you connect it to FCMs, which can use discretionary systems when managing their clients. The key difference is that, one way or another, the initial margin has to end up at the clearinghouse, a programmatic amount of it, and the clearinghouse acts in a clear, rules-based way. The goal is to prevent contagion between different intermediaries. Whatever credit decisions one intermediary makes with respect to its customers don't pose risk to other intermediaries, because someone has to post the collateral to the clearinghouse in the end, whether it's the FCM, their customer, or someone else. It gives clear rules of the road, keeps systemic risk from spreading through the system, and contains risk to the parties that choose to take it on: the FCMs that choose to make credit decisions. There is a potential role for manual judgment. Manual judgment can add a lot of economic value, but it can also be very risky when done poorly. In the current system, each FCM is exposed to all of the manual, bespoke decisions that every other FCM is making. That's a really scary place to be, and we've seen it blow up: with LME nickel contracts, and with a few very large traders who had positions at a number of different banks that ended up blowing out. So this provides a level of clarity, oversight, and transparency to the system, so people know what risk they are or are not taking on.

Dwarkesh Patel 38:29

Are you replacing that risk with another risk?
If there's one exchange that has the most liquidity in futures, and one exchange where you're posting all your collateral across all your positions, isn't the risk that the single algorithm that exchange uses will determine when and whether liquidation cascades happen?

Sam Bankman-Fried 38:47

It's already the case that if you put all of your collateral with a prime broker, whatever that prime broker decides (whether it's an algorithm, a human, or something in between) is what happens with all of your collateral. If you're not comfortable with that, you can choose to spread it out between different venues, or use one venue for some products and another venue for others. If you cross-collateralize and cross-margin your positions, you get capital efficiency from putting them in the same place, but the downside is that the risk of one can affect the other. There's a balance there, and I don't think it's a binary thing.

Dwarkesh Patel 39:28

Given the benefits of cross-margining, and the fact that less capital has to be locked up as collateral, is the long-run equilibrium that a single exchange wins? And if that's the case, then in the long run there won't be much competition in derivatives?

Sam Bankman-Fried 39:40

I don't think we're going to have a single exchange winning. Among other things, different exchanges will make different decisions, which will be better or worse for particular situations. One thing people have brought up is, "What about physical commodities, like corn or soy? What would your risk model say about that?" It's not super helpful for those commodities right now, because it doesn't know how to understand a warehouse. So you might want to use a different exchange with a more bespoke risk model that tries to understand, the way a human would, what physical positions someone has on. That would totally make sense.
That can cause a split between different exchanges. In addition, we've been talking about the clearinghouse here, but many exchanges can connect to the same clearinghouse. As a clearinghouse, we're already connected to a number of different DCMs and excited for that to grow. In general, there are going to be a lot of people who have different preferences over different details of the system and choose different products based on that. That's how it should work. People should be allowed to choose the option that makes the most sense for them.

Jane Street vs FTX

Dwarkesh Patel 41:00

What are the biggest differences in culture between Jane Street and FTX?

Sam Bankman-Fried 41:05

FTX has much more of a culture of morphing and trying out a lot of random new s**t. I don't want to say Jane Street is an ossified place or anything; it's somewhat nimble. But it is more of a culture of, "We're going to be very good at this particular thing on a timescale of a decade." There are some cases where that's true of FTX, because some things are clearly part of our core business for a decade. But there are other things that we knew nothing about a year ago and now have to get good at. There's been more adaptation, and it's also a much more public-facing and customer-facing business than Jane Street is, which means things like PR are much more central to what we're doing.

Conflict of interest between broker and exchange

Dwarkesh Patel 41:56

In crypto, you're combining the exchange and the broker, and they seem to have different incentives. The exchange wants to increase volume, and the broker wants to better manage risk, maybe with less leverage. Do you feel that in the long run these two can stay in the same entity, given the potential conflict of interest?

Sam Bankman-Fried 42:13

I think so. There's some extent to which they differ, but mostly they actually want the same thing, and harmonizing them can be really valuable.
One part is providing a great customer experience. When you have two different entities with two completely different businesses, and customers have to go from one to the other, you end up getting the least common denominator of the two: everything is supported only as well as whichever entity supports what you're doing most poorly. That makes things harder. Synchronizing them gives us more ability to provide a great experience.

Bahamas and Charter Cities

Dwarkesh Patel 42:59

How has living in the Bahamas impacted your opinion about the possibility of successful charter cities?

Sam Bankman-Fried 43:06

It's a good question. It's the first time I've been around something like this, and I've updated positively. We've built out a lot of things here that have been impactful, and it's made me feel like this is more doable than I previously would have thought. But it's a lot of work. It's a large-scale project if you want to build out a full city, and we haven't built out a full city. We built some specific pieces of infrastructure that we needed, and we've gotten a ton of support from the country. They've been very welcoming, and there are a lot of great things here. This is way less of a project than taking a giant, empty plot of land and creating a city on it. That's way harder.

SBF's RAM-skewed mind

Dwarkesh Patel 43:47

How has having a RAM-skewed mind influenced the culture of FTX and its growth?

Sam Bankman-Fried 43:52

On the upside, we've been pretty good at adapting: understanding what the important things are at any given time and training ourselves quickly to be good at them, even when they look very different from what we were doing before. That's allowed us to focus a lot on product, regulation, licensing, customer experience, branding, and a bunch of other things.
Hopefully, it means that we're able to take whatever situations come up and provide reasonable feedback and reasonable thoughts on what to do, rather than thinking rigidly in terms of how previous situations went. On the flip side, I need to have a lot of people around me who will remember the long-term important things that might get lost in the day-to-day. As we focus on things that pop up, it's important for me to periodically step back, clear my mind, and remember the big picture: what are the most important things for us to be focusing on?

Please share if you enjoyed this episode! Helps out a ton! Get full access to Dwarkesh Podcast at www.dwarkesh.com/subscribe
Transcript
Dwarkesh Patel
Today on the Lunar Society podcast, I have the pleasure of interviewing Sam Bankman-Fried, CEO of FTX. Thanks for coming on the Lunar Society.

Sam Bankman-Fried
Thanks for having me.

Dwarkesh Patel
All right, first question. Does the consecutive success of FTX and Alameda suggest to you that the world has all kinds of low-hanging opportunities, or was that a property of the inefficiencies of cryptocurrencies at one particular point in history?

Sam Bankman-Fried
I think it's probably more of the former. There are probably just a lot of inefficiencies.
Dwarkesh Patel
Another part of this question: if you had to restart earning to give again, what are the odds you'd become a billionaire, given that you couldn't do it in crypto?

Sam Bankman-Fried
They're pretty decent. A lot of it depends on what I'd end up choosing and how aggressive I'd end up deciding to be. There were a lot of pretty safe and secure career paths before me that definitely would not have ended there. But if I'd decided to really dedicate myself to starting up some businesses, there would have been a pretty decent chance of it.
Dwarkesh Patel
That leads to the next question: you've cited Will MacAskill's lunch with you while you were at MIT as being very important in deciding your career. He suggested that you earn to give by going to a quant firm like Jane Street. In retrospect, given the success you've had as a founder, was that maybe bad advice? Should he have advised you to start a startup or a nonprofit?

Sam Bankman-Fried
I don't think it was literally the best possible advice, in that this was 2012 or so, and thinking about starting a crypto exchange then would have been something else. But I think it was definitely helpful advice, and relative to not having gotten advice at all, it probably helped quite a bit.
Dwarkesh Patel
Right. But then there's a broader question: are people like you, who could become founders, advised to take lower-variance, lower-risk careers that are less valuable in expected value?

Sam Bankman-Fried
Yeah, I think that's probably true. People are probably advised too strongly to go down safe career paths. But it's worth noting, first of all, that there's a big difference between what makes sense altruistically and what makes sense personally here. To the extent you're just thinking of personal criteria, that's going to argue heavily in favor of a safer career path, because your marginal utility of money declines much more quickly than the world's does. So this applies specifically to altruistically minded people. The other thing is that when you think about what is actually advising people to choose the safer route, people will often look to the career advice someone got, the outward-facing factors you can see. But often the answer has something to do with them and their family, or them and their friends, something much more personal. When we talk with people about what they're thinking of doing with their career, personal considerations and the advice of people close to them weigh really, really heavily on the decisions they end up making.

Dwarkesh Patel
I didn't realize that personal considerations were as important in your case as the advice you got.

Sam Bankman-Fried
Oh, I don't think they were in my case. But in the case of many, many people I talk to, they are.
Dwarkesh Patel
Speaking of declining marginal utility of consumption: do you think the implication is that, over the long term, all the richest people in the world will be utilitarian philanthropists, because they don't have diminishing returns from consumption and are therefore risk-neutral?

Sam Bankman-Fried
I wouldn't say all will, but there's probably something in that direction: people who are looking at how they can help the world are going to end up disproportionately represented among the most, and maybe the least, successful.
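The risk-neutrality point can be made concrete with a toy calculation (the dollar amounts are hypothetical, chosen only for illustration): under log utility, which models personal consumption with sharply diminishing returns, a double-or-bust bet looks bad, while in plain expected dollars, the altruistic frame where money keeps its marginal value, the same bet looks good.

```python
import math

def expected_utility(outcomes, utility):
    """Expected utility over (probability, wealth) outcomes."""
    return sum(p * utility(w) for p, w in outcomes)

# Hypothetical choice: a safe $1M career, or a founder bet that is
# 50% to reach $10M and 50% to end near zero ($10k).
bet = [(0.5, 10_000_000), (0.5, 10_000)]
safe = [(1.0, 1_000_000)]

linear = lambda w: w           # risk-neutral: every dollar counts equally
log_u = lambda w: math.log(w)  # personal consumption: diminishing returns

# Risk-neutral (altruistic) view: the bet beats the safe path.
assert expected_utility(bet, linear) > expected_utility(safe, linear)
# Log-utility (personal) view: the same bet loses to the safe path.
assert expected_utility(bet, log_u) < expected_utility(safe, log_u)
```

The same gamble flips sign depending only on the utility curve, which is why risk-neutral givers end up over-represented at both tails.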
Dwarkesh Patel
All right, let's talk about effective altruism. In your interview with Tyler Cowen, you were asked what constrains the number of altruistically minded projects, and you answered: probably someone who can start something. Is this a property of the world in general, or a property of EAs? And if it's about EAs, is there something about the movement that drives away people who could take leadership roles?

Sam Bankman-Fried
I think it's just the world in general. Even if you ignore altruistic projects and just look at profit-minded ones, we have lots of ideas for businesses that we think would probably do pretty well if they were run quite well, and that we'd be excited to fund. The missing ingredient, quite frequently, is the right person or team to take the lead. In general, it's kind of brutal starting something; it's brutal being a founder. It requires a somewhat specific but extensive list of skills, and those things end up being fairly highly in demand.
Dwarkesh Patel
What would it take to get more of those kinds of people to go into EA?

Sam Bankman-Fried
Part of it is probably just talking with them: have you thought about what you can do for the world? Have you thought about how you can have an impact on the world? Have you thought about how you can maximize that impact? Going down that path, I think a lot of people would be amenable, and a lot would be excited about thinking critically and ambitiously about how they can help the world. So honestly, just engagement is one piece of this. Another thing: even among people who are altruistically minded, there are still things you can do to make them more excited to be founders, or better at it. Some of this is about empowering people, and some of it is about normalizing the fact that when you start something, it might fail, and that's okay. That's how most startups go, especially most very early-stage startups (this changes over time, obviously). When you look at early-stage companies, you shouldn't be running them, you shouldn't be trying to build them, to maximize the chances of having at least a little bit of success. But what that means is that you have to be okay with the personal fallout of failing, and we have to build a community that is okay with that. I don't think we have that right now. Very, very few communities do.
Dwarkesh Patel
Now, there are many good objections to utilitarianism. As you've said yourself, we don't have a good account of infinite ethics. Should we attribute substantial weight to the probability that utilitarianism is wrong, and how do you hedge for this moral uncertainty in your giving?

Sam Bankman-Fried
I don't think it has a super large impact on my giving, partially because for it to, you'd have to have a concrete proposal for what else you would do and what different actions that would imply, and I don't know that I've been compelled by many of those. I do think, though, that there are a lot of things we don't understand right now. One, as you pointed out, is infinite ethics. Another, and I'm not sure this is quite moral uncertainty (it might be physical uncertainty more than anything else), is that there are a lot of chains of reasoning people go down that are somewhat contingent on our current understanding of the universe in a way which might not be right, and which, certainly if you look at expected-value outcomes, might not be right. Say what you will about the size of the universe and what it implies, but some of the same people who make arguments based on "here's how big the universe is" also think the simulation hypothesis has decent probability. Yet very few people chain through what that would imply. I don't think it's clear what any of this implies. In the end, if I had to say how these considerations have changed my thoughts on what to do, the honest answer is that they've changed it a little bit, and the direction they've pointed me in is things with moderately more robust impact. What I mean by that: there's one way you can calculate the expected value of an action which is pretty specific. Here's what's going to happen, here are the two outcomes, here are their probabilities. There's another thing you can do, which is more hand-wavy: how much better is it going to make the world? How much does it matter if the world is better in generic, diffuse ways? Typically, EA has been pretty skeptical of that second line of reasoning, and I think correctly, because usually when you see it deployed, it's nonsense. Usually, when people are hard to nail down on the specific reason they think something might be good, it's because they haven't thought that hard about it, or don't want to, and the much better analyzed and vetted pathways are the ones you should be paying more attention to. That being said, I do think EA sometimes gets too narrow-minded and specific about plotting out courses of impact. This is one of the reasons people end up fixating on one particular understanding of the universe, of ethics, of how things are going to progress. All of these things have some amount of uncertainty in them, and when you jostle them, some theories of impact and some models behave robustly, and some of them completely fall apart. So I've become a little more sympathetic to the ones that are somewhat robust to how the world ends up looking.

Dwarkesh Patel
In the May 2022 Oregon congressional election, you gave $12 million to Carrick Flynn, whose campaign was ultimately unsuccessful. How have you updated your beliefs about the efficacy of political giving in the aftermath?
Sam Bankman-Fried
It was the first time I'd given at that scale in a race. I did it because, of all the candidates in the cycle, he was the most outspoken on the need for more pandemic preparedness and prevention. He lost, obviously; such is life. In the end there are some updates, lots of miniature updates on the efficacy of various things. But I never thought the odds were extremely high that he was going to win; it was always going to be an uncertain, close race. And there's a limit to how much you can update from a one-time occurrence. If you thought the odds were 50-50 and it turns out being close in one direction or the other, there's a maximum of maybe a factor-of-two update you can make from that. So there were a bunch of micro-updates on specific factors of the race, but on a high level, I don't think it changed my perspective on policy that much.
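The factor-of-two cap he mentions is just Bayes' rule: an outcome to which you already assigned roughly 50% probability can at most about double the posterior odds of the hypotheses that predicted it. A quick sketch, with entirely hypothetical win probabilities:

```python
def posterior(prior, p_win_given_h, p_win_given_not_h, won):
    """Bayesian update of P(hypothesis) after observing a win or a loss."""
    p_obs_h = p_win_given_h if won else 1 - p_win_given_h
    p_obs_not = p_win_given_not_h if won else 1 - p_win_given_not_h
    num = prior * p_obs_h
    return num / (num + (1 - prior) * p_obs_not)

# Hypothesis: "this kind of political giving works." Suppose it would imply a
# 60% chance of winning the race, versus 40% if it doesn't work.
prior = 0.5
after_loss = posterior(prior, 0.6, 0.4, won=False)
# One close loss moves the estimate only modestly, nowhere near ruling it out.
print(round(after_loss, 3))  # 0.4
```

A single 50-50-ish race can shift a belief from 0.5 to 0.4, not from 0.5 to 0.05, which is the "limit to how much you can update" in the answer above.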
Dwarkesh Patel
But does it make you think there are diminishing, or possibly negative, marginal returns to one donor giving to a candidate, because of the negative PR it creates?

Sam Bankman-Fried
At some point, yeah, I think that's probably true.
Dwarkesh Patel
Continuing on the theme of politics: when is it more effective to give the marginal million dollars to a political campaign or institution to make some change at the government level, like putting early detection in place, and when is it more effective to just fund it yourself?

Sam Bankman-Fried
It's a good question, and part of the answer is that these aren't necessarily mutually exclusive. But one thing to look at is the scale of what needs to happen, and how important things like international cooperation are for it. When you look at pandemic prevention, we're talking tens of billions of dollars of scale necessary to start putting this infrastructure in place. That's a pretty big thing, which is hard to fund to that level individually. It's also something where we're going to need cooperation between countries, on what their surveillance for new pathogens looks like and on vaccine distribution. If some countries have great distribution of vaccines and others don't, that's not good. It's not fair or equitable to the countries that end up getting hit hardest, and in a global pandemic, it's going to spread, so you need global coverage. That's another reason government likely has to be involved, at least to some extent, in the efforts.

Dwarkesh Patel
Let's talk about the Future Fund. As you know, there are already many existing effective altruist organizations that make donations. Why did you think there was more value in creating a new one? What's your edge?
Sam Bankman-Fried
Part of it is that I just think there's value in having multiple organizations. Every organization is going to have its blind spots, and you can help cover those if you have a few. If Open Philanthropy didn't exist, maybe we would have created an organization that looks more like Open Philanthropy. But to some extent they already cover a lot of what they're looking at; we're looking at overlapping, but not identical, things. So I think having that diversity can be valuable. As for ways we intentionally designed it to be a little different from existing donors: one thing I've been really happy about has been the regranting program. We have a number of people who are experts in various areas, and we've basically given them pots of money that they can re-grant. Why do we think that's valuable? One reason is that it gives more stakeholders a chance to voice their opinions. We can't possibly listen to everyone in the world directly and integrate all of those opinions into the perfect set of answers, so distributing the money and letting regranters act semi-autonomously can help with that. The other reason is that it really helps with large numbers of smaller grants. Think about what an organization giving away $100 million a year faces if it divides that into $25,000 grants: that's 4,000 grants, which is a lot of grants to analyze. If we wanted to give real thought to each one, we couldn't. But on the flip side, sometimes the smaller grants are the most impactful per dollar. There are a lot of cases where someone really impressive has a really exciting idea for a new organization that could do a lot of good for the world and needs $25,000 to get it started: to rent a small office, to cover salaries for two employees for the first six months. Those are the kinds of cases where a pretty small grant can make a huge change in the development of what might ultimately become a really impactful organization, but they're hard for our team to evaluate in full, just given the number of them. The regranting program gives us a way to do that: instead, we have 10, 50, 100, maybe eventually more regranters going out and finding those opportunities close to them, identifying them, and directing those grants. It gives us a much wider reach, and it also biases the giving less toward people we happen to know, which is good. We don't want to overfund everyone we happen to know and underfund everyone we don't. So that's one initiative we've had that I've been pretty excited about.
And I think we're going to keep doing it. Another thing is that we've really put a lot of emphasis on making the process smooth and clean. There are pros and cons to this, but it drops the activation energy necessary for someone to decide to apply for a grant, fill out all the forms, and so on. So we've really tried to bring more people into the fold as potential recipients.

Dwarkesh Patel
If you make it easy for people to fill out your application, and you're generally finding things that maybe other organizations wouldn't, how do you deal with the possibility of adverse selection in your philanthropic deal flow?

Sam Bankman-Fried
It's a really good question. And of course, that's a worry: Bob down the street might see a great bookcase that he wants and think, "I wonder if I can get funding for this bookcase. It's going to house a lot of knowledge. Knowledge is good, right?" Obviously, that one we would detect pretty quickly. The basic answer is that we still have oversight of all of these grants. We do some oversight of all of them, but we also do really deep dives into all of the large ones and into randomly sampled subsets of the small ones, which lets us get a pretty good statistical sense of whether we're facing significant adverse selection. So far we haven't seen obvious signs of it, but we're going to keep doing these analyses and see whether anything worrying comes out of them. That's a way to have more trusted analyses for more scaled-up numbers of grants.
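The sampling-based oversight described above amounts to estimating a defect rate from a random audit. A minimal sketch, with hypothetical numbers (a 4,000-grant portfolio in which 2% are adversely selected):

```python
import math
import random

def audit(grants, sample_size, seed=0):
    """Deep-dive a random sample of grants; return the observed bad-grant rate
    and a rough 95% upper bound on the true rate (normal approximation)."""
    rng = random.Random(seed)
    sample = rng.sample(grants, sample_size)
    bad = sum(1 for g in sample if g["bad"])
    rate = bad / sample_size
    upper = rate + 1.96 * math.sqrt(rate * (1 - rate) / sample_size + 1e-12)
    return rate, upper

# Hypothetical portfolio: every 50th grant is a "bookcase" (2% bad overall).
grants = [{"id": i, "bad": i % 50 == 0} for i in range(4000)]
rate, upper = audit(grants, sample_size=400)
# Auditing 10% of grants already pins the bad rate down fairly tightly.
assert 0.0 <= rate <= 0.06 and upper < 0.1
```

The design choice is the one in the answer: you cannot deep-dive 4,000 grants, but a random 400 gives a statistically sound read on whether adverse selection is creeping in.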
Dwarkesh Patel
A long time ago, you wrote a blog post about how EA causes are multiplicative instead of additive, and we talked about that a little while ago. Do you still find that that's the case with most of the causes you care about, or are there cases where some of them are negatively multiplicative? An example might be economic growth and the speed at which AI is developed.

Sam Bankman-Fried
I think it's getting more complicated. Specifically around AI, you have a lot of really complex factors that point sometimes in the same direction and sometimes in opposite directions. Especially if what you think matters is something like the relative progress of AI safety research versus AI capabilities research, a lot of things are going to have the same impact on both of those, and thus a confusing impact on safety as a whole. So I do think it's more complicated now; it's not cleanly things multiplying with each other. There are lots of cases where you see multiplicative behavior, but also cases where you just don't. The conclusion is: if you do have multiplicative cases, you probably want to be funding each piece; if you don't, you probably want to identify the most impactful pieces and specifically move those along. Our behavior should be different in those two scenarios.

Dwarkesh Patel
If you think of your philanthropy from a portfolio perspective, is correlation good or bad?

Sam Bankman-Fried
The expected value is the expected value, right? Here's one way to think about it. Pretend there's one person in Bangladesh and another in Mexico, and we have two interventions, each 50-50 on saving one of their lives: some new drug we could help release to combat a neglected disease. Now ask: are the two drugs correlated in their efficacy? My basic argument is that it doesn't matter. Think about it from each of their perspectives. The person in Mexico isn't saying, "I only want to be saved in the cases where the person in Bangladesh is, or isn't, saved." That's not relevant. They're saying, "I would like to live," and the person in Bangladesh similarly says, "I would like to live." You want to help both of them as much as you can, and it's not super relevant whether there's alignment or anti-alignment between the cases where you get lucky and the ones where you don't.
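The argument is linearity of expectation: E[X + Y] = E[X] + E[Y] regardless of how X and Y are correlated. A small simulation of the two hypothetical 50-50 interventions makes this concrete:

```python
import random

def simulate(correlated, trials=100_000, seed=0):
    """Average total lives saved by two 50-50 interventions."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        a = rng.random() < 0.5  # intervention for the person in Bangladesh works
        # Mexico: perfectly correlated (same flip) or fully independent.
        b = a if correlated else rng.random() < 0.5
        total += int(a) + int(b)
    return total / trials

# Expected lives saved is ~1.0 either way; correlation changes the spread of
# outcomes (0-or-2 versus a mix of 0, 1, 2), not the expected value.
assert abs(simulate(correlated=True) - 1.0) < 0.02
assert abs(simulate(correlated=False) - 1.0) < 0.02
```

Correlation matters to a risk-averse portfolio because it widens the variance; for a risk-neutral giver maximizing expected lives saved, the two cases are equivalent.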
Dwarkesh Patel
What's the most likely way the Future Fund fails to live up to your expectations?

Sam Bankman-Fried
I think we just get a little lame. We give to a lot of decent things, but all the cooler, more innovative things we try just don't seem to work very well, and we end up giving to the same places everyone else is giving. We don't turn out to be effective at starting new things, or at thinking of new causes and executing on them. Hopefully we'll avoid that, but it's always a risk.
Dwarkesh Patel
Should I think of your charitable giving as a yearly contribution of a billion dollars, more or less, or as a $30 billion hedge against the possibility of some existential crisis that requires a large pool of liquid wealth?

Sam Bankman-Fried
That's a really good question. I'm not sure. We've already started giving: we've given away about $100 million so far this year. We're doing that partially because we think there are really important things to fund, and partially because we want to start scaling up those systems and processes so that we're ready, so that when we notice opportunities, we have systems in place to give to them. But it's something we're actively discussing internally: how concentrated versus diffuse we want that giving to be, and how much we want to store up for one very large opportunity versus spread across many.
When you look at a proposal and you think this project could be promising, but this is not the right person to lead it.
What is the trade that's most often missing?
Super interesting. There is, I'm getting sort of like ignore the obvious answers that which are like the guys just not very good.
Which sure, fine. And maybe look at cases where it's someone who like is pretty impressive,
but like I still think is not the right fit for this.
I think there are a few things.
I think one of them is how much are they going to want to deal with really messy shit?
This is a huge thing.
To give an example: Jane Street, where I used to work, is a really great place. I had a great time there. One thing I didn't even realize was valuable there, until I saw what things could look like outside, was that if I decided it was a good trade to buy one share of Apple stock on NASDAQ, there's a button to do that, right? If you as a random citizen want to buy one share of Apple stock directly on an exchange, it'll cost you tens of millions of dollars and a year to get set up to be able to do that. You've got to get a physical colo, maybe in Secaucus, New Jersey. You have to have market data agreements with these companies. You have to think about the SIP and the NBBO and whether you're even allowed to lift on NASDAQ right then. You have to build technological infrastructure to do it.
But all of that comes after you get a bank account. Let's not even talk about that other stuff. Getting a bank account that's going to work in finance is really hard. I've probably spent hundreds, if not thousands, of hours of my life trying to open bank accounts.
One of the things at early Alameda that was really crucial to our ability to make money was having someone very senior spend hours per day in a physical bank branch, manually instructing wire transfers. If we didn't do that, we wouldn't have been able to do the trade. And when you start a company, there's an enormous amount of shit that looks like that: things that are dumb or annoying or broken or unfair, or not how the world should work, but it is how the world does work. And the only way to be successful is to fight through that. If you're going to be like, "Whatever, I'm the CEO, I don't do that stuff," then no one's going to do that at your company. It's not going to get done. You won't have a bank account, and you won't be able to operate.
So one of the biggest traits that I think is incredibly important for a founder, and for an early team at a company, but that is not necessarily important for everything you might want to do in life, is being willing to do a ton of grunt work if that's what's important for the company right then, and viewing it not as low prestige or too easy for you or something like that, but as: whatever, this is the important thing, this is the valuable thing to do, so it's what I'm going to do. That's one of the core traits. And the other one is: are they excited about this idea? Will they actually put their heart and soul into it? Or are they going to be a little bit drifting and bored, not really into it, and half-ass it? Those are the two things that I really look for.

Pitcher fatigue and the importance of focus

Dwarkesh Patel
How would you use your insights about pitcher fatigue to allocate talent in your companies?

Sam Bankman-Fried
So, pitcher fatigue: I haven't thought about this in a while.
But my thesis back then, which I still think is probably true, is that when it comes to pitchers in baseball, there's a lot of evidence that they get worse over the course of the game.
Just the more innings they pitch, they get worse and worse and worse.
Partially it's just like it's hard on the arm.
But it's worth noting that the evidence seems to support the claim (it depends on the pitcher, but in general) that you're better off breaking up their outings: that it's not just a function of how many innings they've pitched that season, but also how many they've pitched extremely recently.
And so if you could choose between someone throwing six innings every six days
or throwing three innings every three days,
probably you should choose the latter.
Probably that's going to get the better pitching on average and just as many innings out of them.
And for what it's worth, baseball has actually moved very far in that direction since then: the average number of pitches thrown by starting pitchers is down a lot over the last five to ten years.
How do I use that in my company?
There's a metaphor here, but I actually think I've gone the opposite direction, if anything.
And here's what my sense has been: with computer work, as opposed to physical work with your arm, you don't have the same effect where your arm gets sore and eventually your muscle snaps and you need surgery if you pitch too hard for too long. That doesn't directly translate.
There's a little bit of an equivalent of this of people getting tired, right, and exhausted.
But on the other hand, context is a huge, huge piece of being effective.
Having all the context in your mind of what's going on, of what you're working on,
what the company's doing makes it way easier to operate effectively.
And if you could, for instance, have two half-time employees or one full-time employee,
you're way better off with one full-time employee
because they're going to have way more context
than either of the part-time employees would have
and thus be able to work way more efficiently.
And so in general, I think our experience has actually been that concentrated work is pretty valuable, and that if you keep breaking up your work (it depends on the person and the context, but in general), you're never going to be able to do work as great as if you really dove into something.
How SBF identifies talent

Dwarkesh Patel
You've talked about how you weigh experience relatively little when you're deciding who to hire. But in a recent Twitter thread, you mentioned that being able to provide mentorship to all the people who come on is one of the bottlenecks to your scaling. Is there a tradeoff here, where if you don't hire people for experience, you have to give them more mentorship and thus can't scale as fast?
Sam Bankman-Fried
It's a good question. But to a surprising extent, we've found that the experience of the people we hire has not much correlation with how much mentorship they need. Much more important is how they think, how good they are at understanding new and different situations, and how hard they try to integrate into their understanding of, let's say, coding, or their understanding of how FTX works. So by and large, we've found that other things are much better predictors of how much oversight and management and mentorship someone is going to need than their experience at similar-looking roles.
Dwarkesh Patel
And how do you assess that, short of hiring someone for a month and then seeing how they did?
Sam Bankman-Fried
It's tough. I don't think we're perfect at it. But things that we look at: do they understand quickly what the goal of a product is, and how does that inform how they build it? When you're looking at developers, we really strongly want people who can understand what FTX is, how it works, and thus what the right way to architect things would be for that, rather than treating it as an abstract engineering problem divorced from whatever the ultimate product is. That's something you can try to ask people: here's a high-level customer experience or customer goal, how would you architect a system to create that? So that's one thing we look for. Another is just an eagerness to learn and to adapt. It's not trivial to test for that, but you can do some amount of it: you can give people novel scenarios and see how much they break versus how much they bend. I think that can be super valuable as well. And we also specifically search for developers who are willing to deal with messy scenarios, rather than wanting a pristine world to work in, because our company is customer-facing, has to interface with third-party tooling, and has been a quickly growing company. All of those things mean that we have to interface with things that are messy in the way the world is.
Why scaling too fast kills companies

Dwarkesh Patel
Now, before you launched FTX, you gave detailed instructions to the existing exchanges about how to improve their systems, how to remove clawbacks, and so on. Looking back, they left billions of dollars of value on the table. Why do you think that was? Why didn't they just fix what you told them to fix?

Sam Bankman-Fried
Yeah, it's a really interesting question.
My sense is that it's part of a larger phenomenon. What's the right way to put it? One piece of this is just that they didn't have a lot of market structure experts. They just did not have the talent in-house to think really well and deeply about risk engines. And there were also cultural barriers between myself and some of them, which probably meant that they were less inclined than they otherwise would have been to take it very seriously. But ignoring those factors, I think there's something much bigger at play, where many of these exchanges had hired a lot of people. They'd gotten very large. You might think that meant they were more able to do things, because they had more horsepower. But in practice, most of the time we see a company grow really fast, really quickly, and get really big in terms of number of people, it becomes an absolute mess internally. There are huge diffusion-of-responsibility issues. No one's really taking charge. You can't figure out who's supposed to do what. And in the end, nothing gets done. You actually start hitting negative marginal utility of employees pretty quickly, where the more people you have, the less total you get done. I think that happened to a number of them, to the point where, yeah, I sent them these proposals. Where did they go internally? Who knows? Maybe the vice president of exchange risk operations, but not the real one, the sort of fake one operating under some department with an unclear goal and mission, who had no idea what to do with it and eventually just passed it off to a random friend of hers who was a developer for the mobile app: "You're a computer person. Is this right?" And she's like, "I have no idea. I'm not a risk person." And that's how it died.
And I'm not saying that literally happened, but something kind of like that probably did. It's not like they had people who took responsibility, who saw this and thought, "Wow, this is scary. I should make sure the best person in the company gets this," and passed it to the CTO and the person who thinks about risk modeling and asked, "Hey, is this thing scary?" and they looked at it and said, "Wow, this might be a problem." I don't think that's what happened.
The future of crypto

Dwarkesh Patel
Now, there are two ways of thinking about the impact of crypto on financial innovation. One is the crypto-maximalist view, that crypto subsumes tradfi. The other is that what you're basically doing is stress-testing some ideas in a volatile, fairly unregulated market, ideas that you're actually going to bring to tradfi, but this is not going to lead to some sort of decentralized utopia. Which of these models is more correct? Or is there a third model that you think is the correct way to think about this?
Sam Bankman-Fried
So first of all, who knows? Who knows exactly what's going to happen? It's going to be path-dependent. But if I had to guess, I would say a lot of properties of what is happening in crypto today will probably make their way into tradfi to some extent. I think blockchain settlement has a lot of value and can clean up a lot of areas of traditional market structure. And I think composable applications are super valuable and are going to get more important over time. There are some areas of this where it's not clear what's going to happen, and when you think about how decentralized ecosystems and regulation intersect, it's a little bit TBD exactly where that ends up. So I don't want to state with extreme confidence exactly what will or won't happen, but some pieces of this seem pretty likely to me. Stablecoins becoming an important settlement mechanism is pretty likely. Blockchains in general becoming a settlement mechanism and collateral clearing mechanism seems decently likely to me. More and more assets getting tokenized seems decently likely to me. Programs written on blockchains that people can add to, that can compose with each other, seem pretty likely to me. And a lot of other areas of it, I think, could go either way.
Risk, efficiency, and human discretion in derivatives

Dwarkesh Patel
Let's talk about your proposal to the CFTC to replace futures commission merchants with algorithmic, real-time risk management. There's a worry that without human discretion, you'll have algorithms that cause liquidation cascades when they weren't necessary. Is there some role for human discretion in these kinds of situations?
Sam Bankman-Fried
There is, and the way I think about it is this. The way traditional futures market structure works is that you have a clearinghouse with a decent amount of manual discretion in it, connected to FCMs, some of which use human discretion and some of which use automated risk management algorithms with their clients. And generally, the smaller the client, the more automated it is. We are inverting that to some extent: at the center you have an automated clearinghouse, potentially connected to FCMs, which could potentially use discretionary systems when managing their clients. The key difference here is that one way or another, a programmatic amount of initial margin has to end up at the clearinghouse, and the clearinghouse acts in a clear way. The goal of this is, first of all, to prevent contagion between different intermediaries, so that whatever credit decisions one intermediary makes with respect to its customers don't pose risk to other intermediaries, because someone has to post the collateral at the clearinghouse in the end, whether it's the FCM, their customer, or someone else. So it gives clear rules of the road, keeps systemic risk from spreading throughout the system, and contains risk to the parties that choose to take that risk on: it's the FCMs that choose to make credit decisions there. So I think there is a potential role for manual judgment. Manual judgment can be really valuable and add a lot of economic value, but it can also be very risky when done poorly. In the current system, each FCM is exposed to all of the manual, bespoke decisions that every other FCM is making, and that's a really scary place to be in. We've seen it blow up. We saw it blow up with LME nickel contracts, and we saw it blow up in other cases, with a few very large traders who had positions on at a number of different banks and ended up blowing out. So I think this provides a level of clarity and oversight and transparency to the system, so that people know what risk they are or are not taking on.
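As a rough illustration of what a programmatic, non-discretionary margin check of this kind might look like, here is a minimal Python sketch. It is not FTX's actual risk engine; the 5% maintenance fraction, the prices, and all the names are invented for the example.

```python
from dataclasses import dataclass

@dataclass
class Position:
    size: float         # signed contract quantity (+ long, - short)
    entry_price: float  # price at which the position was opened

def account_equity(collateral: float, pos: Position, mark_price: float) -> float:
    """Collateral plus unrealized PnL, marked to the current price."""
    return collateral + pos.size * (mark_price - pos.entry_price)

def needs_liquidation(collateral: float, pos: Position, mark_price: float,
                      maintenance_margin: float = 0.05) -> bool:
    """Flag liquidation when equity falls below a fixed fraction of notional.
    The point of the proposal is that a rule like this is applied continuously
    and uniformly at the clearinghouse, rather than as bespoke FCM judgment."""
    notional = abs(pos.size) * mark_price
    return account_equity(collateral, pos, mark_price) < maintenance_margin * notional

pos = Position(size=10, entry_price=100.0)  # long 10 contracts at $100
print(needs_liquidation(collateral=80.0, pos=pos, mark_price=100.0))  # False: equity 80 >= 50
print(needs_liquidation(collateral=80.0, pos=pos, mark_price=96.0))   # True: equity 40 < 48
```

Because the rule is purely a function of posted collateral and mark price, every participant can compute in advance exactly where they stand, which is the "clear rules of the road" point above.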
Dwarkesh Patel
Are you replacing that risk with another risk? If there's one exchange that has the most liquidity in futures, and one exchange where you're posting all your collateral across all your positions, then the risk is that the single algorithm that exchange is using will determine when and if liquidation cascades happen.
Sam Bankman-Fried
It's already the case that if you put all of your collateral with a prime broker, then potentially whatever that prime broker decides, whether it's an algorithm or a human or something in between, is going to determine what happens with all of your collateral. And if you're not comfortable with that, you can choose to spread it out between different venues. You can choose to use one venue for some products and another venue for other products, if you don't want to cross-collateralize and cross-margin your positions. You generally get capital efficiency from cross-margining them, from putting them in the same place, but the downside is that the risk of one can affect the other. There's a balance there, and I don't think it's a binary thing.
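The capital-efficiency side of that balance can be made concrete with a toy calculation (all numbers invented for illustration): margining two offsetting legs at separate venues ties up collateral on both, while a cross-margined venue only margins the net exposure, which is exactly why collateral tends to concentrate in one place.

```python
# Toy cross-margining arithmetic with invented numbers.
# A mostly hedged book: long $100k notional of one future, short $80k of a
# correlated one. Assume a flat 10%-of-notional margin requirement.
margin_rate = 0.10
long_notional = 100_000
short_notional = 80_000

# Siloed venues: each leg is margined on its own, ignoring the offset.
siloed_margin = margin_rate * (long_notional + short_notional)

# Cross-margined venue: only the net exposure is margined.
# (A real system would still charge something for basis risk between the legs.)
cross_margin = margin_rate * abs(long_notional - short_notional)

print(siloed_margin)  # 18000.0
print(cross_margin)   # 2000.0
```

The freed-up collateral is the benefit; the cost, as noted above, is that losses on one leg now directly draw down the collateral backing the other.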
Dwarkesh Patel
Okay. But given the benefits of cross-margining, and the fact that less capital has to be locked up as collateral, is the long-run equilibrium that a single exchange will win? And if that's the case, then in the long run, won't there be much less competition in derivatives?

Sam Bankman-Fried
I don't think so. I mean, you could already have seen that happening, and you haven't, and I don't think we're going to have a single exchange winning. Among other things, there are going to be different decisions made by different exchanges, which will be better or worse for particular situations. One thing that people have brought up is: well, what about physical commodities, like corn or soy? What would your risk model say about that? And the answer is that it's not super helpful for those commodities right now, because it doesn't know how to understand a warehouse. And so you might want to use a different exchange, one with a more bespoke risk model that tried to have a human understand what physical positions someone had on. I think that would totally make sense, and that can cause a sort of split between different exchanges.
In addition, we've been talking about the clearinghouse here, but many exchanges can connect to the same clearinghouse. We're already, as a clearinghouse, connected to a number of different DCMs, and I'm excited for that to continue to grow. In general, there are going to be a lot of people who have different preferences over different details of the system and who choose different products based on that. I think that's how it should work: people should be allowed to choose the option that makes the most sense for them.
Jane Street vs FTX

Dwarkesh Patel
What are the biggest differences in culture between Jane Street and FTX?
Sam Bankman-Fried
I think FTX has much more of a culture of morphing and taking on a lot of random new shit. Jane Street is still somewhat nimble (I don't want to say it's an ossified place or anything), but it is more of a culture of: we're going to be very good at this particular thing on a timescale of a decade. There are some cases where that's true of FTX, because some things are clearly part of our core business for a decade. But there are other things that we knew nothing about a year ago and all of a sudden have to get good at. So I think there's been more adaptation. It's also a much more public-facing and customer-facing business than Jane Street is, which means that there are lots of things, like PR, that are much more central to what we're doing.
Conflict of interest between broker and exchange

Dwarkesh Patel
Now, in crypto, you're combining the exchange and the broker. They seem to have different incentives: the exchange wants to increase volume, and the broker wants to better manage risk, maybe with less leverage. Do you feel that in the long run, these two can stay in the same entity, given the conflict of interest, or potential conflict of interest?
Sam Bankman-Fried
I think so. There's some extent to which they differ, but there are, I think, more senses in which they actually want the same thing, and harmonizing them can be really valuable. One is to provide a great customer experience. When you have two different entities with two completely different businesses, but every order has to go from one to the other, you're going to end up getting the least common denominator of the two as a customer: everything is going to be supported as poorly as whichever of the two entities supports what you're doing most poorly. And that makes it harder. Whereas by synchronizing them, it gives us much more ability to provide a great experience.
Bahamas and Charter Cities

Dwarkesh Patel
How has living in the Bahamas impacted your opinion about the possibility of successful charter cities?
Sam Bankman-Fried
It's a good question. I think it's updated me positively a little bit. We've built out a lot of things here, and that's been hopefully impactful, and it's made me feel like it is more doable than I previously would have thought. But it's also a lot of work. It's a large-scale project if you want to actually do it, and we have not built out a full city. We've built out some specific pieces of infrastructure that we needed. We've gotten a ton of support from the country, and they've been very welcoming, and there are a lot of great things here. So this is way less of a project than taking a giant empty plot of land and creating a city on it. That's way harder.
SBF's RAM-skewed mind

Dwarkesh Patel
How has having a RAM-skewed mind influenced the culture of FTX and its growth?
Sam Bankman-Fried
It's a good question. I think what it means on the upside is that we've been pretty good at adapting, and pretty good at understanding what the important things are at any given time and training ourselves quickly to be good at those, even if they look very different from what we were doing before. I think that's allowed us to focus a lot on product, on regulation and licensing, on customer experience, on branding, and a bunch of other things. And hopefully it means that we're able to take whatever situations come up and provide reasonable feedback about them, and reasonable thoughts on what to do, rather than thinking more rigidly in terms of how previous situations went. On the flip side, I think it means that I have to have a lot of people around me who will try to remember what the long-term important things are that might get lost day to day as we focus on things that pop up. And it's important for me to take time periodically to step back, clear my mind a little bit, and just think: all right, let's try to remember what the big picture is here. What are the most important things for us to be focusing on?
