Theories of Everything with Curt Jaimungal - Emily Riehl Makes Infinity Categories Elementary

Episode Date: April 6, 2026

Emily Riehl, one of the world’s leading category theorists, shares her vision for making infinity category theory something undergrads can actually learn. In this talk, she breaks down how rethinking the foundations of math could change the way it’s taught and understood, and why it might redefine what math even is.

I subscribe to The Economist for their science and tech coverage. As a TOE listener, get 35% off! No other podcast has this: https://economist.com/TOE

Join My New Substack (Personal Writings): https://curtjaimungal.substack.com
Listen on Spotify: https://tinyurl.com/SpotifyTOE
Become a YouTube Member (Early Access Videos): https://www.youtube.com/channel/UCdWIQh9DGG6uhJk8eyIFl1w/join

Links:
•⁠ ⁠Emily’s profile: https://emilyriehl.github.io/
•⁠ ⁠Emily’s presentation: https://emilyriehl.github.io/files/undergraduates-TOE.pdf
•⁠ ⁠A Type Theory For Synthetic ∞-Categories (paper): https://arxiv.org/pdf/1705.07442
•⁠ ⁠Could ∞-Category Theory Be Taught To Undergraduates? (paper): https://arxiv.org/pdf/2302.07855
•⁠ ⁠RZK proof assistant: https://rzk-lang.github.io/rzk/en/latest/
•⁠ ⁠Lean Zulip chat: https://leanprover.zulipchat.com/#recentnews

Timestamps:
00:00 A Dream for the Future
01:55 Exploring Infinity Categories
03:54 The Role of Category Theory
10:17 Key Concepts of Category Theory
12:01 The Curry-Howard Correspondence
15:37 Understanding Left Adjoint Functors
24:38 The Yoneda Lemma Explained
38:29 Proving the Isomorphism
41:50 The Importance of Abstraction
44:04 A Crash Course in Category Theory
44:17 Introduction to Infinity Category Theory
56:27 Fundamental Infinity Groupoids
1:03:34 What Are Infinity Categories?
1:09:12 The Case for Infinity Categories
1:18:12 Transitioning to Homotopy Type Theory
1:22:49 Crash Course in Homotopy Type Theory
1:30:56 Type Constructors Explained
1:34:19 Propositions as Types
1:42:50 Understanding Dependent Types
1:49:01 Identity Types and Their Importance
1:54:03 The Structure of Infinity Groupoids
1:59:33 Hierarchies of Types
2:06:19 The Univalence Axiom
2:08:36 Transitioning to Infinity Category Theory
2:10:04 Simplicial Type Theory Overview
2:14:56 Pre-Infinity Categories Defined
2:24:32 Isomorphisms in Infinity Categories
2:31:48 Computer Formalization in Mathematics
2:40:02 Conclusion and Future Directions

Support TOE on Patreon: https://patreon.com/curtjaimungal
Twitter: https://twitter.com/TOEwithCurt
Discord Invite: https://discord.com/invite/kBcnfNVwqs

#science

Learn more about your ad choices. Visit megaphone.fm/adchoices

Transcript
Starting point is 00:00:00 I remember the doubt before launching this podcast. What if no one listens? What if I'm wasting my time? If you've ever felt that way about starting a business, Shopify is the partner that turns uncertainty into momentum. They power millions of businesses and 10% of all U.S. e-commerce, from Allbirds to Gymshark to brands just getting started. No straggler left behind.
Starting point is 00:00:22 Shopify's AI tool writes your product descriptions for you. It enhances your photography. It builds you a stunning store from hundreds of templates. Forget about the haze of bouncing between separate platforms. Shopify puts inventory, payments, and analytics under one roof with the propriety of a true commerce expert. Their award-winning 24/7 support means you're never alone. And that iconic purple Shop Pay button, it's the backbone of their checkout, the best converting on the planet, turning abandoned carts into actual sales. It's time to turn those what-ifs,
Starting point is 00:01:00 into momentum with Shopify today. Sign up for your $1 per month trial at Shopify.com slash toe. That's Shopify.com slash T-O-E. This talk was kind of a dream. I'll sort of state my dream for the future. If the foundations of mathematics had some sort of higher structure with something like homotopy type theory, then we could teach infinity category theory to undergraduates,
Starting point is 00:01:28 much like we teach something like abstract al-a-oh. to undergraduates today. Today's talk is being given for the first time, so you're in for a huge treat. Professor Emily Rio, an award-winning mathematician and professor at John Hopkins University, is a proficient explicator and one of the world's most renowned category theorists. The question explored today is, how would you teach infinity categories? So a topic that's notoriously thorny, even for experts in adjacent fields of math, to undergraduates. In exploring this, we cover not only an overview of regular category theory, so don't worry,
Starting point is 00:02:05 but also how homotopy type theory provides a constitutional new language that could fundamentally transform how mathematics is understood, taught, and applied. We also cover the Curry-Howard correspondence, identity types, the Univalous Axiom, and what the heck infinity groupboids are. My name is Kurt Jymongel, and on this channel I investigate theories of everything, primarily from a theoretical physics background, as that's my training from the University of Toronto, but also from a mathematics and philosophy perspective. Today I'm excited and honored that Emily Real has perfected this talk, revamping it from
Starting point is 00:02:38 previous lectures on infinity categories for undergraduates, so you're seeing the latest version of it being delivered here for the first time on theories of everything. If you have questions for the professor, make note of them, and feel free to subscribe, because we'll be doing a one-on-one podcast at some point in the future in a more Q&A format. Professor, welcome to theories of everything. I've been wanting to have you on for years and the audience as well, so it's a pleasure to finally be speaking with you on air. Thank you.
Starting point is 00:03:04 It's a real honor to be here. So please give an overview as to what you're going to talk about today. Great. So I'm going to give a bit of a speculative talk about a dream that I have for the future, which is essentially that my specific research area, infinite dimensional category theory will be easier to explain and in fact so easy that it's routinely taught to undergraduates around the world just as part of their standard mathematics education. All right. Well, let's get to it.
Starting point is 00:03:38 Great. So before I get into infinite dimensional category theory, I want to back up and explain sort of what category theory is, what it's for, the role it plays within mathematics. I'm a mathematician, so I'm going to focus on. applications to mathematics rather than to the real world, if you'll forgive me that. Great. And I thought I would start by telling a story about Galois. So, you know, a very old problem in mathematics was to characterize solutions to polynomial equations.
Starting point is 00:04:13 So the sort of equation you might be thinking of here is something like aX squared plus BX plus C equals zero, where A, B and C are. integers. And we're looking for numbers X that satisfy the equation AX squared plus BX plus C is equal to zero. And you might have learned at some point the quadratic formula, which gives a formula for the roots X when they exist. And once the quadratic equations were solved, it's natural to ask about more general equations with higher degree polynomials, cubic and quartic and quintic, and that problem proved to be a lot harder, and it was eventually determined that in complete generality, it's not always possible to give a formula for the
Starting point is 00:05:04 solutions to an equation given by polynomials with integer coefficients that involved only certain operations, so taking nth roots or using addition, subtraction, multiplication, and division. And sort of the definitive solution to this problem was worked out by Everest Galois in 1832, sort of shortly before, or maybe in the few years preceding his untimely death in a duel. But what's interesting about this is, even though the problem was understood to be interesting right away, the solution was not immediately understood. He submitted a paper for a publication in 1830, which was rejected. And his manuscript, script remained unpublished until 1846 when Louisville added some explanations and made sure that it got
Starting point is 00:05:52 published. You know, Louisville was one of the great mathematicians of the day, but even he sort of missed the point of Galois's solution to this problem, something that is now called Galois Theory, which has a key observation that the solutions to a polynomial equation define a group. or there's a sort of group that describes the sort of permutations or the of the solutions to polynomial equation. So the best mathematicians, and I don't mean to pick on Louisville here. I mean, it seemed like basically nobody understood the point of Gawa theory until, you know, a few decades later. So, you know, the best mathematicians of the day, you know, couldn't understand the ideas here that are now regularly taught to undergraduates.
Starting point is 00:06:41 all around the world. And I think this story illustrates the moral, which sort of the starting point for this talk is that the reason that we are now able to teach Galois theory as part of a course in abstract algebra to hundreds of thousands of students is new abstractions, even though they're sort of initially unfamiliar, take you sort of further away from real world experience with sort of numbers and shapes, they can make complicated mathematics easier to understand. Hi, Kurt here. If you're enjoying this conversation,
Starting point is 00:07:19 please take a second to like and to share this video with someone who may appreciate it. It actually makes a difference in getting these ideas out there. Subscribe, of course. Thank you. So specifically, so this is really how we're going to get into category theory. So what is category theory for? So I want to start by mentioning several results from different areas of mathematics.
Starting point is 00:07:46 So the first result says that if I have a function between sets, so F is a function from A to B, then I can think about the inverse image function. That's something that operates on the power sets or the set of all subsets of B and A. So the inverse image takes a subset of B to a subset of A. And this operation preserves both unions and intersections. So if I have two subsets of B and I take their union and then I take the inverse image, it's the same as first taking the inverse images and then taking the union. And similarly for intersections. So that's a fact about inverse images. Well, there's a corresponding fact about the direct image function, which goes from subsets of A to subsets of B. This is the one that just sort of applies the function to each element in the set and gets a new subset. And this operation preserves unions if I take two sets in A and I take their union and then I take their image. It's the same as taking the image and then taking the union, but for intersections, that no longer works. So this is a kind of fact from set theory that you may or may not have encountered.
Starting point is 00:08:48 And then a fact from linear algebra that you may or may not have encountered is if I have vector spaces, I'll call them U, V and W. You know, maybe this is like R, N, R, M, RK, you know, real Euclidean vector spaces or abstract vector spaces. Then I could take the tensor product of you with the direct sum of V and W, and that's a vector space. And it's linearly isomorphic, meaning it's the same vector space as the one that I get by taking the tensor product of U and V and the tensor product of U and W, and then taking the direct sum. So that's a linear isomorphism of vector spaces. In particular, they have the same dimension. There's also a sort of natural correspondence between basis elements. So this is a fact you might have encountered. And I'll mention one more.
Starting point is 00:09:34 So if you've taken a course on abstract algebra, you might learn of a construction called the free group on a set. And if I take two sets, X and Y, and I form their disjoint union, so keeping the element separate, and then I take the free group, that turns out to be the same free group as you would construct as something called the free product operation of the free group on X and the free group on Y. So again, that's an isomorphism between groups that are constructed in two different ways. So these are, you know, three different results from three different areas of mathematics. And I could, I could, you know, if we had more time, I could add more results to this list, results from topology, from other things, from analysis. But the reason I want to highlight them is all of these results have the same proof and are true for the same reason. This is sort of what category theory does to mathematics. Sometimes I've described it as taking a bird's eye view on mathematics. It introduces a much. more abstract language that will help us isolate the sort of the core content, the core structure that's in common between these disparate results and allows us to give one unified proof that explains everything. So it's sort of abstraction, it's generalization. To some point of view, it's, I think, a clarification by sort of distilling sort of away from the specifics.
Starting point is 00:11:02 to, you know, sort of general patterns. And the result in this case is something called left adjoint functors preserve co-products. This is a special case of a theorem that those category theorists will know as left adjoint functors preserve co-limates. But what I want to do now is we'll firstly look at one of these proofs, and then I will abstract it away by introducing some of the categorical language that allows us to prove not just that one theorem, but all of the theorems in general. Wonderful.
Starting point is 00:11:35 So at this point, or perhaps this is going to be addressed in a subsequent slide, is it important for the audience to know what a left adjoint is and a co-product? Yeah, I'm going to give those definitions. That's all to come. Great. Okay, so first, let's just start by giving a proof of this theorem. And this proof will be in the style of category theory. So this is the same proof that I will give again using more abstract language. But let's just focus on the case of the vector spaces, UV and W.
Starting point is 00:12:00 So what I'm explaining here is that the vector space constructed by taking the tensor product of U with the direct sum of B and W is the same as the vector space if I first take the tensor product of U and B and U and W and then take the direct sum. Okay, so why is this true? And this is something that works not just for finite dimensional vector spaces, but for infinite dimensional ones as well. So you might have an argument in mind for finite dimensional vector spaces, but we need a better argument. And the starting point is really the fundamental theorem of category theory. it's something called the innate dilemma. And what it tells us in this case is that to construct a linear isomorphism between two vector spaces, what it suffices instead is to think about linear maps out of each of the vector
Starting point is 00:12:43 spaces individually and into another vector space X. So on the left-hand side, we're thinking about linear maps from U-tensor v plus W to X. And on the right-hand side, we're thinking about linear maps from U-tensor v direct some U-tensor-W to X. So we have two different collections of linear maps. And if I can show that these are in a one-to-one correspondence, so that there's an bijection between the set on the left and the set on the right, that is moreover natural. And natural is a technical term in category theory, but it also sort of means what it means intuitively. So I won't give a definition of this, but essentially it means that you're not making arbitrary choices. It feels like you're just giving a direct construction. This thing would work for any X if you had swap out a different vector space, why it would be the same construction. That's sort of what natural means. And anyway, if we had such a thing, then we actually get an isomorphism between U-tensor v-derexum-W and U-tensor v-direct some U-tensor-W. And why is that? It's if we take as a special case, X being the left-hand vector space, so U-tensor v-derex sum-W, then on the left-hand side I have
Starting point is 00:13:52 the identity map from U-tensor v-direct sub-W to itself. And when I, pass it through this correspondence, now I get a linear map from U-tensor v direct-U-tensor-W to U-tensor-B direct-w, so to have an isomorphism, I need linear maps in both direction. And similarly, I could start with the right-hand vector space and take the identity function and pass it the other way through this natural correspondence. And these two functions turn out to be inverses, so they give an explicit linear isomorphism. So this is a sort of very subtle idea I think this is the first kind of really profound insight of category theory. It's something called the Yaneda Lama, which we'll see again in a more abstract eyes.
Starting point is 00:14:36 But using that, now I can give a direct construction. So I've reduced the problem of giving an explicit linear isomorphism between two vector spaces to showing that they have the same sets of linear maps out of them to an arbitrary vector space X. And what I'm displaying here is a sort of series of reasonings that show that some collection of linear maps corresponds bijectively to some other collection of linear maps corresponds bijectively to some other collection of linear maps. So if we start with a linear map from the tensor product, so U-Tensor v-direct some W to X, then there's an operation that you might know of as curing if you've got a background in computer science that says. that this is the same as having a linear map from the direct sum v, direct sum, W, to the vector space of linear maps from U to X. So there's an construction called the vector space of linear maps. And if I map out of a tensor product, it's the same as mapping into the vector space
Starting point is 00:15:41 of linear maps. But now I have a linear map out of a direct sum v, direct sum W. And a defining property of the direct sum is that if I want to define a linear map out of a direct sum, it suffices to do it in sort of two cases. On the one hand, I can think about linear maps from V to the vector space of linear maps from U to X, or separately think about linear maps from U to X, and if I pair those two maps together, I would get a linear map out of the direct sum. So direct sum linear maps decompose in this way and pair back together in this way. And so now I have two different linear maps into a vector space of linear maps from U to X, and by uncurring, those provide two linear maps, one from U direct sum v into X,
Starting point is 00:16:30 and the other from U directsum W into X. And then I can pair those back together and get a linear map out of the direct sum, from U-tensor v, direct sum U-tensor W into X. And by composing all these byjections, I get a natural bijection from the first set of linear maps to the final set of linear maps, and that by the innate a lemma gives us the isomorphism. So this might not have been the proof that you were expecting, but this is a sort of very categorical way of approaching a problem of constructing an explicit isomorphism like this. Okay, so let's press on. So where's the category theory here? So firstly, category is a technical term in category theory, and I'm going to isolate all the abstract core of the proof that we just gave. And the first thing I want to explain is that category theory is actually a technical term in category theory. So in the argument we were just discussing, we were thinking about vector spaces and linear maps between those, and those define an example of a category. So what is a
Starting point is 00:17:32 category? It's something that's given by a collection of objects. In this case, these were the vector spaces, UV and W. And then a collection of arrows. These are the linear maps. So an arrow always has a source and target, so we might have a linear map F that goes from U to V, a linear map G, that goes from v to w. And then each object needs to have a specified identity arrow. So the vector space V has a special linear map from V to V. That's called the identity. This is the sort of linear function that sends each vector to itself.
Starting point is 00:18:07 And then each composable pair of arrows, that's a linear map from U to V and a linear map from V to W, will have a specified composite arrow. So one of the important properties of linear maps is that they compose. If you compose the functions, that's again a linear map. This will give us a linear map from U to W. And then there's some axioms about the composition and identity. So composition is associative, meaning if you compose three linear maps, that's sort of well-defined and unital, meaning if you compose with an identity, it's as if you didn't compose anything at all. And that
Starting point is 00:18:39 defines a category. So that's sort of the general setting within which this argument took place. and the statement referred to a general categorical notion called isomorphism. So two objects in a category. So v and W, for instance, in the category of vector spaces, are called isomorphic if there exist arrows in each direction, so an arrow from v to W and an arrow from W to V, so that their composites are equal to the identities. So when I first do F and then do G, that's the identity on V, and then. And when I first do G and then do F, that's the identity on W.
Starting point is 00:19:20 So that's the notion of linear isomorphism that I was referring to there. I have linear maps in both directions between some vector spaces, and when I compose the linear maps in either direction, I get the identity linear map. Would this statement that G of F equals the identity on V be different than the statement G of F is isomorphic to the identity on B? As you know, on theories of everything, we delve into some of the most reality spiraling concepts from theoretical physics and consciousness to AI and emerging technologies, to stay informed
Starting point is 00:19:55 in an ever-evolving landscape, I see The Economist as a wellspring of insightful analysis and in-depth reporting on the various topics we explore here and beyond. The Economist's commitment to rigorous journalism means you get a clear picture of the world's most significant developments, whether it's in scientific innovation or the shifting tectonic plates of global politics, the economist provides comprehensive coverage that goes beyond the headlines. What sets the economists apart is their ability to make complex issues accessible and engaging, much like we strive to do in this podcast. If you're passionate about expanding your knowledge and gaining a deeper understanding of the forces that shape our world, then I highly recommend
Starting point is 00:20:39 subscribing to the economist. It's an investment into intellectual growth, one that you won't regret. As a listener of Toe, you get a special 20% off discount. Now you can enjoy The Economist and all it has to offer for less. Head over to their website, www.com slash Toe, T-O-E to get started. Thanks for tuning in. And now back to our explorations of the mysteries of the universe. Would this statement that G of F equals the identity on V be different than the statement G of F is isomorphic to the identity on V? Great question. So the sort of category that we're discussing right now,
Starting point is 00:21:22 later I will refer to as a one category, meaning a one-dimensional category. So a one-dimensional category has objects, which you can think of as being kind of like points. Those sort of have dimension zero. And the way you depict an arrow, you think of that as being sort of a one-dimensional transformation. So that's sort of what the one refers to. There's one dimensions worth of arrows. And the way the
Starting point is 00:21:48 definition of a one-dimensional category works is parallel arrows. So if I had two different arrows going from V to W are either equal or they're not equal. There's sort of nothing in between. So when we're talking about linear maps from V to W, either they're the same linear map or they're different linear maps, there's sort of no more subtle relationship that you could have between linear maps other than equal or not equal, the sort of binary of being equal or unequal. So that, yeah, so equal means equal. There's no way to weaken it to something that's less than that. Got it.
Starting point is 00:22:30 Now, if this were a higher dimensional category, that's exactly the possibility that we would have. So there are other mathematical settings. Vector spaces are not them, but there are other mathematical settings that will introduce later on, where it would make sense to relax this equality and replace it with something else. Okay. Yeah. So that's a great preview, but right now equal is just equal. And actually, I guess I should mention in higher categorical context,
Starting point is 00:23:02 you might have heard the term equivalence used instead of isomorphism. that's actually the difference between an equivalence and an isomorphism. Equivalence typically refers to a setting where we have two objects and we have arrows in each direction. And the composites are not equal to identities, but maybe isomorphic or equivalent to identities as determined by the sort of higher categorical setting. So, great. So the second ingredient in the proof I just gave is something called the Uneda-Lama, and I'll discuss it in general. So if I have objects A and B in a category. They are isomorphic if and only if. For all objects X, so all other objects in the category, the set of arrows, and here I'm going to introduce notation for the set of arrows.
Starting point is 00:23:51 I'll write HOM A-X for the set of arrows in whatever ambient category from A to X, and HOMBX for the set of arrows from B to X. HOMB, but is short for homomorphism, which is what the arrows are in many sort of categories of interest to algebra. So the statement says that objects A and B are isomorphic in the sense we just defined, if and only if for all other objects, these sets of aeros are isomorphic, you know, as sets. In other words, there's a bijection. And moreover, these isomorphisms are natural in the sense of commuting with composition with any arrow from X to Y. So I haven't written down what naturality means, but I'll sort of briefly describe it. What you should think of as a natural isomorphism really gives a family of isomorphisms.
Starting point is 00:24:41 So for each object X, we have an explicit bijection between arrows from A to X and arrows from B to X. So in particular, for a different object Y, I would have another bijection for arrows from A to Y and arrows from B to Y. These byjections are in particular function. So it's a process of taking an arrow from A to X and converting it into an arrow from B to X. And you might ask what happens if you first employ this process and then complete. with an arrow H from X to Y, or first compose with the arrow H from X to Y, and then employ this natural bijection. And the naturality condition says that those two operations have to agree, so that commutes. And this is made precise in the language of a natural transformation,
Starting point is 00:25:28 which I won't describe here. But it's a key ingredient in the proof of the innate alama. So I'm going to give a part of the proof and stop before the last step, which I'll leave as an exercise. And the reason is the innate dilemma, this is one of these things that you have to see this several times before you actually understand it. So if this is your nth time seeing this, then you can perhaps complete the exercise and feel like you've really internalized this result. If this is your first time seeing it, then what I'm encouraging you to do is go away afterwards, look up the definition of natural and try to complete the proof. Okay. So the statement here is,
Starting point is 00:26:04 actually if and only if. If my objects are isomorphic, then I get this natural bijection. If I have this natural bijection, then I have an isomorphism between the objects and the category. We're only going to use one direction of this result, so I'm only going to discuss one direction of the proof. So the goal is to construct an isomorphism between A and B. What we have is this natural bijection, how do I get an isomorphism from A to B? So what I'm assuming is that I have this natural bijection between arrows from A to X and arrows from B to X for any X. And in particular, I can consider two special cases where X is A and X is equal to B. And so now I have a bijection between the arrows from A to A and the arrows from B to A. And then I have a separate bijection between the arrows from
Starting point is 00:26:50 A to B and the arrows from B to B. And in the set of arrows from A to A, I have a special arrow called the identity arrow. And similarly, in the set of arrows from B to B, I have a special arrow called the identity arrow. And when I pass those arrows through these bijections, now I get some arrow from B to A and also some arrow from A to B. And so this is how I'm producing the data of the isomorphism. That involves an arrow from B to A and an arrow from A to B. Now, there's a condition that I still need to check, namely that the composites are equal to the identities. And that's where this naturality gets used. So I'm not going to give that detail right now, but I'll leave that final part as an exercise to look up the precise definition of naturality and use it to check that these composites
Starting point is 00:27:37 are the identities. But this is, again, the sort of central idea of the innate dilemma. Great. And I'll place a definition of naturality on screen now for those who want to pause and prove for themselves. Awesome. Great. Okay. So we're aiming to use the innate dilemma to prove a theorem about an isomorphism in a general category involving co-products and left adjoins. So I need to explain what that is. So two objects in a category admit a co-product if there is another object and maps as displayed. So here, A and B are my original objects in a category. There's a new object that I'm denoting sort of like a disjoint union B or with a sort of square,
Starting point is 00:28:25 union symbol. So there's a new object here. And then there's a map from A into the co-product and a map from B into the co-product specified arrows. So map and arrow are synonyms that get used in category theory. Also morphism. I'm going to try to stick with arrow, but I might use the synonyms. And then the defining property that makes this object together with these two maps into the co-product is the property that any other diagram of this shape factors uniquely through the co-product. And what that means is if I had some other object X and some other arrows, F from A to X and G from B to X, then there exists a unique arrow H from the co-product into X so that this diagram commutes. And what that means is that if I take H and I compose with Yoda A,
Starting point is 00:29:18 that recovers the map F. And if I take H and I compose with Yoda B, that recovers the map G. So in other words, there's a relationship between the pair of maps, F and G, and the single map H that I can describe as follows. So any arrow H out of the co-product and into an arbitrary object X is uniquely determined by pairing together two different arrows, one from A to X and one from B to X. And actually, I can be even more precise about that. I can say that this isomorphism is defined by composing with the maps, Yodagh, A and Yoda B, sort of the inclusions into the co-product. So if I take a map out of the co-product and I restrict along these two pieces, then I have a map from A to X and a map from B to X, and this operation is a bijection or as an isomorphism.
Starting point is 00:30:12 So I can decompose maps out of the co-product into two different pieces. I can sort of argue by cases or argue by pairing two arrows together. Okay, so that's the defining property of a co-product. And we've seen an example. So for vector spaces, the co-product is given by this operation called the direct sum. So if I have vector spaces v and W, their direct sum is a new vector space. There's sort of a canonical linear map from V into the direct sum. It sort of includes it as a sum and or a subspace of the direct sum. Similarly for W, So that defines the maps Yoda sub v, Yoda sub W. And then this construction has this special property that linear maps out of the direct sum and into an arbitrary vector space X, decompose uniquely into two pieces, linear maps from V to X and linear maps from W to X.
Starting point is 00:31:10 So that's an example of the co-product. But many different categories have co-products. That's the point of having this sort of general notion. When I'm wrestling with a guest's argument about, say, the hard problem with consciousness or quantum foundations, I refuse to let even a scintilla of confusion remain unexamined. Claude is my thinking partner here. Actually, they just released something major, which is Claude Opus 4.6, a state-of-the-art model. Claude is the AI for minds that don't stop at good enough.
Starting point is 00:31:42 It's the collaborator that actually understands your entire workflow thinks with you, not for you, Whether you're debugging code at midnight or strategizing your next business move, Claude extends your thinking to tackle problems that matter to you. I use Claude, actually live right here during this interview with Eva Miranda. That's actually a feature called artifacts, and none of the other LLM providers have something that even comes close to rivaling it. Claude handles interalia technical philosophy, mathematical rigor, and deep research synthesis, all without producing slovenly reasoning.
Starting point is 00:32:16 The responses are decorous, precise, well-structured, never sycophantic, unlike some other models. And it doesn't just hand me the answers. The way that I prompted it is that it helps me think through problems. Ready to tackle bigger problems? Get started with Claude today at clod.a.ai slash theories of everything. That's clod.a.i slash theories of everything and check out Claude Pro, which includes access to all of the features mentioned in today's episode.
Starting point is 00:32:45 A quick terminological question. Sure. So much of the time in category theory, there are arrows written on the board, and then someone will say that this diagram commutes. Now, is there a special symbol for that term such that this commutes, or is it always implied whenever the arrows are written down on the board, or do you always have to say it? Like, what is the standard? Yeah, great question.
Starting point is 00:33:08 So in the setting of sort of ordinary one category theory, it's usually conventional to assume that any kind of diagram that you drew commutes. So here I've drawn these sort of triangles in the plane, these sort of directed triangles in the plane, and implicitly in depicting them in that way, I'm assuming commutativity, meaning that any path formed by composable arrows, any two paths of composable arrows
Starting point is 00:33:38 with the same source and the same target agree. So that's sort of a default convention in category theory. Okay. Very occasionally, some symbols get used to denote that, but it's more common to make up a symbol or, you know, put a big warning in your notation if you're ever not following that convention. If you've drawn a diagram that does not commute, that's definitely something you need to point out because people will assume it does commute. And the reason people will assume it does commute is commutative diagrams are sort of very useful and they're sort of appear all over papers in certain areas of mathematics. and, you know, particularly in, you know, algebraic topology, settings like this, you know, it was a big advance in the field when the technology of commutative diagrams first appeared.
Starting point is 00:34:29 There was a, I should look this up, there's a nice quote by Steenrod describing who was a co-author of a famous algebraic topology textbook that came out in like the 40s or 50s or so right, 1940s or 1950s, so shortly after the birth of category theory, about how much information is found in commutative diagrams. It's a very, right, it's a very useful way of organizing arguments in a particular style. I mean, there's a fun movie illustration of this in the film, It's My Turn, where the character, who's a math professor, you know, proves the snake lemma, which is something from homological algebra via diagram chase.
Starting point is 00:35:20 You know, it's a way to very compactly describe a lot of information. So the default convention is that diagrams commute, and if they don't, I'll have to warn you, though. Cool. Okay. Great. So the final abstract ingredient, and then we'll revisit the proof, is concerns functors and particularly adjoint functors. So firstly, what is a functor? It's just an arrow between categories. So here, C and D are entire categories among themselves, so like the category of vector spaces and linear maps. So C and D each have objects and arrows with sources and targets and identities and composition. And what a functor is, is it's really an arrow between categories. Explicitly it sends the objects of C to objects of D and the arrows in C to arrows and D in such a way that responds.
Starting point is 00:36:11 all of the categorical structure. So the sources, the targets, the identities, the composition. And if I have a pair of functors, one of which goes from C to D, and the other of which goes from D to C, then this pair of functor's define what's called an adjunction. If there is one of these natural bijections between arrows of the following form, so for any object, C in the category C, and for any object D in the category D, the requirement is that arrows in the category D from F of C to D correspond to arrows in the category C from C to U of D in a way that's sort of natural in both the C variable and the D variable. Again, that's the thing that I'm not defining. So in terms of the hom sets, these sets of arrows, there's an isomorphism between the
Starting point is 00:37:09 the set of arrows in D from F of C to D, and the set of arrows in C from C to U of D for any object C and any object D. This F is then called the left adjoint of the adjunction, and the U is called the right adjoint. So each of these functors is an adjoint, the left because it's sort of on the left when I write it in the way that I did, write because it's on the right. And the example that we've seen of this involves the tensor product construction. So if I fix a vector space U, then there's a functor from the category of vector spaces to itself defined by tensoring with the vector space U. So it sends a vector space V to the vector space U tensor v. So the sort of operation of tensoring with a vector space is an example of one of these functors.
Starting point is 00:38:02 The other operation, which is forming the vector space of linear maps out of you, is another functor, and those two functors are adjoint. And what that means in practice is that if I have a linear map from U-tensor v into W, then that corresponds in a natural way to a linear map from V to the vector space of linear maps from U to W. This is the operation that I called curing and uncurrying before. There's sort of something similar that happens in the category of sets, where it's often called curing and uncurine. So the correspondence between the linear maps or the arrows of these two different forms is what's called an adjunction. So this evokes feelings of being an inverse. Now is the reason it's called adjunction and not
Starting point is 00:38:50 inverse function or inverse functor. Yeah, I mean, the name adjunction is kind of funny. There's a notion called an adjoint operator that actually also comes out of linear algebra and has nothing to do with this. But there's some sort of like typographical similarity to the way these are denoted. So, you know, the adjoint of a linear transformation has some property with the inner product that looks sort of formally similar to this. But, so I think that's the etymology of the term. But, yeah, it's kind of a funny name. But yeah, it's not an inverse. So F and you are not inverses, but they do have some sort of, I mean, they express some sort of duality in a sense. Yes.
Starting point is 00:39:35 Though that metaphor is also a little bit stretched, it kind of makes more sense for adjunctions involving an opposite category, which is not the setting that we're in here. Okay, so let's, so it's a lot of abstract terminology, but the point of it is these are all the ingredients that went into the proof of the theorem that I mentioned, that for vector spaces U, U, v and W, U-tensor v-derexum W is the same isomorphic to. the vector space U-tensor v direct some U-tensor W. And for giving the exact same proof but expressed in more abstract language, what will prove in fact is this general theorem, that left adjoints preserve co-products. So if F is a left adjoint, and I have a co-product in the domain category, then I get an isomorphism between the object that you get by first forming the co-product
Starting point is 00:40:31 and then applying the left adjoint, or first apply. the left adjoint and then forming the co-product. Those objects are isomorphic. You can prove that in complete generality. This will specialize to the result about vector spaces, but also the other results that I mentioned about, you know, inverse and direct image involving power sets and the construction of the free group. And many, many, many, many are other things besides. And so let's just see the proof again. So the first step was to invoke the innate alama to say that to show that I have an isomorphism between two objects in the category D, so that F of the co-product of A and B is isomorphic to the co-product of F and A and F and B. What it suffices to do instead is given natural isomorphism
Starting point is 00:41:16 between the sets of functions, the sets of arrows out of these objects into an arbitrary object X. So that's what I need to do. Then by the innate alum, I will get an isomorphism between F of the co-product and the co-product of F of A and F of B. And how do I get that isomorphism Well, I'm just going to compose a bunch of natural isomorphism. So if I start with an arrow from F of the co-product of A and B into X, then I can use the sort of natural bijection of the adjunction to transpose that into an arrow now in the category C out of the co-product of A and B and into the object U of X, U being the right adjoint to F. But now I have a map in the category C out of a co-product. And the sort of defining universal property of a co-product says I can split that up into
Starting point is 00:42:09 two arrows, one from A into U of X and one from B into U of X. And now I can use my adjunction again and transpose those two components back into arrows in the category D. Now from F of A into X and from F of B into X. And then I can again use the universal property of the co-product and pair them back together to get a map out of the co-product. But this time it's the co-product of the product of F of A and F of B into X. And composing those byjections, I get the natural isomorphism that I'm looking for. And that's the proof. I know that's a lot. I mean, you know, there's sort of a question about like what is the right time to learn category theory. And it's sort of after you've seen arguments in different settings, maybe in a topology course and then again in
Starting point is 00:43:02 an abstract algebra course and maybe then again in a complex analysis course that are starting to look kind of formally similar. And you're wondering, can these all be unified? And I think at that stage of, you know, sort of a mathematical education, these kind of more abstract points of view can actually make complicated mathematics easier to understand. Now, when you're hearing these terms for the first time, it makes it more complicated to understand because we're sort of moving a layer further away from your experience, either in the real world or in, you know, sort of other mathematics courses. But, you know, once you get used to it, this perspective, I think, can really be clarifying. And it can also make complicated mathematics easier to
Starting point is 00:43:56 communicate. So it's easier to understand and easier to communicate. How so? Why is that? So I think what abstractions from category theory do is they're used to clarify arguments. So removing sort of inessential details, they're used to generalize constructions and proofs to other settings. So, you know, there's lots and lots of left adjoins and lots and lots of co-products. And, you know, before encountering these terms, you might not expect that they would have that relationship that the left adjoint preserves the co-products. But now that it's described more generally, it's easy to sort of look for examples in other settings. And then the final ideas is a little
Starting point is 00:44:45 bit less precise, a little bit more subtle, but I feel like it's very true to my experience. It allows you to upload more complicated mathematical ideas into one's working memory. So having a concise name for something can sort of make it easier to think about. You, um, You know, what these terms are doing is they're collecting together sort of useful lists of axioms and, you know, putting into a package and once it becomes familiar, it's easier to have that just be a small part of the mathematical idea that you're trying to puzzle about and leaves you space to, you know,
Starting point is 00:45:22 sort of work with more and more complicated things. And I think this sort of last point really explains kind of the proliferation of category theory in mathematics. So it's certainly not the case that every, you know, sort of active research area in, you know, recent history. So here I've said sort of 20th century mathematics, I'm putting in a little bit of a distance, is inflected by category theory, but a lot of them are. And in particular, sort of algebraic geometry, algebraic topology, representation theory, you know, certain areas of mathematical physics. You know, that I'm thinking, the ones that are more maybe kind of topological and less analytic. These are all expressed in the
Starting point is 00:46:07 language of category theory. And what I mean by that is sort of the basic objects that mathematicians are studying in those areas can't even really be defined without using categorical language. So, you know, algebraic topologists make use of spectral sequences and simplical sets and co-emology theories. You know, sort of a fundamental object in modern algebraic geometry is something called the scheme in physics. Atia introduced the idea of a topological quantum field theory. And to even say what these things are requires a lot of categorical language. Without the categorical language, I think it would be a lot harder to fit these concepts
Starting point is 00:46:46 into one's head. So then you could start to make conjectures and prove theorems about them. Okay. So that was meant to be a quick crash course in category theory. Great. Okay. So this talk's supposed to be about infinity category theory. So now that we're all sort of totally comfortable with ordinary category theory, let's try and see what infinity categories are a little bit on for and about. So first I want to introduce kind of a nice perspective on ordinary categories. This is a quote from Barry Mazur and an essay, when is one thing equal to some other thing? It's a, in honor of Saunders McLean, who is one of the founders of category theory. And he says that a category frames a possible template for a mathematical theory. The theory should have the nouns and the verbs, i.e. the objects and the morphisms,
Starting point is 00:47:41 morphisms are the arrows I was talking about before. There should be an explicit notion of composition related to the morphisms. So the idea is that if you're really going to, I mean, this is a insight that I is attributed often to Emmy Nerther in the setting of group theory, you know, to really study group theory, we're interested not just in the objects, these sort of abstract groups that came up in Galois's solution to the problem about solving polynomial equations, but also the arrows between groups, which are called group homomorphisms. That is really needed to add sort of verbs, add sort of motion to the theory of groups. So that's sort of this 20th century
Starting point is 00:48:21 observation that sort of motivates category theory. So what's an infinity category? So one way to think about it is it frames a template for mathematical theory that now has nouns and verbs and adjectives and adverbs and pronouns and prepositions and interjections and so on. So you might need a richer grammar to describe a theory of more complicated mathematical objects. So not things like vector spaces that really live pretty happily in an ordinary one-dimensional category, but require some sort of higher dimensional category. And so what do I mean by that? A quick question. How seriously is someone supposed to take this analogy here? Like, how are we supposed to know that in order for us to categorize interjections that we don't need a three category and not a five category?
Starting point is 00:49:11 Right. This is not meant to be serious at all. I mean, this is meant to be playful. But, you know, a way to, you know, I think these are like all of the parts of speech that exist in English. at least, and this suggests that a finite dimensional category would suffice. And so, no, this is really just a loose analogy. But let me try and give some more precise understanding of what I mean by an infinity category. So like an ordinary category, it has objects, which can think of as being a point somehow and one-dimensional morphisms between them. Those are the arrows that go between objects with source and target as before. But now we might think of composition as being, a little bit more complicated. Maybe it's not an operation, so there's not a unique composite of a pair
Starting point is 00:49:59 of composable arrows, but there is some composite, and maybe the composition is sort of witnessed in some sense by invertible two-dimensional morphisms. And then associativity becomes a bit more complicated. Maybe that's witnessed by some sort of three-dimensional data, some invertible three-dimensional morphisms. And now you can't really stop until, thinking about composites of three things, associativity involving four composable morphisms would be witnessed by some four-dimensional data that involves these invertible three-dimensional morphisms, and we have coherences up to invertible morphisms all the way up. So we're adding higher dimensional morphisms on top of the objects and the one-dimensional morphisms,
Starting point is 00:50:41 and also relaxing some of the axioms. So we're no longer requiring that associativity is an equality, that H-composed-G, composed F, is equal to H-composed, G-composed F. But maybe there's some sort of data involved, some sort of higher-dimensional data. That's what we should imagine. So that's a very vague picture about what an infinity category is.
Starting point is 00:51:06 And let me give an example. So... Editor's note here, the term witness is going to come up repeatedly in this presentation, so you should just know that it means that something is explicitly realized by something else. So, for instance,
Starting point is 00:51:20 if I had the set, one, two, three, and I say there's an even number in it, the number two witnesses this claim, because it's a concrete example demonstrating the truth of that statement. So, firstly, I'm going to revisit a sort of 20th century construction. This is something called the fundamental groupoid of a topological space. If you've heard about the fundamental group, it's a very similar thing. So what is it? So firstly, the fundamental groupoid of a topological space is a category in this sort of ordinary one-dimensional,
Starting point is 00:51:51 sense, whose objects are the points in the space, and arrows are paths in the space up to base tomatope. So what I have here is this beautiful cartoon picture of the fundamental group-roid of a circle. So here, the little circle represents a point. You can imagine having sort of a point on the circle. And, you know, so that's an object in this category. And then amorphism in this category, you might think in particular of a loop, so a path that you would trace through the circle that goes from the point back to itself. And this sort of up to homotopy means if you sort of, you know, really what matters about these paths is how many times they wind around the circle and in what direction. if you sort of change the parameterization a little bit or, you know, reverse slightly, but then continue on in the same direction, those things are sort of homotopic to, it could be sort of smoothed
Starting point is 00:52:56 away by a homotopy. But anyway, those are the sort of objects and paths, and that defines an ordinary category. But there's a problem with it. So it's relatively easy to define. I mean, this is something that, you know, I'll mention in an undergraduate point set topology course. But it doesn't see any sort of higher dimensional structure in the space. So in the picture, on the right, what we're looking at is the fundamental group rate of a disk. And here we have two different objects, the sort of point on the right and the point on the left. And in fact, any path you can take through the disc from the left-hand point to the right-hand one is homotopic to any other path from the left-hand one to the right-hand one. So there's only one arrow. I mean, even though there's lots and lots of different paths, the arrows in the the fundamental group right are sort of paths up to the space. Homotopy, there's really only one arrow up to homotopy. Homotopy is sort of a continuous deformation between paths. You can take from the left to the right. And what that means is that it doesn't see sort of higher dimensional structure. So maybe this disk you're thinking of as sort of a flat two-dimensional disc in the plane,
Starting point is 00:54:04 but if it were a three-dimensional disk or four-dimensional disc or five-dimensional disc, it would be exactly the same. There would still be only one path up to homotopy between any two, points. And what that means is that the fundamental groupoid is trivial. So in particular, the fundamental grouproid of all the spheres above dimension one are trivial. So this is somehow, it's some categorical information captured in a topological space, but it only sees the low dimensional structure in the space. That's the problem with the fundamental groupoid. And the way to resolve this problem is to recognize that the fundamental group board doesn't want to be a one category. It wants to be one of these infinite dimensional categories. So the full homotopy type of a topological
Starting point is 00:54:52 space, so not just the sort of path components and the sort of one-dimensional structure, the sort of loops in the space, is captured by something called the fundamental infinity groupoid. This is something, so it's an infinite dimensional category. Like an ordinary category, it has objects, which are, again, the points in the space, and morphisms. So now I'm going to call them one morphisms, which are the paths in the space, and I'm no longer taking homotopies. So each, if two paths are different, they get counted as two different morphisms. So we're sort of solving the problem we had before where too many paths get identified. And the homotopies now are part of the higher dimensional data. So the two dimensional morphisms between the one dimensional morphisms
Starting point is 00:55:36 are these homotopies between paths. So a homotopy is a path between paths. And we're adding that as data, sort of higher dimensional arrow to this category. And in fact, we have sort of homotopies between homotopies. These are three-dimensional morphisms. You could think about these as continuous maps from increasingly higher dimensional cubes into the space. And we continue all the way up. So we have four morphisms and five morphisms in all countably, infinitely many dimensions. That's what the infinity refers to as this sort of countable infinity. And so this is the fundamental infinity group void. So infinity groupoid, that's an example of an infinity category. Whenever I have an infinity category, I can take a quotient and recover an ordinary one category, and that recovers the
Starting point is 00:56:24 fundamental group board that we just defined. So the, you know, the category version of the fundamental group void is a quotient of the infinity category version of the fundamental infinity group void. So we have the same information as before, but we've added all these higher layers. And there's a sort of theorem referred to by the name Grotendiehys homotopy hypothesis that says that this infinity groupoid really captures all of the sort of homotopical data of a topological space. So you might know that a topological space has all of these invariants, which are called its homotopy groups. There's sort of zero-dimensional homotopy, one-dimensional homotopy two-dimensional homotopy, all the dimensions. And the sort of information of a space that is
Starting point is 00:57:17 captured by its homotopy groups is sometimes called its homotopy type. And this fundamental infinity groupoid defines an equivalence between spaces sort of up to homotopy type and infinity groupboids up to equivalence. So it's precisely capturing the full homotopy type of a topological space. Has this been proved now? Why is it called a hypothesis? Yes. So it's, the reason it's called a hypothesis is that Growth and Deak stated this before there was an accepted definition of an infinity group void. And, you know, one perspective on the homotopy hypothesis is that via this vision, you could define an infinity groupoid just to be a homotopy type or, you know, a topological space of, you know, a topological space. up to weak homotopia equivalence. But alternatively, you can give sort of concrete definitions of an infinity group void, and then for your definition to be sort of accepted, to be taken seriously, you would have to prove the homotopia hypothesis for that model. And this has been done sort of
Starting point is 00:58:28 several times over. So, yeah. So it's both like a theorem and a guiding principle that explains precisely what the relationship should be between infinity group voids and spaces. You know, spaces are sort of older mathematically, this idea of a homotopy type was understood before the idea of an infinity groupoid came up on the scene. So Groton Dik is explaining the vision for homotopy group voids, or sorry, infinity group voids. Interesting. Great. And actually there's a, there's a new term on the scene that I wanted to mention briefly. So, so Shulsa has, Peter Shulsa's, rechristened infinity groupboids as anima. And the idea of it is this is somehow like the soul of the space. So it's not capturing all the points of the space, but it is capturing sort of
Starting point is 00:59:15 path components and the sort of loops in the space and the spheres in the space and so on and so forth. When I'm wrestling with a guest's argument about, say, the hard problem of consciousness or quantum foundations, I refuse to let even a scintilla of confusion remain unexamined. Claude is my thinking partner here. Actually, they just released something major, which is Claude Opus 4.6, a state-of-the-art model. Claude is the AI for minds that don't stop at good enough. It's the collaborator that actually understands your entire workflow thinks with you, not for you.
Starting point is 00:59:53 Whether you're debugging code at midnight or strategizing your next business move, Claude extends your thinking to tackle problems that matter to you. I use Claude, actually live right here during this interview with Eva Miranda. That's actually a feature called Artifacts, and none of the other LLM providers have something that even comes close to rivaling it. Claude handles, interalia, technical philosophy, mathematical rigor, and deep research synthesis, all without producing slovenly reasoning. The responses are decorous, precise, well-structured, never sycophantic, unlike some other models,
Starting point is 01:00:27 and it doesn't just hand me the answers. The way that I prompted it is that it helps me think through problems. Ready to tackle larger problems? Sign up for Claude today and get 50% sent off Claude Pro when you use my link, clod.a.I. slash theories of everything, all one word. Okay. So just a quick moment.
Starting point is 01:00:49 So for the audience, they're familiar with R3, which is contractable, I know. But either way, is it correct to say the infinity group of a space? Like, is it correct to say the infinity group of R3? Is that the way to phrase it? Yes. The yes, that's totally correct. And you gave me an easy example, which I'm grateful for, because R3 as a space is contractable, meaning that as a homotopy type or its infinity groupoid is the terminal infinity groupoid.
Starting point is 01:01:20 So it has one object and only identity morphisms. So homotopy equivalents or weak homotopy equivalents, it's kind of a funny invariant in that it doesn't see dimensions somehow. So all RN, all of those Euclidean spaces are contractable spaces. But if we replaced the sort of n-dimensional Euclidean space by the n-dimensional sphere, now all those homotopy types are distinct. So the one sphere or the circle is different from the two sphere, which is probably what you're thinking of when I say sphere or the three sphere, the four sphere, the five sphere. All of those are different. And in fact, the homotopy types of the spheres are very complicated and to some extent an open problem. to, you know, precisely understand those homotopy types.
Starting point is 01:02:07 Does bot periodicity mean that it's much simpler at some point to understand the infinity groupoid of a sphere? Or is this still some open problem? Still an open problem. So there's, you know, the problem of understanding the homotopy type can sort of be separated into the stable structure and the unstable structure. And the stable homotopy group of spheres is understood. That's sort of where periodicity phenomenon comes in, but then there's sort of unstable structure, and this is definitely an active research area.
Starting point is 01:02:43 I see. So can you go from knowing the infinity groupoid structure of a space to then reconstructing the space, or does it only go in one direction? Yes, yeah, definitely. So I guess, I'm imagining a sort of construction where the infinity grouproid is maybe, you know, finite in some sense or finitely presented, meaning you have kind of generating morphisms in each direction. And then you're building the space like you would build a CW complex. So you start with some collection of points and then you attach some paths and then you attach some two-dimensional disks and some three-dimensional disks and so on and so forth. So CW complexes are a way to build explicit homotopy types, and that can equally be understood as a construction of an explicit infinity groupoid.
Starting point is 01:03:43 You know, those aren't the sort of only examples or the only ways of constructing them, but yeah. Okay, cool. Please carry on. Okay. So one of the ideas that we just saw in thinking about the fun, fundamental infinity group void and the fundamental groupoid is this relationship between an infinity category and its quotient one category. So the fundamental infinity groupoid of a topological space, it's an infinity category who has objects are the points, the one morphisms are the paths, the two morphisms are the homotopies, three morphisms are homotopies between homotopies and so on. And this relationship appears in other settings as well. So there's something in sort of algebra
Starting point is 01:04:26 Greek geometry, homological algebra called the derived category of a ring or of a billion category. And that is now understood as the homotopy category of an infinity category, namely the infinity category of chain complexes. And similarly, I mentioned briefly Atea's TQFTs. Those are indexed by a category of, say, closed end manifolds and diphomorphism classes of cobordisms. And that is again a homotopy category of an infinity category of closed endmorphisms and cabordisms. And here I've removed the diffiomorphism classes, and I'm including that sort of thing as the data of the higher morphisms. So this is sort of a pattern in mathematics is that certain kind of complicated, frankly, poorly behaved ordinary categories are now understood as the quotient homotopy categories
Starting point is 01:05:16 of some infinite-dimensional category. And here maybe I'll acknowledge that I'm using this term infinity category, kind of following Jacob Lurie, as a nickname for a special case of what's maybe called an infinity-n category. This is the n equals one special case. And so for the purposes of this talk, I want you to think of an infinity category as something that has objects and morphisms, and then two-morphisms and three-morphisms and so on. But all of these higher morphisms are something like homotopies, and they're invertible, or at least weakly invertible. And in that sense, the infinity categories that we're discussing, these infinity-one categories, are really closely analogous to ordinary one-categories,
Starting point is 01:06:00 like we mentioned at the beginning. So there are also infinity two categories and infinity categories and even infinity categories where we have higher dimensions worth of noninvertible morphisms. Those are considerably more complicated, and I don't want to discuss them at all today because I'm describing an analogy between infinite dimensional category theory and ordinary category theory that is tightest in the case n equals 1. So I know you don't want to discuss it today. Sure. But is there such a thing as an alf1 cardinality of infinity category? So, you know, whenever you introduce a new mathematical definition, there's always the question of what are the
Starting point is 01:06:46 examples. And the reason that infinity n categories are interesting is because they're examples. So, you know, really this, you know, category of cobordisms is sort of in the modern perspective on topological quantum field theory understands that as an example of an infinity N category, where N is giving the dimension of the closed manifolds. So that notion exists, and there's a theory of them because there are mathematical examples. You know, I am not aware of any sort of non-artificial examples of infinite dimensional categories involving different ordinals beyond, you know, sort of the countable infinity. Hmm, okay, cool.
Starting point is 01:07:33 Or maybe one successor above that or something, but, you know, that's where I'd prefer to stop. So I tried to explain in the first part of this talk what ordinary category theory is for, and I want to explain a little bit about what infinity category theory is for. It's pretty complicated, unfortunately, because examples of infinity categories are pretty complicated, and this is the simplest story that I can think to tell. And I'll build up to it. So there's a powerful idea in mathematics, which is to turn geometry into algebra. So there's some sort of geometric or topological question that is answered by constructing, you know, some sort of invariants that take the form of, you know, vector spaces or groups or something
Starting point is 01:08:16 like that. And, you know, maybe this sort of geometric data gets converted into something called a chain complex. So a chain complex is given by an abealian group in dimension zero and an abelian group in dimension one and an abelian group in dimension two. I guess I said co-chain complex here. There's chain complexes equally. And then there's some maps between them. These are the differentials. And there's the property that if you compose any two of these maps, you get zero. So there's a geometric interpretation of all of this that I don't want to get into right now. But in any case, the sort of fundamental geometric information was captured, I mean, not so much in the chain complex, but in the co-amology groups. So this is a sequence of abealian groups. There's the zero-th co-homology, the first cohomology, the second cohomology. It's a bunch of groups, and somehow it's capturing some geometric information about the space in, say, dimension zero and one and two and so forth. And all of this motivates the definition of the derived category. So this is one of these ordinary one categories. that I mentioned previously.
Starting point is 01:09:18 In this case, the objects are these cochain complexes, like we're illustrating here. And the arrows are maps of cochain complexes. You can express what that is in a commutative diagram. Except, when a map is a quasi-isomorphism, meaning it induces an isomorphism on cohomology groups in every degree, then that map is regarded as an isomorphism, whether or not it's actually an isomorphism
Starting point is 01:09:44 in the category of chain complexes. And there's lots of examples of quasi-isomorphisms that are not, in fact, isomorphisms. So this is some sort of localization process where we're formally inverting some maps that are not maps. So this is a thing that's called the derived category and try to indicate why it's important, but it's not very well behaved as a category. So there's important constructions that you might want to do on the derived category. So, for instance, taking something called the mapping cone of an arrow. So you start with one of these maps between chain complexes and then you form something called the mapping cone. That's going to be a chain complex itself. And, you know, a construction between categories really wants to be a functor. But this is just not a functor at the level of the derived category. And the reason that it's not a functor is because we've forgotten too much information when forming the derived category. And the reason that it's not a functor is because we've forgotten too much information when forming the derived. category. And we've forgotten that information because we were trying to form a one category and not just letting this thing be the thing that it really wants to be, which is an infinity category.
Starting point is 01:10:54 So the more modern perspective on the derived category is to think about it as, you know, this quotient of an infinity category whose objects are the cochain complexes, whose arrows are the maps of cochain complexes, and then to just add in this sort of higher structure that allows one to understand a map that's a quasi-isomorphism as being invertible, maybe not in the original strict sense of being an isomorphism in a one-category, but in a weaker sense that's appropriate to an infinity category. And in this setting, this mapping cone construction definitely is functorial, and in fact, it's very easy to describe using a colimit,
Starting point is 01:11:35 so sort of something like this co-product that we were discussing previously. I mean, it's a push-out. It's not a co-product. So there's a lot here that I've described sort of only impressionistically, but this is kind of the simplest example I know of a sort of concrete problem with sort of one categorical structure that's resolved by passing to the infinity categorical structure. And sorry, what are you quotienting out? Right. totally fine one category whose objects are co-chain complexes and whose arrows are sort of commutative diagrams between co-chain complexes. So there's an ordinary category of chain
Starting point is 01:12:27 complexes. Sort of the problem with it is on the one hand that doesn't have kind of the right notion of sameness. So the quasi-isomorphisms are not isomorphisms in that category. Yes. And another problem is that there's higher dimensional structure that's not reflected in there. So chain complexes have a, or maps of chain complexes have a notion of homotopy between them, and you could add that in as sort of higher dimensional data to this one category. And then we have higher homotopy, so you can add in those higher dimensional data as well. So this is an example where the one category sort of really wants to be an infinity category. And then you could quotient to weigh all this higher dimensional data in a particular way, and that gives you the
Starting point is 01:13:11 derived category. I see. So this was all meant to be a motivation about one-adfinity categories are four. I haven't really, sorry, I have a couple other perspectives. So in some areas of mathematics, it's kind of increasingly natural to sort of view everything as being sort of up to homotopy. Here's a quote from Yuri Menin in an interview by Michael Gelfand. He says, I'm pretty strongly convinced there is an ongoing reversal in the collective consciousness of mathematicians. I guess I should say some mathematicians.
Starting point is 01:13:54 The homotopical picture of the world becomes the basic intuition. If you want to get a discrete set, then you pass to the set of connected components of a space defined only up to homotopy. Cantor's problems of the infinite recede to the background: from the very start, our images are so infinite that if you want to make something finite out of them, you must divide by another infinity. So I feel like this is sort of what it feels like to be working in this world, where your categories are now somehow so much bigger, because they have all of these higher-dimensional morphisms on top of what was maybe already a pretty large category. And does your unconscious, or, well, it's usually a collective unconscious, but does your consciousness reside in that collective one? Hi everyone, hope you're enjoying today's episode. If you're hungry for deeper dives into physics, AI, consciousness, philosophy, along with my personal reflections, you'll find it all on my Substack.
Starting point is 01:14:47 Subscribers get first access to new episodes, new posts as well, behind-the-scenes insights, and the chance to be a part of a thriving community of like-minded pilgrims. By joining, you'll directly be supporting my work and helping keep these conversations at the cutting edge. So click the link on screen here. Hit subscribe and let's keep pushing the boundaries of knowledge together. Thank you and enjoy the show. Just so you know, if you're listening, it's C-U-R-T, J-A-I-M-U-N-G-A-L dot org. Curtjaimungal.org. Does your consciousness reside in that collective one?
Starting point is 01:15:20 Yes, I think increasingly. You know, sometimes I feel like I'm pretty far out in space. Right. So I haven't explained at all what infinity categories are. And so this was all meant to be sort of perspectives on what they're for. But what are they, in fact? So let's try to address that question. So essentially, infinity categories, remember I'm using this term to mean infinity-one categories, these are like one-categories where all the sets that are used in the definition of a one-category have been
Starting point is 01:15:54 replaced by infinity group boards. In other words, homotopy types, so that sort of spaces, but up to the information of their homotopy groups, also known as anima. So there's some analogy between sets and infinity grouproids and ordinary categories and infinity categories. So what do I mean by that? So a category has a set of objects. An infinity category will then have an infinity groupoid of objects.
Starting point is 01:16:20 A category will have a hom set. This is this set of arrows from a particular object to a particular object. An infinity groupoid will have a spin. So an infinity groupoidal, infinity category will have an infinity groupoidal mapping space. But more than that, sort of the axioms that turn directed graph into a category, these are the things about the existence of a composition function and the existence of identity arrows, so that composition is associative and unital. All of these are expressed in first order logic with equality, so using the notion of equality between arrows. And all of that needs to be redeveloped,
Starting point is 01:16:59 in this sort of higher-dimensional and weaker setting. So composition in an infinity category as a morphism between infinity group-roids, it's not really a function anymore. Instead, you know, everything is kind of only ever well-defined up to a contractable space of choices, and the traditional foundations of mathematics
Starting point is 01:17:21 are just not very well equipped to turning this idea into a precise definition. So I'll give you some precise definitions, but what you'll see about them is they don't really correspond to this idea very well. So this is the intuition for what an infinity category is, but, you know, explicitly describing the composition and the associativity and the coherences and the weak inverses and stuff like that in the language of set theory, it's a bit tricky. So what I've displayed here are two definitions of infinity categories within set theory. The term of art in the field is these are called models of infinity categories, but, you know, model isn't really used in a logical sense.
Starting point is 01:18:18 What this just means is it's sort of a precise set-theoretic definition, and these have technical names, which I'm going to use, so if you see them in the world, you'll have seen them before. So one way to express the idea that an infinity category is a one-category where all the sets are replaced by infinity groupoids is to say an infinity category is something called a quasi-category. So what's a quasi-category? It's a simplicial set. What's a simplicial set? It's given by the data of sets X0, X1, X2, X3, X4, up to infinity, with a whole bunch of maps between them that all have special names, which I have not indicated, and which satisfy certain composition relations, which I also have not indicated. So certain diagrams will commute with these maps. Other diagrams will not commute. So I did not make any of
Starting point is 01:19:06 precise, but the notion of Simplitial Set does make all that precise. And then moreover, every inner horn has a filler. And for space reasons, I'm not going to indicate at all what an inner horn is. So that's one definition. Another definition is it's a complete Siegel space, which is a bi-simplicial set. So this is sort of two different Simplicial set. sets commuting with one another. So we have now this grid of sets X and M for N and M natural numbers, again with a whole bunch of maps, some of which commute and some of which don't, so I need to indicate all of that. And then this bi-simplicial set has to satisfy three conditions. One is called really fibrant. Another says something about Siegel maps, and another says something about
Starting point is 01:19:48 completeness maps, and those conditions are that these maps are equivalences. So I could make all of that fully precise, but I won't. Because the point that I want to make with these definitions, which I think I can make anyway, is this, neither of them have any obvious relationship to the idea that I've just taken one category and replaced all the sets by infinity group boards. So with more time, I could explain that a little bit. But let's remember the framing for this talk is that this is supposed to be kind of infinity category theory presented for undergraduates. And this is not that. Yeah, this is definitely not for undergraduates. I mean, nothing against undergraduates.
Starting point is 01:20:28 It's just that there's sort of a lot that's required to make any sort of basic thing precise. Right. You know, much less develop the theory of infinity categories. So at some point, we want to get back to the coproducts and the left adjoints and stuff like that. And, of course, it can all be done in this way. And this is how graduate students who learn about infinity category theory learn about it now. Right. But I don't want to take the perspective of a graduate student who's going to become an expert in infinity category theory.
Starting point is 01:20:57 I want to take the perspective of a mathematician in a different area who needs to use infinity category theory in their work, but doesn't have time to take a year away and learn this all properly. So, you know, I've cited here some recent papers in different areas of mathematics, some of which are kind of closer to algebraic topology, homotopy theory, infinity category theory, but others which are further away: more representation theory or number theory or symplectic geometry or algebraic geometry, whatever. And, you know, maybe a graduate student in those areas wants to read these papers and so needs to learn some infinity category theory. But, you know, where are they going to find the time? So where will researchers in these areas find the time to learn infinity category theory? Or, in the sort of pithy framing that I started with: how are we going to distill this to the point that it can be included as a standard part of the undergraduate curriculum? You know, make it easier to learn, so that this can be part of somebody's toolkit by the time they get to graduate school and are trying to read the cutting-edge literature in their particular area. And so I mentioned this talk was going to involve kind of a dream for the future, and now I'll state my dream. So I think that this would be
Starting point is 01:22:25 easier to explain if the foundations of mathematics, sort of set theory and logic, weren't so far away. So the problem is, you know, the traditional foundations based on Zermelo-Fraenkel set theory with the axiom of choice and first-order logic make it very, very easy to describe mathematical structures that are built from sets and functions. But unfortunately, infinity categories just don't have a lot of sets and functions involved in their definition. So to make them fully precise in the language of set theory requires definitions like these, which are precise in the language of set theory but aren't obviously connected to the intuition of an infinity category.
Starting point is 01:23:14 So here's my thesis statement. So if the foundations of mathematics had some sort of higher structure, and to be precise, were something more like something called homotopy type theory, which I have not introduced but will, so if that were sort of the background, implicit understanding of mathematics undergraduates. You know, mathematics undergraduates today
Starting point is 01:23:38 don't necessarily learn the precise rules of set theory, but they know how to work in set theory. They know how to write proofs that are correct in set theory, what constructions are allowed, what sort of arguments are allowed. If the same were true, but with a more sophisticated foundation system with something like homotopy type theory, then we could teach infinity category theory to undergraduates, you know, much like we teach something like abstract algebra to undergraduates today. And so that's actually the plan for the rest of the talk. So I've explained in part what ordinary category theory is for and what infinity category theory is for. What I'd like to do next is give some sort of informal introduction to homotopy type
Starting point is 01:24:19 theory and then illustrate that with that as the background foundations, infinity category theory is actually not really more complicated than ordinary one category theory is today. And finally, additional advantage of having a more sophisticated foundation is that it's actually also possible to be so rigorous with your proofs that you're not just explaining them to undergraduates, but you're explaining them to a computer. And what I'm referring to specifically is something called a computer proof assistant where you can, uh, type in a very sort of precise sort of syntactic encoding of sort of definitions and theorem statements and proofs, and then ask the computer to check the logic in the proofs. And with this new foundation that has more of the higher structure sort of built into the basic objects, it's actually possible not just to teach infinity category theory to undergraduates, but to teach it to a computer,
Starting point is 01:25:29 to, in other words, produce a computer, a formally verified version of theorems in infinity category theory. After 19 years, they're back. Frankie Munis, Brian Cranston, and the rest of the family reunite in Malcolm in the middle, life's still unfair. After 10 years avoiding them how in lowest demand Malcolm be at their anniversary party, pulling him straight back into their chaos. Malcolm in the middle, life's still unfair. A special four-part event. streaming April 10th on Hulu on Disney Plus.
Starting point is 01:26:03 Okay, great. So to summarize: there's a concept called infinity category, and it's difficult to convey in the language that undergrads are already familiar with, namely set theory. And if you were to convey it to them, it would take them all the way until graduate school. But there's another language, or another foundation, called homotopy type theory,
Starting point is 01:26:21 and if you were to know that, then you could express the concept of infinity category much more quickly, akin to how some words are in Mandarin it may take a paragraph to explain in English but if you're in Mandarin it's a single word. Something like that? Yeah, something like that.
Starting point is 01:26:39 That's right. If you have the right sort of background, foundation, intuition, language, you know, more powerful language that, you know, allows you to sort of more easily manipulate these kind of moduli spaces, this homotopical picture of the world that Manin was talking about,
Starting point is 01:26:55 then yes, you can at the same time be sort of rigorous, you know, rigorous to the extent that a computer would accept the proof and say you have not skipped any steps, but intuitive in the sense that we're reasoning from sort of simple definitions, you know, that an undergraduate could understand. Great. Okay. So let's get into homotopy type theory.
Starting point is 01:27:20 So what I'm going to try to do is give a crash course in a completely new foundation system for mathematics. You know, but we don't need to understand everything perfectly. That's sort of a question for the experts, to really understand how the foundations of mathematics work. What we're hoping to achieve is some sort of comfort in, you know, making mathematical statements and then proving them. Okay, so what is this homotopy type theory? So on the one hand, it's a formal system for mathematical constructions and proofs, where the basic objects are not sets anymore, but something called types. And we have some intuition for them, which is we might think of a type as being a space, or maybe more precisely an infinity groupoid. So it's like a homotopy type.
Starting point is 01:28:08 And all constructions in this system are then automatically continuous or equivalence-invariant. So this is a sort of desideratum in what's sometimes called derived or homotopy-coherent mathematics that is present in the system. So in one sense, homotopy type theory is taking an existing formal system called type theory, or specifically the dependent type theory of Martin-Löf, which was developed in the 1970s as a foundation for constructive mathematics, and giving it a homotopical interpretation, sort of a topological interpretation. But on the other hand, it's a theory of homotopy types. So you can see why this would be so useful for infinity category theory, because infinity categories are built out of infinity groupoids, aka homotopy types. And so if those were the
Starting point is 01:28:56 primitive notions in a formal system, then they're just kind of right there for using for definitions. And that's what we'll see. So type theory takes some getting used to. If you're coming with a computer science background, then you might be used to this already, and that's great. If you're coming from a mathematics background, this is going to be a little bit weird. So type and going forward everything that's written in red will be a type is used to describe a mathematical structure, but it's also used to encode a mathematical assertion. So when I'm stating a theorem, I will do so by constructing a type. And in the case of the structure, then when I have a term of that type, and going forward, terms will always be an orange. It's going to give like a particular
Starting point is 01:29:42 instance of the structure. But in the case where the type is encoding a statement of a theorem, A term is a construction of a proof of that theorem. So this is something that's sometimes called propositions as types. This is also referred to by the Curry-Howard isomorphism or Curried Howard correspondence. And so we're sort of unifying construction and logic, sort of construction and proof. And that's, to some extent, a good intuition, but it's also a little bit misleading because one of the things that we'll see about homotopy type theory is that these types can have higher dimensional structure, non-trivial higher-dimensional structure, just like spaces can have non-trivial higher-dimensional structure. So I'm thinking of things like loops and spheres and
Starting point is 01:30:25 things like that, higher homotopy. Okay. Okay, so let's introduce this formal system. So homotopy type theory, like dependent type theory, has some primitive notions. So these are the sort of basic core notions. So firstly, we have types. So for example, there's a type of natural numbers, there's a type of real numbers, there's a type of groups. These are all types of types. And then types have terms. So every term belongs to a specific type. This is why this is a useful language to use to communicate with a computer because it can kind of help the computer stay organized because the terms belong to a specific type. So for example, 17 is a term of type natural numbers. It's a natural number. There's a different 17 that's a term of the type real
Starting point is 01:31:12 numbers. So every term belongs to only one type. Squared of two is another real number, of course. The Kleinfor group is a term of type group. So that's, you know, an instance of that kind of structure. And then what it also has dependent terms, or are dependent types or type families. So to define a type, so a type can be allowed to depend on variable terms of previously defined types. What I'm showing now is the syntax. for that. So here, B is a family of types depending on a term X of type A, or C is a family of types depending on a term X of type A and also a term Y on type B, and this B might now depend on A. So that's dependent types. So for example, we have RN, Euclidean space, for any N a natural number.
Starting point is 01:32:06 So that's a natural example of a dependent type. Given any natural number n, I can form the type R^n, the n-dimensional Euclidean space. Given any natural number n, I can also form the type of fields of characteristic n. I chose this example because there are no fields of characteristic four, for instance; these only exist in the case of primes. But nevertheless, we can reason about the, in this case, empty type of fields of characteristic four. Also, if I have an arbitrary group G and an arbitrary field K, we can talk about representations of G over the field K. So that's an example of a dependent type family. And then finally, we have dependent terms. And the way I should interpret one of these dependent terms is really as a function. So if I have a family of types B depending on a type A, what a dependent term is, is a function that takes an arbitrary term X of type A and returns a term B of X of type B of X. So for example, in the family of Euclidean spaces,
Starting point is 01:33:20 so that's the family of types, RN dependent on a natural number N, I have a dependent term which picks out the zero vector in each Euclidean space. So for each natural number N, I can pick out the zero vector in RN. That's an example of one of these dependent terms. Or for each natural number n, I can define the ring Z mod N, which is a ring of characteristic N. I can't define a dependent term that inhabits this family of fields of characteristic N for the reason I mentioned before. So there are no fields of characteristic four or characteristic anything that's not a prime. But if I go to rings of characteristic N, now this is an example, it's a function that takes an N and produces a ring of characteristic N, namely Z mod N.
Starting point is 01:34:05 And similarly, given any group and any field, I can define the regular representation. So that's an example of a representation of that group over that field. So those are some examples of dependent terms. So this is the stuff that's present in dependent type theory, and in homotopy type theory. And then we also have type constructors that build new types and terms from given ones. So for instance, given some types A and B, I can form their product type or their coproduct type or their function type. There's something called identity types that we'll discuss. And then there are more complicated type-forming operations that take as input a dependent type. So given a family of types B over a type A, I can form something called the dependent sum or the dependent product. And whenever I form these new types, there are rules that explain exactly when you can form a new type, what the terms are of these new types, how I can use the terms of these new types
Starting point is 01:35:09 to construct other terms and sort of what happens when I both introduce a term and then eliminate from that term. So these are sort of the kind of high-level structure of dependent type theory and homotopy type theory in particular. So I want to give an example now
Starting point is 01:35:25 of some of these type constructors. So I'm going to explain, give an example of this way of forming new types and what the rules are in those cases. so we can see what that means. But maybe I'll see if there's any questions first. Yeah. In one of your previous talks on homotopy type theory is a foundation, you introduced the term judgments, and I just want you to specify what that would correspond to here. Right. So I'm being a little bit informal here, even though this probably seems like I mean quite formal. So the judgments
Starting point is 01:35:57 are, I mean, it's essentially the types and the terms and the dependent types and the dependent terms, these are, a way I think about a judgment is as, it's like a grammatically correct or a well-formed sentence in type theory. And so it's grammatically correct to say that I have a type or a term and a dependent type and a dependent term. And indeed, all of this can exist in an arbitrary context of variables from previously defined types. I've suppressed that from this slide. And that's what the sort of judgments are. So I'd have to be even. more precise about this if I'm explaining this to a computer proof assistant that's going to implement the rules of this formal system. But since we're sort of among humans right now,
Starting point is 01:36:44 maybe you'll forgive me to be a little bit lax. There was a hilarious comment. I believe it was on either an introduction to homoetopi type theory or an introduction to infinity categories. And someone said, this looks like a computer speaking to another computer. Okay. Well, I'm flattered, I guess, because I think that I'm means I'm being precise. Exactly. Great. Okay, so let's see some of this stuff in action.
Starting point is 01:37:11 So I'm going to introduce product types for you. And they're governed by the following rules. So the formation rule is the one that says when you get to form a new type. And it says that if I have any types capital A and capital B, then there's a new type, which I'll denote by A times B because that's the notation we use in mathematics for the product type. So the introduction rule then tells us what terms we get in this new type, and it says that if I'm given a term little A of type A and a term little B of type B, then I can pair them together and get a term of the product type. And I've
Starting point is 01:37:44 introduced some syntax for the introduction rule that makes it seem familiar. We think of just pairing them together. Now I have an ordered pair, which is a term of the product type. And then the elimination rule says that if I had an arbitrary term P belonging to the product type, then I can, I think of it as like projecting away, get a term of type A and a term of type B. So if I have an ordered pair, I can take its first component or its second component. And then there's these computation rules, which I'm suppressing, but they would say something like if I pair together a little A and little B, and then I project off to the bit in A, I recover the little A, for instance. Okay, so that's one of these type forming operations. So these
Starting point is 01:38:25 are the rules that you teach to a student or teach to a computer. And function types have sort of a similar set of rules. So similarly, if I'm given any types A and B, there's a type of functions from A to B. So here I've written A, arrow, B, all in red. And what that means is that's the name for a type, namely the function type from A to B. How do I get a term of a function type? Well, it turns out if I have a dependent term, so if I have, you know, by iteratively applying the rules of type theory, I can take an arbitrary term X of type A and produce a term B of X of type B. Then from that I get a function. And here I'm using the sort of computer scientist name for that function,
Starting point is 01:39:14 lambda X. For a mathematician, I would say this is the function X maps to B of X. It takes as input X and returns as output B of X. So that's how I produce a function. So if I have a function, I get a function basically. And what is a function? It's something that I can build by iteratively applying the rules of type theory. So somehow when I describe this abstractly, it's sort of hard
Starting point is 01:39:36 to think about, but it's very intuitive. If you can sort of iteratively do stuff and then take an input and get an output, that's a function. Okay, so what's the elimination rule? It says that if I'm given a term of the function type, so F is a term belonging to this function type, and also a term of type A, then, you know, you think of just plugging it in: you get a term of type B, which I'll denote as F of A. And then there's computation rules that relate these lambda abstractions and these evaluations. So, you know, in particular, if I defined F by X maps to B of X, then this F of A should be B of A, for example. Okay.
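The product and function type rules just described can be checked directly in a proof assistant. Here is a sketch in Lean 4 syntax (Lean is the proof assistant mentioned at the end of the episode; this is illustrative, not the talk's own slide notation):

```lean
-- Product type rules: pairing introduces, projection eliminates.
example (A B : Type) (a : A) (b : B) : A × B := (a, b)    -- introduction: pairing
example (A B : Type) (p : A × B) : A := p.1               -- elimination: first projection
example (A B : Type) (p : A × B) : B := p.2               -- elimination: second projection

-- Function type rules: lambda abstraction introduces, application eliminates.
example (A B : Type) (b : A → B) : A → B := fun x => b x  -- λx. b(x)
example (A B : Type) (f : A → B) (a : A) : B := f a       -- evaluation

-- Computation rule: projecting out of a pair recovers the component.
example (A B : Type) (a : A) (b : B) : (a, b).1 = a := rfl
```

Each `example` asks the type checker to verify that the given term inhabits the stated type, which is exactly what the introduction and elimination rules license.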
Starting point is 01:40:19 So those rules are hard to remember. So I've put them again at the top of this slide. And what I want to illustrate now is this idea that I introduced at the outset, which is that to prove a mathematical proposition in dependent type theory and homotopy type theory, what you do is you construct a term in the type that encodes its statement. So here's an example of this. So for any types A and B, what I've written here is a type. I've formed this type by iteratively applying the formation rules for product types and function types. Now I'm going to read the name of this type aloud using this sort of
Starting point is 01:41:00 Curry-Howard correspondence, which I'll show you in just a second. So I can read this type as A and A implies B implies B. And what I'm doing here is I'm recognizing that there's a very close relationship between the rules for the product type and the rules for the logical operation of conjunction or and. So, you know, how do I remember this? If I'm thinking of a type as encoding a proposition, then a term of that type is a proof of that proposition. And so the introduction rule for product types, for instance, says that if I have a proof little A of proposition A and a proof little B of proposition B, then I can pair them together and I prove a proposition, namely A and B. So that's explaining the connection between the product type formation and the logical
Starting point is 01:41:55 notion of and. Similarly, there's a connection between the function types and the logical notion of implies. So this type that I formed here, I can think of as encoding the mathematical proposition A and A implies B implies B, which is one of these tautologies from logic. I can also, though, think about this as the type of functions from the product type of A with the function type from A to B to B. And so what I'm going to define now is an explicit function, so an explicit term of this type, that's both a function, but is also a proof of this tautology that's sometimes called modus ponens. Okay, so thinking about what we're about to do is kind of the complicated bit;
Starting point is 01:42:39 giving a proof of this is actually quite mechanical, and why is that? So my goal is to construct a term of a given type. The rules that I've displayed at the top of the slide tell you exactly what you're allowed to do. So this type is formed by iteratively applying the type forming operations. The outermost type forming operation is the function type, because that's sort of the type that's outside the parentheses. And so really my goal is to produce a term in a function type, and how do we do that? We apply the introduction rule for function types, which says that it suffices to assume that I have an arbitrary term P in the domain, so a term P that belongs to the product type A and A implies B, and now my goal is to define a term of type B,
Starting point is 01:43:28 because that's how the introduction rule for function types works. But now that I have a term of a product type, I can apply the product elimination rule, and that gives me a term of type A and a term of the type A implies B. These are the two projections of my term P. But now I have a term of type A and a term in the function type from A to B. And so by the elimination rule for function types, I can apply the function to the term and I get a term of type B. And what I've done now is I've taken my arbitrary term P in the product type and produced a term of type B. So that gives us a function. So I know this is kind of quick if you're seeing this for the first time, but I hope it also illustrates that it's really relatively mechanical. You can imagine, you know, an undergraduate who's just learning to write a
Starting point is 01:44:15 proof for the very first time, being reassured by the fact that there are these sort of mechanical rules that one just has to put together in the right order, and that will give the proof. And moreover, if you're writing this proof interactively with a computer proof assistant, which is, you know, something I've alluded to, the proof assistant can check right away whether your proof is correct. So what I've done here is I've said, you know, how do I define this? So on the one hand, I've given a constructive proof of this logical tautology modus ponens. But I've also constructed an explicit function, and this function has a name. It's the evaluation function. So this is the function that would take a term of type A and a function from A to B and return
Starting point is 01:44:57 that function applied to that term. That's exactly what we did in this proof. And so I'm unifying, again, logic and construction. And this is very typical of what it is to do mathematics in dependent type theory. You mean unifying proof and construction or logic and construction? I guess proof and construction. Yeah, that's right. Yeah. Great. So I mentioned this Curry-Howard correspondence in the setting of homotopy type theory. Let's call it the homotopy type theoretic Rosetta stone. And this can be made more precise if we were to discuss the semantics of this homotopy type theory, which I don't really want to do today. Instead, let's think of this as sort of a table of intuitions. So on the left, we have the primitives of type theory. We have the types and the terms.
Starting point is 01:45:42 If I'm interpreting them logically, I can think of a type as encoding a mathematical statement or a mathematical proposition and a term as encoding a proof of that proposition. If I'm basing my intuition on set theory, then types are roughly analogous to sets and terms are roughly analogous to elements of those sets. But a better intuition will be in homotopy theory where I think of a type as being a space and a term as a point in that space. And then we have various type-forming operations. There's something called the empty type. There's something called the singleton type.
Starting point is 01:46:14 Those correspond logically to the false and true propositions or set theoretically to the empty set or the set containing a single element. Homotopy theoretically, to the empty space or the space containing one point. And then we have these operations of product types, co-product types, function types that logically correspond to and, or, and implies. Set theoretically, they correspond to taking products of sets or disjoint unions of sets or sets of functions, and topologically correspond to the product space, the co-product space, or the function space constructions.
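The modus ponens term constructed above exhibits this Rosetta stone directly: the same term can be read on the logical side or on the construction side. A sketch in Lean 4 syntax (illustrative, not the talk's notation):

```lean
-- Logical reading: a proof of "A and (A implies B) implies B" (modus ponens).
def modusPonens (A B : Prop) : A ∧ (A → B) → B :=
  fun p => p.2 p.1   -- eliminate the pair, then apply the function to the term

-- Type-level reading of the exact same term: the evaluation function.
def eval (A B : Type) : A × (A → B) → B :=
  fun p => p.2 p.1
```

The body `fun p => p.2 p.1` is the mechanical proof from the talk: introduce an arbitrary `p`, project out its two components, and apply one to the other.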
Starting point is 01:46:47 Right. And then we can add in the sort of dependent layers. So we had these dependent types or type families, which logically correspond to something called a predicate. So an example of a predicate on the type of natural numbers N is the predicate N is prime. So for any particular value of N, that's either true or false. So zero is prime is false. One is prime is false. Two is prime is true. Three is prime is true and so on and so forth. So that's a predicate on the natural numbers. In set theory, this corresponds to something called a family of sets. And in homotopy theory, this corresponds to the notion of a fibration. A fibration, very roughly speaking, is like a continuous family of spaces parameterized over a base space. The base space here is the space A.
Starting point is 01:47:40 The spaces B of X are the fibers of the fibration, the sort of spaces in the family. So this dependent term then is a conditional proof, or a family of elements, or something called a section of the fibration. It's picking out a point in the fiber over each point in the base. That's something called a section.
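Type families and sections can also be written down concretely. A sketch in Lean 4 syntax, with an illustrative (hypothetical) choice of family:

```lean
-- A type family (dependent type) over Nat: for each n, a type B n.
-- Here the fiber over n is the type of n-tuples of naturals.
def B (n : Nat) : Type := Fin n → Nat

-- A predicate is a family of propositions, one for each n:
def IsEven (n : Nat) : Prop := n % 2 = 0

-- A dependent term, i.e. a section: a point of the fiber B n over every n
-- (here, the constant tuple of zeros).
def zeros : (n : Nat) → B n := fun _ _ => 0
```

The type of `zeros` is a dependent product: the codomain `B n` varies with the input `n`, which is exactly the "section of a fibration" picture.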
Starting point is 01:48:03 And then the two more complicated type forming operations that I mentioned at the beginning but haven't revisited, these are the dependent products and dependent sums, logically correspond to the quantifiers. So when we move from sort of sentential logic to predicate logic, we add the universal and existential quantifiers over a predicate. So that sort of turns a predicate into a sentence by kind of binding up the variables. In set theory, these are operations on a family of sets. You can form the product of a family of sets or the disjoint union of a family of sets. And in topology, this sort of product is taking the space of sections of a fibration, whereas the sum is taking what's called the total space. So if I think of having a continuous family of spaces and I glue them all together, that's sometimes called the total space of a fibration, which you think of as sort of sitting over the base
Starting point is 01:49:03 space where the inverse images of some points in the base space are these fibers that I was describing previously. And a section is a section of that map. That's why it's called a section. And then the final ingredient here is something called the identity types, which I'll talk about on the next slide. But one way to motivate them is this is sort of what's missing from the logic column. So if I'm explaining first order logic, the other thing I need to add is a discussion of equality, the equality relation. And another thing that types have are identity types in which these proofs of equality are going to be interpreted as paths from X to Y in the space. And this is really what's sort of fundamental from the homotopy interpretation. So until, you know, this
Starting point is 01:49:55 discussion of identity types, the sort of homotopy theoretic interpretation is kind of evocative, but, you know, maybe not so different from the set theoretic interpretation. But here's kind of a big point of distinction. So in set theory, if I have a term P in the identity type that's witnessing that X is equal to Y, where X and Y are supposed to be terms of type A, in set theory, I interpret that as a proof that X and Y are literally the same term, that they're literally equal. But in homotopy theory, I'm interpreting it more loosely as a path sort of through the
Starting point is 01:50:31 moduli space in which X and Y are points. You know, so it's a non-trivial relation between X and Y. It's saying sort of up to some sort of continuous deformation X and Y are the same, but they're not literally equal. And that's a very important flexibility, important and interesting flexibility. Okay. So let me say more about identity types, which are sort of the richest part of this story. And, you know, the rules of identity types in homotopy type theory go all the way back to Martin-Löf, so these were written down, you know, over 50 years ago. And here they are. So the formation rule says that if I'm given any type A and any terms X, Y of type A, there is a type X equals Y. So remember, orange here is a term, A is a type. So this X equals Y is a type. What the formation
Starting point is 01:51:28 rule is saying is that it's grammatically correct to ask the question. It's sort of a valid mathematical question whether X and Y are equal when X and Y are terms of type A. So this is an improvement, I would say, on set theory. In set theory, it's grammatically correct to ask whether the number 17 is equal to the Klein four-group. Because, you know, when we're being really formal about it, everything in set theory is a set. And so it's totally valid to ask whether a number is equal to a group, even though mathematically, of course, that's nonsense. In dependent type theory, there's this formation rule that says it's only meaningful to ask whether X is equal to Y if X and Y are terms of the same type. Otherwise, it doesn't make sense.
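The grammaticality point can be seen in any type-theoretic proof assistant. A sketch in Lean 4 syntax:

```lean
-- Formation: for terms x y of the SAME type A, `x = y` is itself a type.
-- It is grammatical to state an equality that happens to be false:
#check (17 = 42)   -- well-formed (a proposition, provably false)

-- Asking whether terms of *different* types are equal is not even
-- well-formed; the type checker rejects it before truth is at issue:
-- #check (17 = "seventeen")   -- error: type mismatch
```

The second line is deliberately commented out: unlike in set theory, the question cannot even be posed.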
Starting point is 01:52:17 Now, that doesn't say it's true. This is saying that it's a grammatical thing to ask whether X is equal to Y. To prove X is equal to Y, you would have to find a term in the identity type X equals Y. And so that's what the second rule is doing. The introduction rule is saying that X equals X is always true. And in particular, it's providing a canonical proof called reflexivity that X equals X is true. So it's meaningful to ask whether X is equal to Y; it may or may not be true. For X and X, it is always true, and the proof is called reflexivity. Okay, so if we had more time, I would explain that there's a very precise analogy, in fact, between identity types and the type of natural numbers. So both of these are inductive types. And so there's an induction principle for
Starting point is 01:53:07 identity types that's analogous to induction over the natural numbers or maybe more precisely recursion over the natural numbers. We don't have time to get into that, so I'm just going to introduce this out of nowhere. And the induction principle, this is the elimination rule, is something that we're going to call path induction because we think of a term P in the identity type X equals Y as a path. So I'm going to rebrand the elimination rule as path induction, and I'll state it for you using my sort of homotopy type theoretic Rosetta Stone or the Curry-Howard correspondence. So what it says is if I have a family of types P that depends on two terms X and Y of type A and a proof little P that X is equal to Y,
Starting point is 01:53:56 then to prove that P of X, Y, and P is true, in other words, to find a term belonging to that type for any X, any Y, any P, it suffices to do this in the special case where Y is X and P is reflexivity. So in other words, path induction says that if for all X of type A, P of X, X, and refl X is true, then it is the case that for any X and Y of type A and any proof P that X equals Y, P of X, Y, and P is true. So that's something called path induction. And it's very, very fun. So what you should imagine is, you know, imagine you're learning about natural numbers induction for the very first time. You know, maybe this is in your past. Maybe this is in your future.
Starting point is 01:54:51 What you see is some very complicated formal statement called the principle of mathematical induction. And then what you learn sort of over time with experience is how powerful it is. So, you know, there's a lot of sort of universally true predicates on the natural numbers that can be proven using induction. You can also use recursion, the constructive form of induction, to define functions out of the natural numbers. And we're going to see a whole bunch of examples where I use path induction similarly. This is definitely a fun area to sort of get into and try to master yourself. So when I teach homotopy type theory, I have a problem set where the solution to every problem is path induction, just to give an illustration of how powerful this is. So we'll see some examples going forward, but this is, again, one of these things that you need to sort of get your hands dirty to get used to. And I should say there's a computation rule as well that I'm suppressing. So like you might know for recursion over the natural numbers, if you define a function recursively, then you get some sort of equations related to that recursively defined function. Similarly, there's a computation rule here, but it's not really going to appear. So let's sweep that under the rug. Okay. So I mentioned the idea that in homotopy type theory, infinity groupoids are really the primitive concepts, and I want to explain where that idea comes from. So what we've seen is that a type A has terms, little X and little Y. And then I'm allowed to form the identity type, X equals Y. And let's
Starting point is 01:56:25 suppose I had multiple terms in that identity type. If I had a P and a Q in the identity type, then there's a type, which is the identity type between P and Q, which are both terms of the type X equals Y. So we can iterate the identity type construction. And there's a theorem of this form, due to Lumsdaine and to Garner and van den Berg, that the terms belonging to iterated identity types of any type form an infinity groupoid. So in other words, I can think of a type in homotopy type theory as just being an infinity groupoid already, where the terms are the objects of the infinity groupoid. The paths, those are terms of the identity types, are the one-dimensional morphisms. And the paths of paths, those are terms of the iterated identity type, are the two-dimensional morphisms, and we go all the way up. And an infinity groupoid is an infinity category where all of the one-morphisms are also invertible. So it has morphisms in every dimension, but also things like composition and inverses and identities and so on. So the identities are the constant paths or the reflexivity terms.
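The groupoid operations on paths are each a single path induction. In Lean 4 syntax, matching a path against the reflexivity case is exactly the path induction principle stated above (a sketch; these mirror Lean's built-in `Eq.symm` and `Eq.trans`):

```lean
-- Reversal (symmetry) of a path, by path induction:
def reverse {A : Type} {x y : A} : x = y → y = x
  | rfl => rfl

-- Concatenation (transitivity) of paths, by path induction:
def concat {A : Type} {x y z : A} : x = y → y = z → x = z
  | rfl, rfl => rfl
```

In the `rfl` case both endpoints coincide, so the goal is again proved by reflexivity; path induction then extends this to arbitrary paths.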
Starting point is 01:57:35 The inverses are defined by something that's called reversal or maybe symmetry if we're thinking logically. So a thing you can prove by path induction is that if you have a path from X to Y, then you get a path from Y to X. Concatenation is also something you define using path induction. This could be thought of as transitivity of equality. If you have a path from X to Y and a path from Y to Z, then you get a path from X to Z. And moreover, there's associativity and unitality and higher coherences and sort of all the way up, all of which is proven from the path induction principle. So it's giving this sort of infinity groupoid structure. And remember how we define the fundamental infinity groupoid inside a topological space. It's exactly the same
Starting point is 01:58:15 construction. Sort of under the homotopy interpretation, the terms are like the points of the space, the paths are like the paths of the space, the paths of paths are like the homotopies in the space, and just like that assembles into an infinity groupoid, we can think of a type in homotopy type theory as defining an infinity groupoid. So the whole sort of infinity groupoid structure of a type is just there already. And now we can leverage this sort of homotopy theoretical perspective on type theory to give some new definitions in type theory. So when I have this interpretation, it's providing new intuition on the formal system, homotopy type theory. And the first example I want to give is the notion of what's called a
Starting point is 01:59:00 contractible type. So let's let A be any type. And I can form the following type using the dependent sum and dependent product and identity type operations. Under my homotopy type theory Rosetta stone, I can give this type a name. This is the type that says there exists some little A in A, so that for all X in A, A is equal to X. And this type, on the one hand, kind of sounds like it's asserting that the type A has a unique term, because if you think about the logical statement that I just said, you know, there exists a term little A of type A, so that for all terms X of type A, A is equal to X. That sounds like the way that you would prove that something exists uniquely.
Starting point is 01:59:48 You know, you find a term and then you show that it's the only term. But under the homotopy interpretation, it's, in fact, asserting that this type is contractible. So if I use the elimination rules to sort of get data from a term of this type that I've just defined, one of the things I would get is a term of type A, which you can think of as the center of contraction. So this is a specified point in my space A. And the other thing that I would sort of project out from a term of this type is a dependent function, so a term in this product type, that for all X in A gives a contracting homotopy or a continuous choice of paths from C to X. So a way to show that a space is contractible is to find a point, the center of contraction, and then to define a homotopy between the constant function at that point and the identity function, which you think of sort of, it's kind of like a reverse big bang.
Starting point is 02:00:42 So you're continuously moving all the other points in your space down to the center of contraction. That's exactly the data that we have here. So under the topological interpretation, this is saying that the space A is contractible. Within the formal system, though, it feels like saying that the type A has a unique term. And this is, in fact,
Starting point is 02:01:02 what's so wonderful about this homotopy interpretation. So these contractible types, those are the types that have a term in this type that says there exists some little A in A so that for all X in A, A is equal to X, they form the bottom level of a hierarchy of types due to Vladimir Voevodsky, one of the pioneers of homotopy type theory.
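The contractibility data can be packaged as a structure. A sketch in Lean 4 syntax (illustrative names; the talk's version uses a dependent sum rather than a named structure):

```lean
-- Contractibility as data: a center of contraction together with a
-- path from the center to every point (the "reverse big bang").
structure IsContr (A : Type) : Type where
  center : A
  contraction : (x : A) → center = x

-- The unit (singleton) type is the prototypical contractible type:
example : IsContr Unit := ⟨(), fun _ => rfl⟩
```

Projecting `center` out of a witness gives the specified point, and `contraction` is the dependent function choosing a path to every other point.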
Starting point is 02:01:26 And so these are sort of giving a filtration on types in terms of their higher structure. So a type is a proposition if, for all terms X and Y, the identity type is a contractible type, and a type is a set if, for all X and Y, its identity type is a proposition. And a type is an N plus one type if, for all X and Y, its identity types are N types. And, okay, so what is this in practice? So the prototypical example of a contractible type is the singleton type or the unit type that has, you know, freely, one term and sort of nothing else.
Starting point is 02:02:13 So that's the idea of a contractible type. You can think of it as just having one point. So contractible types are also propositions, but there's a new proposition, namely the empty type. That's the thing that we think of logically as corresponding to false. So that's a proposition. And then the propositions are examples of sets, but there's other sets. So for instance, the natural numbers is a set or a zero-dimensional type.
Starting point is 02:02:40 Then we can go up to the level of one types. So a prototypical example of a one type is the circle. If we go up to a two type, there's something called complex projective space. So this is something that a homotopy theorist would recognize as a sort of Eilenberg–MacLane space sort of at level two. And then these spheres, the higher dimensional spheres, S2, S3, S4, et cetera, these aren't truncated. So they're not N types for any N. And what that means is they have sort of non-trivial higher dimensional structure. There is not a level where the iterated identity types are all just contractible.
Starting point is 02:03:20 They can have interesting non-trivial structure all the way up. And that's why the homotopy groups of spheres are so complicated. So a way to think about all of this is, you know, I think this gives some perspective on the relationship between sort of traditional foundations and homotopy type theory as foundations. So traditional foundations are about the propositions and about the sets. And those are contained within homotopy type theory as the layer of minus one types and the layer of zero types. Those are where the propositions and the sets live. But in homotopy type theory, as primitives, we also have all of these higher dimensional types
Starting point is 02:03:57 and we'll see that when we transition to infinity category theory very soon. Great. Okay. So I want to give two, I think, more definitions in homotopy type theory. So the next thing I want to mention, so this is another example where we're taking a definition from homotopy theory and importing it into type theory, which is the notion of an equivalence between types. So this is modeled after the definition of a homotopy equivalence between spaces.
Starting point is 02:04:37 So if A and B are types, an equivalence between them is defined as follows. So the type of equivalences is the following type, which I'll read here using the homotopy type theoretic Rosetta Stone. So for there to be an equivalence between A and B, firstly, there needs to exist a function F from A to B, and then there also needs to exist a function G from B to A, and then G is going to be the left inverse of F up to homotopy, which means that for all little A of type A, G of F of A is equal to A. In other words, there's a path or homotopy from G composed F back to the identity.
Starting point is 02:05:26 And also there exists a function H from B to A, so that H is a right inverse up to homotopy, meaning for all B of type B, there is a path from F of H of B back to B, or F of H of B is equal to B. There's a homotopy from that composite to the identity. Okay, so from the data of an equivalence, if I had a term in this type that I've written all in red, what I would get is a function F from A to B, a pair of functions G and H that both go from B to A, and then some homotopies that relate the composites G composed F and F composed H to the identity functions. And using this data, I can show that G and H are equal, or at least up to homotopy. There's a homotopy between G and H.
Starting point is 02:06:12 So this is the sort of definition of an equivalence that gets used in homotopy type theory. It's not quite the definition that you would expect. So the definition that's sort of closer to the definition that gets used in algebraic topology is the following. You know, to say that F is an equivalence means, well, just that there's a single function G going in the other direction that's both a left and a right inverse up to homotopy, so that G composed F is the identity and F composed G is the identity. This is the definition that you would expect. And these two definitions are what's called logically equivalent, meaning that if you have a
Starting point is 02:06:50 term of the first type, then you get a term of the second type. And if you have a term of the second type, you get a term of the first type. So that's the notion of logical equivalence. But this second type is not a proposition. It might have non-trivial higher dimensional structure. So there's sort of an additional test for definitions in homotopy type theory, something additional that you pay attention to that maybe you didn't pay attention to in set theory. So what it means for this first type to be a proposition is that if I had two different constructions of an equivalence between A and B, then those are equal. So it's sort of a yes or no question whether A and B are equivalent via the first definition. But in the second definition,
Starting point is 02:07:43 I could have two different proofs, say, that F is an equivalence. Sorry, I misspoke slightly. The thing that's a proposition is the type that asserts that a given function F is an equivalence. So in the first case, any two proofs that a given function F is an equivalence are equal up to homotopy. In the second case, that's not the case. Using the data of the second definition, I can have two different inverses to F that are not equal. Right. And so this type is less well behaved because it has non-trivial, or potentially has non-trivial, higher structure, depending on what the types A and B are. So this is an interesting feature of homotopy type theory: it sort of asks you to be a little bit more rigorous with your definitions, I guess, or sort of more disciplined with your definitions, thinking about the data that you're actually asking for and whether it's overdetermined or determined at just the right level.
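The two competing definitions of equivalence can be written side by side. A sketch in Lean 4 syntax (hypothetical structure names; in Lean both types collapse to mere data, whereas in homotopy type theory only the first is a proposition):

```lean
-- The definition used in homotopy type theory: separate left and right
-- inverses, each witnessed up to a homotopy (here, a propositional equality).
structure IsEquiv {A B : Type} (f : A → B) : Type where
  g    : B → A
  h    : B → A
  linv : ∀ a, g (f a) = a   -- g is a left inverse
  rinv : ∀ b, f (h b) = b   -- h is a right inverse

-- The "expected" definition: one two-sided inverse. Logically equivalent
-- to IsEquiv, but in homotopy type theory it can carry extra higher structure.
structure IsQuasiInv {A B : Type} (f : A → B) : Type where
  g    : B → A
  linv : ∀ a, g (f a) = a
  rinv : ∀ b, f (g b) = b

-- The identity function satisfies the first definition:
example : IsEquiv (id : Nat → Nat) := ⟨id, id, fun _ => rfl, fun _ => rfl⟩
```

The subtle point from the talk is invisible to the type checker here but crucial in homotopy type theory: asking for one function `g` to play both inverse roles overdetermines the data.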
Starting point is 02:08:47 And then the final thing I'll say in homotopy type theory is, I'll briefly mention the univalence axiom. This is the other main contribution of Vladimir Voevodsky, and I'll give the statement. So another feature present in dependent type theory is a universe of all types. So really, it's a hierarchy of universes because of sort of size issues. By Russell's paradox, you know, really you have a sort of an infinite hierarchy of universes where the things in the lower universes belong in the higher universes and so on and so forth. But let's ignore that. So what this allows you to do is regard types as terms belonging to an appropriately
Starting point is 02:09:25 sized universe. And this gives us two different notions of sameness between types. On the one hand, we just defined the notion of equivalence. So we can say that types A and B are equivalent if I have a term in this type of equivalences. But you might also think that A and B are related in some way if there's a path from A to B in the universe of types. In other words, if I have a term in the identity type A equals B. And what Voevodsky, well, so firstly there's a map between these types. So by path induction, there's a canonical way to turn a path from A to B into an equivalence between A and B. What Voevodsky observed is that it's consistent to add the following axiom to dependent type theory. And this axiom really is sort of what makes homotopy type theory, homotopy type theory. And the axiom is that this particular explicit function is, in fact, an equivalence. So not only can I turn paths in the universe of types into equivalences between types, if I had an equivalence between types that defines for me a path in the universe.
Starting point is 02:10:32 The slogan is that identity is equivalent to equivalence, and this has a whole bunch of wonderful consequences that we don't have time to get into here. For instance, one consequence is that it characterizes the identity types of a whole bunch of other types. The identity type in the type of groups, for instance, is equivalent to the type of group isomorphisms, which is exactly what you would want it to be. It follows that the type of groups is an example of one of these 1-types in the hierarchy of types that I mentioned. Other consequences have to do with the equivalence-invariance of all of the constructions in homotopy type theory, which is something I mentioned. This is really a very
Starting point is 02:11:14 powerful axiom that's very natural from the homotopical point of view, and totally, totally wrong from the set-theoretic point of view. So this axiom is inconsistent with the set-theoretic interpretation; it requires the interpretation of types as homotopy types, not necessarily as sets. Okay. So that's a crash course in homotopy type theory. Great. Now, Vladimir introduced this, as you mentioned, because of its power. But is there something that it excludes that was important? That is, do you think, well, sure, it's powerful, but we're missing this important piece? Yeah, I mean, I guess my point of view on foundations is that we don't need one foundation
Starting point is 02:11:57 for all of mathematics. So this axiom is really forcing the rest of dependent type theory to be interpreted homotopy-theoretically. It's enforcing this perspective that not all types are sets; types are allowed to have non-trivial higher-dimensional structure. But dependent type theory without univalence is still a very powerful and interesting formal system. In fact, the computer proof assistant Lean, which is used by a lot of mathematicians, is based on dependent type theory and has everything that I've mentioned today about homotopy type theory, with the exception of univalence.
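As a small illustration of that shared core, path induction, the J rule used throughout homotopy type theory, is available in Lean as the eliminator for `Eq`. Here is a hand-rolled version (the names `pathInd` and `transport` are mine):

```lean
universe u v

-- Path induction (the J rule): to construct C y p for every path
-- p : x = y, it suffices to handle the reflexivity case C x rfl.
def pathInd {A : Type u} {x : A} (C : (y : A) → x = y → Sort v)
    (c : C x rfl) (y : A) (p : x = y) : C y p := by
  cases p
  exact c

-- For example, transport along a path between types:
def transport {A B : Type u} (p : A = B) (a : A) : B :=
  pathInd (fun Y _ => Y) a B p
```

This is exactly the "canonical map" machinery used for univalence above, living happily in ordinary dependent type theory.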
Starting point is 02:12:46 So univalence is false in Lean. But these other intuitions about thinking homotopy-theoretically about types in Lean, I still find quite useful, even though, formally speaking, the type theory in Lean is not homotopy type theory, but just ordinary dependent type theory. Okay. Let's do some infinity category theory. Okay. So the perspective I want to take now is that this is all sort of familiar and kind of intuitive,
Starting point is 02:13:17 of, you know, maybe not familiar formally, because we haven't studied formal type theory or anything like that, but we're fairly used to seeing mathematical statements written in the language of homotopy type theory and giving proofs in the language of homotopy type theory. And I want to remind you of the thesis statement, which is that if those were the background foundations of mathematics, something more like homotopy type theory instead of set theory and logic, then we can try to teach infinity category theory to undergraduates. So everything that I'm going to say now can be made rigorous in a particular extension of homotopy type theory called simplicial type theory. This is something that was introduced in a paper I wrote with Mike Shulman. It extends homotopy type theory by adding something called simplicial shapes and extension types. And I will briefly, on the next slide, describe this formal system, simplicial type theory. I'm not going to describe it super formally, because we don't have time for that. And also, if you were an undergraduate learning infinity category theory, you probably would be introduced to the system informally rather than formally. So this will be more accurate to that experience. You wouldn't need to understand the full details of this formal framework. That's something you might get into if you go into logic or foundations in grad school. But you just need to learn how to construct definitions and proofs.
Starting point is 02:14:49 So this next slide, where I describe this formal system, simplicial type theory, is going to be unsatisfying. But I'm hoping that, through the charisma of the instructor, you'll take some things for granted, and we'll try to get into doing mathematics in the formal system without stressing so much about exactly how everything works. But there is a paper, and I'll include the reference at the very end. Great, and I'll put a link to that in the description as well. Awesome. All right, so what is this simplicial type theory? So we have types, and we also have dependent types, so types can depend on other types. But in simplicial type theory, they can also depend on these things which are called shapes,
Starting point is 02:15:27 which I think of as being polytopes that are cut out of directed cubes by some sort of syntactic formula. And here's a bunch of examples. So the primary example you should think about is the topological n-simplex, which you can think of as living inside an n-dimensional cube. Imagine my n-dimensional cube as given in coordinates T1 through TN, where these TIs are terms in a syntactically defined interval, so numbers between zero and one. If I restrict to the points that satisfy the inequality TN is less than or equal to TN minus 1, which is less than or equal to TN minus 2, which is less than or equal to
Starting point is 02:16:17 et cetera, T1, that carves out a region inside the cube, and it has the shape of an n-simplex, so an n-dimensional triangle. And that's the prototypical example you should think about as a shape. What this syntactic language allows you to do is carve out these simplicial shapes, so the simplices in all dimensions, but also different sub-shapes. So let's look at the shape inclusion below. The 2-simplex, again, is carved out of the directed square by the inequality T2 is less than or equal to T1. If I add further side conditions, like, for instance, the condition that says either T2 is equal to zero, or T2 is equal to T1, or T1 is equal to one, now I'm in this 2-simplex, but also on the boundary. And so this is how I describe the boundary

Starting point is 02:17:08 inside the 2-simplex. And similarly, I can define a horn in the boundary. So this is given by two of the edges, the edge T2 equals zero and the edge T1 equals one, but not the diagonal edge, T2 equals T1. So this language allows you to introduce these simplicial shapes and their sub-shapes. That's what we need to take away from this slide.
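As a toy rendering of these formulas (my own encoding; the real system uses a strict interval object and a tope logic rather than Prop-valued predicates), the 2-simplex, its boundary, and the (2,1)-horn can be carved out of a directed square like this:

```lean
-- Points of the directed square are pairs (t₁, t₂) drawn from an ordered
-- interval I with designated endpoints 0 and 1.
variable (I : Type) [LE I] [OfNat I 0] [OfNat I 1]

-- The 2-simplex Δ²: the region below the diagonal, t₂ ≤ t₁.
def twoSimplex (t₁ t₂ : I) : Prop := t₂ ≤ t₁

-- Its boundary ∂Δ²: inside Δ², and on one of the three edges.
def boundary (t₁ t₂ : I) : Prop :=
  t₂ ≤ t₁ ∧ (t₂ = (0 : I) ∨ t₂ = t₁ ∨ t₁ = (1 : I))

-- The (2,1)-horn Λ²₁: just the two outer edges, without the diagonal.
def horn21 (t₁ t₂ : I) : Prop :=
  t₂ = (0 : I) ∨ t₁ = (1 : I)
```

Note how the horn is literally a disjunction of two edge conditions, exactly as described on the slide.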
Starting point is 02:17:33 So these are the shapes, and types and terms can depend on these shapes as well as on other types. And then there's one new type-forming operation in simplicial type theory, which is something called extension types. What it says is that if I'm given an inclusion of shapes, so like a simplex and some sort of sub-complex of a simplex, and also a type, and also a function from the smaller shape into the type, then I get to form a new type, which I've denoted here in a sort of cartoon way; we use different syntax in the paper. And the terms of this extension type are then functions from the larger shape into A that restrict to the given function on the sub-shape. So that's the sense in which it's an extension.
Starting point is 02:18:22 This F is defined on the larger shape, psi. But if I restrict to the smaller shape, phi, then F is equal to the original function, little A. So it's an extension of that function to the larger shape. And there's a whole bunch of things you want to know about these extension types. Like, for instance, if I extend from one shape to another shape and then extend further to a third shape, it turns out that's equivalent to extending straight from the smallest shape to the largest shape. And all of these things are actually theorems that you can prove in the system. So we don't have to introduce a lot of axioms about these extension types,
Starting point is 02:18:54 so we can just prove the equivalences that we need. Okay, so that's the introduction to simplicial type theory. Let's now do some infinity category theory. So when we talk about categorical structure, an infinity category will have an infinity groupoid of objects. Those will, in fact, just be the terms of the type. But it also needs homs, so it needs an infinity groupoid of arrows, and those are defined using one of these extension types. So A here is a type that we're going to make into an infinity category momentarily. And in this framework,

Starting point is 02:19:34 it has a family of hom types depending on two terms of type A. So if I have a little X and a little Y in type A, then I can use that to define a function from the boundary of the 1-simplex into A. The 1-simplex is really just a directed arrow, and its boundary is given by two different points. And so if I map the first point to X and the second point to Y, and then extend from there to the 1-simplex, a term in this extension type is an arrow that starts exactly at X and ends exactly at Y. And so this hom type, one of these extension types, is the mapping space or mapping infinity groupoid in A from X to Y. Now, we're also working in a homotopy type theoretic framework, so a type also has a family of identity types, or path spaces. And I'm going to connect them to the hom types in a moment. But right now, we think of a type as having two different families: the type of arrows and the type of paths. At the moment, they're unrelated, but they won't be for long. Okay. So what makes a type into something that I'm going to call a pre-infinity category is the following definition. I'll say that a type A is a pre-infinity category if every composable pair of arrows has a unique composite. What do I mean by that? If I have three terms, X, Y, and Z, and then I have an arrow from X to Y and an arrow from Y to Z, the source and target match up, and that's what makes this pair composable.
Starting point is 02:21:12 Then it has a unique composite. So if I just read this definition like that, every pair of composable arrows has a unique composite, that seems exactly like one of the axioms in ordinary 1-category theory: if I have an arrow from X to Y and an arrow from Y to Z, there's a unique composite arrow from X to Z. I mentioned that this framework is going to allow infinity category theory to feel a lot more like ordinary 1-category theory, and this is the first illustration of that. So any composable pair of arrows has a unique composite. But remember, the way that we say a type has a unique term in homotopy type theory is to say that that type is contractible.
Starting point is 02:21:56 And so what feels like I'm saying that something or other exists uniquely is really, in the homotopy interpretation, asserting that the moduli space of all composites of F and G in A is a contractible space. And that is in fact what is true for infinity categories. So I get to pretend, or feel like, I have a unique composite, but semantically, when I interpret this back in traditional foundations, what I'm really asserting is that I have a contractible space of composites. And that's good, because that's the notion that I'm trying to define: a category with composition uniquely defined up to a contractible space is exactly what's being asserted here. And what that says precisely is about this extension type. So the 2-1 horn is a sub-shape of the 2-simplex given by one boundary edge followed by the other boundary edge, two directed arrows. And so if I map via the first one into A using the arrow F, and I map via the second one into A using the arrow G, then that's a map from the 2-1 horn into A. When I extend that to the 2-simplex, what I'm getting now is a new arrow, namely the diagonal of the 2-simplex, together with a higher-dimensional composition witness. So when this condition is satisfied, when I have a contractible type, I get to project out a center of contraction.
Starting point is 02:23:26 And that's giving me, in particular, this map from the 2-simplex into A. I think of that as being the composition witness. And if I restrict to the remaining edge, now I'm going to get an arrow from X to Z in A, and I'm going to denote that by G composed F; that's how you define the composition function. So in this formal system, I have a composition function, just like I did for ordinary 1-categories, and also this composition is uniquely defined in the sense of homotopy type theory, meaning up to a term of an identity type. So this is great. So the first axiom of an infinity category is that it's a type in which every pair of composable arrows has a unique composite.
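A compressed Lean-style caricature of this definition, with names of my own choosing. Here the hom family is an abstract field, whereas the real definition carves homs and composition witnesses out of extension types; the point is only the shape of the axiom and how composition is extracted from contractibility.

```lean
-- Contractibility, as in homotopy type theory: a center of contraction
-- plus a contracting homotopy.
structure IsContr (A : Type) where
  center   : A
  contract : ∀ a, center = a

-- Caricature of a pre-infinity category: objects, a family of arrows, a
-- type of composition data (a composite arrow together with its
-- 2-dimensional witness), and the axiom that this type is contractible.
structure PreInftyCat where
  Obj      : Type
  Hom      : Obj → Obj → Type
  CompData : ∀ {x y z : Obj}, Hom x y → Hom y z → Type
  uniq     : ∀ {x y z : Obj} (f : Hom x y) (g : Hom y z),
               IsContr (CompData f g)

-- The composition "function" is obtained by projecting out the center
-- of contraction, just as described above.
def PreInftyCat.comp (A : PreInftyCat) {x y z : A.Obj}
    (f : A.Hom x y) (g : A.Hom y z) : A.CompData f g :=
  (A.uniq f g).center
```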
Starting point is 02:24:12 Okay. So what else do I need to get ordinary categories? So what we're seeing is that the definition of infinity category and the definition of ordinary category are a lot closer, certainly, than they were when I tried to describe these in set theory. So one thing ordinary categories have is identities; in ordinary category theory, that's an axiom. Here, that's actually a lemma. So I can prove in this framework that any term has an identity arrow. For any X, what I'm looking for is a canonical arrow from X to X. That will be the identity arrow, and I can define it just as the constant function.
Starting point is 02:24:52 So the way I construct a term of one of these extension types is I need a function that takes an arbitrary term in this shape, Delta 1, and returns a term of type A, and I'm just going to use the constant function: always send it to X. I'm going to call that the identity arrow. And when my type A is a pre-infinity category, I can then prove that F is equal to F composed with the identity at X, and that F is equal to the

Starting point is 02:25:34 identity at Y composed with F. So these are the unit axioms for identity arrows, and I can prove them because, essentially, I can construct a map from a 2-simplex into A that has the correct boundary conditions. Essentially, I take my 2-simplex, which is embedded into a two-dimensional square, and project away one of the dimensions, either the first dimension or the second dimension, depending on which identity axiom I'm looking for. That defines a function so that on one boundary edge of the 2-simplex I have an identity arrow, and on the other two boundary edges I have the given arrow, F. And that then is a witness to the axiom that F is equal to F composed with the identity at X, or that F is equal to the identity at Y composed with F. So those aren't actually parts of the definition, because they follow from this definition of being a pre-infinity category, which is the thing that says that binary composition exists uniquely.
Starting point is 02:26:22 And similarly, associativity is actually a theorem as well. It's something that you can prove; I don't need to assert it as an axiom. And how do we prove it? So the statement is that if I have four terms, X, Y, Z, W, and then composable arrows F, G, H, then using the binary composition operation that I defined in a pre-infinity category, I can get an arrow H composed with G-composed-F, that's an arrow from X to W, and I can also get an arrow H-composed-G composed with F, that's also an arrow from X to W.
Starting point is 02:26:57 These are two terms belonging to a common type, so I have an identity type between them, and I can construct a term of that identity type. And how do we do this? So this proof uses a theorem, which is that if A is a pre-infinity category, then the type of functions from the 1-simplex into A is also a pre-infinity category. And in particular, that type has unique composition. So what I've drawn here are some composable arrows in the pre-infinity category of arrows in A. One is built out of F and G and their composition witness.
Starting point is 02:27:35 So you should think of this as a two-dimensional diagram where all of those triangles are filled in by these specified 2-simplices, and then the second arrow is built from H and G and their composition witness. So when I compose these arrows in this pre-infinity category of arrows in A, what I get is now a 2-simplex in the pre-infinity category of arrows in A. Sort of by currying, this is a map from the prism formed by Delta 2 cross Delta 1, a 2-simplex cross a 1-simplex, and that's the shape that you're seeing on the left here. And in particular, I can extract from this prism, this Delta 2 cross Delta 1, a 3-simplex. That's what's highlighted in orange right now. And along the spine of this 3-simplex,

Starting point is 02:28:21 I have my original arrows, F, G, and H. Then along some of the other edges, I have my binary composites, G composed F and H composed G. But I also have this diagonal edge, L, that goes from X to W, and then witnesses, these 2-simplex witnesses, that L is equal to the composite of H-composed-G with F, and also to H composed with G-composed-F. And by transitivity of equality, it then follows that H composed with G-composed-F is equal to H-composed-G composed with F. So that's a proof of associativity. Okay. So this is all still about pre-infinity categories. If we're going to do infinity category theory, I should tell you what an infinity category is. And there's one more axiom, and this axiom is going to connect the
Starting point is 02:29:07 paths in my pre-infinity category to the isomorphisms, which I'm going to define now. So an arrow in a pre-infinity category is an isomorphism if it has a two-sided inverse. But again, the type that just says there exists some G going from Y to X so that both composites are identities is not a proposition. So it's better to say that F is an isomorphism if it has a left inverse and a right inverse, for the same reason as when we discussed equivalences a moment ago. And so, in particular, given two terms in a pre-infinity category, the type of isomorphisms is defined in this way: it's the type of arrows from X to Y that are isomorphisms, meaning
Starting point is 02:29:56 there exists an arrow G from Y to X so that the composite G-composed-F is the identity, and then there's a separate arrow H from Y to X so that the composite F-composed-H is also the identity. So that's the type of isomorphisms. And as in our discussion of univalence, there's an analogy between the axiom I'm about to introduce and the univalence axiom. There's a canonical map from the identity type in A to this type of isomorphisms, defined by path induction. And what turns a pre-infinity category into an infinity category is a second axiom. A pre-infinity category just says that every composable pair of arrows has a unique composite. An infinity category is one in which, in addition, every isomorphism is an identity, meaning that this map is an equivalence. So as I mentioned beforehand,
Starting point is 02:30:45 in simplicial type theory, my types have two directions: I have the family of arrows, and I have the family of paths. In an infinity category, the family of paths is equivalent to the family of isomorphisms that are defined using the arrows. So this is an axiom that's maybe a bit unexpected if you're new to this area. But for experts in infinity category theory: Rezk has a model of infinity categories called complete Segal spaces, which I mentioned briefly above, and this is the completeness axiom of complete Segal spaces. So it's familiar to folks who know about this sort of thing already. Okay. So I'm going to wrap this up by revisiting co-products and adjunctions. So we saw in the general discussion of category theory that an example of a theorem one might want to prove in category theory, because there are a lot of applications, is that left adjoint functors preserve co-products. And the same proof that we gave before now works in the infinity categorical setting, because our background

Starting point is 02:31:57 foundation system is doing so much work. So I'll revisit the definitions and also interpret them in this setting. If A is an infinity category and I have two objects, I'll call them little A and little A prime, what it means for them to admit a co-product is exactly as before. There needs to be another object, and then arrows from A into the co-product and from A prime into the co-product, so that the function that you get by composing with these arrows defines an equivalence. In other words, there's an equivalence between the arrows out of the co-product into X and pairs of arrows from A into X and from A prime into X, defined by restricting along, or composing with, the co-product injections. So that was the notion of a co-product. And the notion of an adjunction
Starting point is 02:32:46 involves two different pre-infinity categories, A and B, and a pair of functions between them. Previously, I said functors between them. Here I'm saying functions, because it turns out that any function between infinity categories can be regarded as a functor. So this is another instance where some things actually become simpler in this formalism than in set theory: any function between infinity categories is really a functor. And what it means to be an adjunction is just that I have this sort of family of equivalences.
Starting point is 02:33:21 So for all little A in A and all little B in B, the type of arrows from F of A to B is equivalent to the type of arrows from A to U of B. And what I want to point out is that there's something actually simpler in both of these definitions than in their 1-categorical counterparts. One of the things that I emphasized in the 1-categorical setting, because we were using the Yoneda lemma, is that I required these equivalences to be natural, in the variable X in the first case and in the variables A and B in the second case. But the naturality condition is dropped here, because it's automatically true. Interesting. There's a theorem that having these
Starting point is 02:34:05 family of equivalences makes them natural. Naturality is natural here. Yeah, yeah. Naturality is for free. That's right. Yeah. And essentially, the only constructions in set theory that you could redo in this framework are those that were natural; that's kind of the right perspective to take on it. This has something to do with the fact that mathematics in homotopy type theory is natively constructive. There is a way to introduce some classical reasoning principles, like the axiom of choice and the law of excluded middle, but the native definitions are all constructive, and that's why these things end up being natural. Okay. So now I can prove that left adjoints
Starting point is 02:34:51 between infinity categories preserve co-products. So if A is an infinity category with little A and little A prime admitting a co-product, then F of that co-product is isomorphic, in the infinity category B, to F of A co-product F of A prime. And the proof is the same as before. We'll use the infinity categorical Yoneda lemma, which I haven't discussed, but it works the same way. So to show this isomorphism, it suffices to define a family of equivalences; here I do not need to say natural, because the naturality is for free. I just need, for all X in B, to give an equivalence between arrows out of F of the co-product into X and arrows out of the co-product of F of A and F of A prime into X. And I construct that equivalence using the equivalences provided by the co-product

Starting point is 02:35:40 definition and the adjunction definition. So starting with an arrow from F of the co-product of A and A prime into X, I can transpose across the adjunction and get an arrow out of the co-product of A and A prime into U of X. Now I can use the universal property of the co-product to break that apart into two arrows, from A to U of X and from A prime to U of X. Then I can transpose back across the adjunction, and now I have an arrow from F of A into X and an arrow from F of A prime into X. And then I pair them back together using the universal property of the co-product
Starting point is 02:36:10 to get an arrow from F of A co-product F of A prime into X. And that is the proof. That is the same proof again. Okay. So that's some infinity category theory in homotopy type theory. Professor, thank you so much for this whirlwind tour. This is fantastic. Awesome.
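For reference, the chain of equivalences in the proof just described can be written out as follows, with F left adjoint to U and the steps annotated:

```latex
% For all x in B, with F left adjoint to U:
\mathrm{Hom}_B\bigl(F(a \sqcup a'),\, x\bigr)
  \simeq \mathrm{Hom}_A\bigl(a \sqcup a',\, U x\bigr)
    % transpose across the adjunction
  \simeq \mathrm{Hom}_A(a,\, U x) \times \mathrm{Hom}_A(a',\, U x)
    % universal property of the coproduct
  \simeq \mathrm{Hom}_B(F a,\, x) \times \mathrm{Hom}_B(F a',\, x)
    % transpose back across the adjunction
  \simeq \mathrm{Hom}_B\bigl(F a \sqcup F a',\, x\bigr)
    % universal property of the coproduct again
```

The Yoneda-style argument then upgrades this family of equivalences, natural for free, to the isomorphism $F(a \sqcup a') \cong F a \sqcup F a'$.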
Starting point is 02:36:33 Well, there's one brief epilogue, if you want to see it. Okay, so a brief epilogue is infinity category theory, not for undergraduates, but for computers. And so there's a new practice within mathematics of checking one's work rigorously by formalizing it in a computer proof assistant. So you might have heard that there are some large libraries of computer-formalized proofs. Lean has a library called Mathlib. There are also libraries in other computer proof assistants. I've mentioned some

Starting point is 02:37:11 homotopy type theory libraries here, UniMath and the Coq HoTT library. But these don't really contain a lot of infinity category theory. And it's interesting to think: why not? I think I've sort of previewed the reason. From my point of view, the issue is that the traditional foundations of mathematics are just too far away from infinity category theory. They're not really suitable for what's sometimes called higher mathematics or higher structures, because the basic objects are not really built out of sets and functions. They're built out of more complicated homotopy types, and functions that aren't so much well-defined as well-defined up to a contractible space of choices. And I explained some of these ideas in an
Starting point is 02:37:57 expository article that has a similar title to the title of this talk. But the simplicial type theory that we just introduced informally is a formal system. And if you teach that formal system to a computer proof assistant, then you can type the proofs that we just gave into the computer proof assistant. And this proof assistant exists. It's called Rzk. It was written by Nikolai Kudasov, kind of just for fun. It's written in Haskell, and there's a link to it on the web. And all of the results that are mentioned above, and much more besides, have been formally verified. So we've checked all of these proofs interactively in a computer proof assistant that knows the full rules of this formal system, so the simplicial shapes
Starting point is 02:38:47 and the extension types and more details that I didn't have time to explain. So, for example, we saw this definition of a pre-infinity category: a type is a pre-infinity category if every composable pair of arrows has a unique composite, meaning a certain type is contractible. And what I'm showing now is a screenshot from our formalization of the definition of what it means for a type to be a pre-infinity category. So this U here is the universe of types. So is-pre-infinity-category is a predicate on a type A in U. In other words, it's a predicate on types, and it returns a type: the type that asserts this mathematical statement, that for all X, Y, Z, terms of type A, and then for all arrows F from X to Y and
Starting point is 02:39:37 all arrows G from Y to Z, this extension type is contractible. Here, the extension type is written in a slightly different way than on the slide, but it's equivalent to the type displayed. So prior to formalizing this definition, we had to formalize the notion of contractible types from homotopy type theory. That's what this is-contractible refers to. So I mentioned that in a pre-infinity category, so under the hypothesis that A is a pre-infinity category, I get a composition function and also a witness to the composition. And here are the constructions of those. So these are extracted from the data of the hypothesis that A is a pre-infinity category. The composition function says that if I'm given a type A satisfying that hypothesis, then given X, Y, Z, three terms of type A, an arrow from X to Y, and an arrow from Y to Z, I get an arrow from X to Z.
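In Rzk-style syntax, the definition on the screenshot looks roughly like the following sketch. This is my approximate reconstruction of the conventions used in the simplicial homotopy type theory library (where the corresponding definition is named `is-segal`), not a verbatim quote, so treat the identifiers `hom`, `hom2`, and `is-contr` as assumptions about that library:

```rzk
#def is-pre-∞-category
  ( A : U)
  : U
  :=
    ( x : A) → ( y : A) → ( z : A)
  → ( f : hom A x y) → ( g : hom A y z)
  → is-contr (Σ ( h : hom A x z) , ( hom2 A x y z f g h))
```

The Σ-type packages a candidate composite h together with a 2-dimensional witness, and the predicate asserts that this type of composition data is contractible.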
Starting point is 02:40:39 And I get it just by projecting out the witness to the contractibility, essentially. So this first here is applying a projection to a sigma type to extract that witness. And similarly, if I did a slightly different projection, I would get a two-dimensional arrow, which is what this hom2 refers to, that fills the triangle formed by F and G and the composite of F and G. So these are the screenshots of the formalizations of that. And then the final thing I wanted to show you is a proof of uniqueness. So a thing that I mentioned is that this composition is uniquely defined. And again, it follows from the contractibility of this type.
Starting point is 02:41:24 So in other words, if A is a pre-infinity category, I have three terms X, Y, and Z, an arrow from X to Y, an arrow from Y to Z, and an arrow H from X to Z, and then a witness, so one of these 2-simplices whose boundary is F, G, and H, then I get to conclude that this H is equal to the specified composite. In other words, if H could be regarded as a different composite, via a different 2-simplex, and there's a little picture you can see off to the right: if there's another triangle filled in by a 2-simplex with boundary F, G, and H, so H is a composite, then H is equal to the composite that was specified by the composition function. So in this final statement, I'm producing a term in an identity type saying the composite is equal to H.
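The underlying logic here, that any candidate composite agrees with the chosen one because the type of composites is contractible, reduces to a one-line fact about contractible types. A Lean sketch, with names of my own:

```lean
-- Contractibility: a center of contraction plus a contracting homotopy.
structure IsContr (A : Type) where
  center   : A
  contract : ∀ a, center = a

-- Any two terms of a contractible type are equal: in particular, any
-- candidate composite equals the one projected out of the center.
theorem IsContr.allEq {A : Type} (c : IsContr A) (a b : A) : a = b :=
  (c.contract a).symm.trans (c.contract b)
```

The contracting homotopy `contract` is exactly the piece of data the formalized uniqueness proof invokes.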
Starting point is 02:42:15 And the proof is a little bit hard to read if you don't know how these things are defined, but this is the proof in full. What it's using is the contracting-homotopy part of is-contractible. So I mentioned that the data of a term witnessing that a type is contractible provides a center of contraction, which was used to define the composition function and the composition witness, but also a contracting homotopy, and that's what's used here to prove the uniqueness. So anyway, this is all fully formalized, and I'll conclude with some observations and future work. So I have also been involved in a project to formalize some infinity category theory in Lean. You could join me on the Lean Zulip chat if you want to hear more about that.
Starting point is 02:43:01 But this method where we're not formalizing infinity category theory in traditional foundations, but instead in something based on homotopy type theory, I think, is really a lot easier. We're moving the complexity of higher structures into the background. And as we've seen, significantly narrowing the gap between infinity category theory and one category theory. I think computer formalization is kind of an exciting new mathematical practice. It's a great tool for learning to write proofs, particularly if you're writing proofs in a new formal system. You know, there's something a little bit scary for a professional mathematician to try to write proofs in a different formal framework, but a proof assistant can help a lot. And in fact, we caught an error in our original paper. Sort of a silly error. It was just an error of circular reasoning, but we caught it through the process of formalization and fixed it. Wonderful. And this is definitely an area that folks can contribute to. So formalization, you know, it's, there's some mechanisms, some computer tools that make it easy to collaborate, even across distances of, you know,
Starting point is 02:44:10 sort of geography. So if folks are interested, please get in touch. And there's sort of more work that needs to be done with the Simplicial type theory to really make it a powerful enough foundation to redo all of infinity category theory. That's sort of work in progress, but I'll leave some references and that's all. Thank you so much. And where can people find out more about you and the projects that you're working on? Right. So I'm easy to find, I guess, by Google. But for these projects specifically, you know, a good way to get in touch is via Zulip. So the The Lean theorem prover community has a very active Zulip chat. And also we have a sort of much smaller, you know, a lot fewer people use the proof
Starting point is 02:45:03 assistant at risk than use lean, but we have a Zulip chat as well. And that's a great way to get in touch, not just with me, but with everyone who's involved in these projects. So, yeah. Thank you so much for putting this all together. Great. Sorry that went a lot. No, not at all.
Starting point is 02:45:22 I've received several messages, emails, and comments from professors saying that they recommend Theories of Everything to their students, and that's fantastic. If you're a professor or lecturer and there's a particular standout episode that your students can benefit from, please do share. And as always, feel free to contact me. New update: I started a Substack. Writings on there are currently about language and ill-defined concepts, as well as some other mathematical details, with much more being written there. This is content that isn't anywhere else. It's not on Theories of Everything. It's not on Patreon. Also, full transcripts will be placed there at some point in the future.
Starting point is 02:45:57 Several people ask me, hey, Curt, you've spoken to so many people in the fields of theoretical physics, philosophy, and consciousness. What are your thoughts? While I remain impartial in interviews, this Substack is a way to peer into my present deliberations on these topics. Also, thank you to our partner, The Economist. Firstly, thank you for watching, thank you for listening. If you haven't subscribed or clicked that like button, now is the time to do so. Why? Because each subscribe, each like helps YouTube push this content to more people like yourself.
Starting point is 02:46:35 Plus, it helps out Kurt directly, aka me. I also found out last year that external links count plenty toward the algorithm, which means that whenever you share on Twitter, say on Facebook or even on Reddit, etc., It shows YouTube, hey, people are talking about this content outside of YouTube, which in turn greatly aids the distribution on YouTube. Thirdly, you should know this podcast is on iTunes. It's on Spotify. It's on all of the audio platforms.
Starting point is 02:47:02 All you have to do is type in theories of everything and you'll find it. Personally, I gain from rewatching lectures and podcasts. I also read in the comments that, hey, toll listeners also gain from replaying. So how about instead you re-listen on those platforms like iTunes, Spotify, Google Podcasts? whichever podcast catcher you use. And finally, if you'd like to support more conversations like this, more content like this, then do consider visiting patreon.com slash kirtjai Mungle
Starting point is 02:47:28 and donating with whatever you like. There's also PayPal, there's also crypto, there's also just joining on YouTube. Again, keep in mind it's support from the sponsors and you that allow me to work on tow full time. You also get early access to add free episodes, whether it's audio or video. It's audio in the case of Patreon,
Starting point is 02:47:46 on video in the case of YouTube. For instance, this episode that you're listening to right now was released a few days earlier. Every dollar helps far more than you think. Either way, your viewership is generosity enough. Thank you so much. The economist covers math, physics, philosophy, and AI in a manner that shows how different countries perceive developments and how the impact markets. They recently published a piece on China's new neutrino detector. They cover extending life via mitochondrial transplants, creating an entirely new field of medicine. But it's also not just science. They analyze culture. They analyze finance, economics, business, international affairs across every region. I'm particularly liking their new
Starting point is 02:48:40 insider feature. It was just launched this month. It gives you, it gives me a front row access to the economist's internal editorial debates, where senior editors argue through the news with world leaders and policymakers and twice weekly long format shows. Basically, an extremely high-quality podcast. Something else you should know about is that if you go to their app, they not only have daily articles, but they also have long-form podcasts with their editors and writers. This is also available online. Whether it's scientific innovation or shifting global politics, the Economist provides comprehensive coverage beyond headlines. As a toll listener, you get a special discount. Head over to economist.com slash TOE to subscribe. That's economist.com slash TOE for your discount.
Starting point is 02:49:29 Thank you.
