CoRecursive: Coding Stories - Tech Talk: Domain Driven Design meets Functional Programming

Episode Date: January 22, 2018

Tech Talks are in-depth technical discussions. In object oriented languages, modeling a complex problem domain is a well understood process. Books like Domain Driven Design contain techniques for breaking down a problem domain, and earlier books like the Gang of Four book catalogue design patterns for modeling these domains in an object oriented way. In today's interview Debasish Ghosh explains how to model a complex problem domain in a functional paradigm. His solution focuses on modelling the behaviour of the software system rather than the nouns it will contain. He also focuses on an algebraic approach to API design and discusses how abstract algebra provides tools for building better software. Episode Page Episode Transcript "I first come up with what I call the algebra of the behaviors. The algebra of the behaviors refers to the basic contract, which the behavior is supposed to support, which the behavior is supposed to honor. So that's the algebra." -Debasish Ghosh Links: Debasish's Book

Transcript
Starting point is 00:00:00 Hello, and welcome to CoRecursive, where we share the stories and the people behind the code. Today's episode is an FP interview. It's focused on functional programming. I'm going to be talking to Debasish Ghosh. The question I wanted to ask him was, how do you build complex software in a functional programming style? You know, like a lot of things are just little tiny examples of functional programming, like Fibonacci. But how do you build some big, complex, larger pieces of software? And he has a great answer for this. This episode assumes a little bit of knowledge about functional programming, but even if you're not familiar with it, I think it's still an entertaining episode. Enjoy. Debasish is the author of Functional and Reactive Domain Modeling, a book by Manning, and also works for Lightbend. Debasish, welcome to CoRecursive.
Starting point is 00:00:48 Thank you. Thanks a lot. So I have your book in front of me, really enjoying it. It has a lot of terminology even just in the title, so I thought maybe we'd start with some definitions. What is domain modeling? Yeah, that's one of the very common questions which I hear, because domain modeling is not what most people think it is. I would like to emphasize that when we are talking about domain modeling, it's really the problem domain that the domain model is concerned with. It has nothing to do with the solution domain. So as a software architect, it's our job to model the problem domain
Starting point is 00:01:33 into a solution architecture. So also if you look at the definition that Wikipedia has on domain modeling, it refers to the problem domain. It focuses on the problem domain. So that's the moot point of domain modeling, it refers to the problem domain. It focuses on the problem domain. So that's the moot point of domain modeling. You need to interact, you need to understand the problem domain and model the behaviors of those domains and how the various actors or various objects, various entities interact amongst themselves in order to achieve a specific use case. A lot of this terminology, I think, comes out of domain-driven design.
Starting point is 00:02:08 Could you give a summary of domain-driven design before we dig into functional aspects of it? Yeah, actually, domain-driven design also focuses mostly on the domain model. It relies on the idea that the domain model is the core thing, is the core thing which you need to abstract well in order to have your system up and running and ensure that your system is reliable, your system is maintainable, your system is modularized. on in this entire book on domain-driven design is how to come up with a solution architecture for a problem domain that is modularized, that is robust, and that's reliable to all the exceptions. So the core concept is that, how to abstract the various aspects of the domain model in order to make it more reliable and modularized. So your book, I think, is taking where he left off and taking his ideas and applying it to maybe a functional
Starting point is 00:03:14 programming paradigm. So what made you want to explore that kind of intersection of these concepts? Yeah, actually, it was sort of an experiment for me. Because at one point in time, I was working extensively with Java, and I was working on a domain model, which was a very complicated domain model for the financial security system. In fact, if you look at my book, most of the examples are from that domain only. And when I was really architecting that system, as a as one of the team members, I was trying to modularize the system. I was trying to find out how to come up with the best abstraction for the system. And incidentally, at that point in time, I was using Java.
Starting point is 00:03:57 And I was not very familiar with the concepts of functional programming. But ultimately, the system got implemented. The system got implemented, the system got deployed. And in fact, since the last 10 years or so, it's still running. So from that point of view, I would say that it was a successful deployment, successful endeavor. But later in my life, when I found Scala, and I got to know more about the aspects of functional programming, I thought that any domain model, any non-trivial domain model can be modeled in a
Starting point is 00:04:32 better way if we apply the principles of functional programming. And this was a passing thought at that point in time. And the more I learned about functional programming, the more I delved into the details of libraries like Scala Z and CATS and things like that, I was almost confident that, yeah, functional programming has some features. It addresses some of the core issues of modularity, which will ultimately lead to better reliability and better modularization of a non-trivial domain model. So that was really the start. So after that, I went back to some of the basic papers of functional programming, especially why functional programming matters by John Hughes. And ultimately, I thought that I should give it a shot.
Starting point is 00:05:23 And then in my next project, incidentally, I was working on a similar kind of domain, and I tried my experiment. I tried to apply the principles of functional programming, and the result was great. So that led, one thing led to the other. And so here we are. I'm now almost a confident guy that functional programming will work on domain models which are fairly complicated and which are fairly detailed. You mentioned, this is just an aside, why functional programming matters. That's the paper that kind of goes through a fold, right? Where it does recursion and then it abstracts out the higher order functions until it has a fold. Yeah, actually, that's one of the basic papers on functional programming that John Hughes wrote, I think, around 1990.
Starting point is 00:06:16 And it focuses on the modularity aspect. It focuses on the laziness part of it and how you can compose programs, compose larger programs out of smaller ones. Yeah, Fold is one of the examples. But besides that, what it focuses on is that functional programming is basically programming with pure values. And when you have pure values, you don't have assignments. And those values really turn into expressions. So functional programming is also known as expression-oriented programming, where you compose smaller expressions, smaller abstractions to build larger ones. And that's one of the foundational principles which I also followed when I tried to apply the principles of functional programming to domain modeling.
Starting point is 00:06:59 And so an expression as compared to a statement, I'm assuming, right? Where an expression is like two plus two equals four and a statement is like a print line? Exactly. An expression is something that's pure value. It doesn't have any side effect, whereas a statement or an assignment is one which has a side effect. Okay. so if you're doing classic domain-driven design in Java or some object-oriented language, right, I'm going to kind of come up with a list of nouns, and I'm going to make those classes. So how does that differ if I'm taking this, you know,
Starting point is 00:07:40 functional expression-oriented path? Yeah, I actually follow a practice where I start from a specific use case. So you have a specific use case. If you were using Java or if you were using object oriented principles, in that case, you would start with the nouns, right? What I usually do when I start with the use cases, I usually start with the domain behaviors. I first come up with what I call the algebra of the behaviors.
Starting point is 00:08:11 The algebra of the behaviors refer to the basic contract which the behavior is supposed to support, which the behavior is supposed to honor. So that's the algebra. It has nothing to do with the implementation of it. And once you have the behaviors, the algebra of the behaviors of a specific use case, you can now think of modularizing them. The related behaviors go into one module. And usually every functional programming language has support for modules. For example, in Scala, we have traits. So you can use traits in order to modularize your behaviors. So the first step is come up with the algebra of the behaviors. The next step is refine the algebra if required. Third step is modularize them into modules. And the next step is to think
Starting point is 00:09:01 of the compositionality aspects, because those behaviors are not standalone ones, right? Those behaviors need to compose. Those behaviors need to be composed semantically in order to come up with larger behaviors. And how do you do this composition? There are multiple semantics of compositionality. For example, if one use case has four steps, you may be able to do all those four steps in parallel, or you may have to do them sequentially. So all of these lead you to different algebras of compositionality. In the first case, when you can do everything in parallel, you can go for the applicative model of compositionality.
Starting point is 00:09:43 You can use applicatives in order to execute things in parallel. While if you have a strictly sequential compositionality mode, in that case, you need to go for monads. You need to go for monadic compositionality. So I have given several talks, and one of my talks is also coming up on, actually, it's on domain-driven design, DDD Europe, which is coming early next month. And I'm going to speak about the same thing, how to start with the algebra of a use case and how to define the compositionality without knowing anything about the implementation of each of these functions. The algebra themselves will define the compositionality semantics for you. Going through your book, one of the things I appreciated was
Starting point is 00:10:25 how far you take things without actually doing an implementation of the function. But rewinding a little bit, so you use this term algebra, and I think that coming to terms with what that means is, at least for me, was a little bit tricky and maybe for our listeners. So let's define an algebra. An algebra, like an example I think you had was like the natural numbers under an addition operation. Is that an algebra? I would go for a slightly more basic one. Suppose I consider a set, a set of objects. When I say a set of objects, I don't specify what type of objects is that. And I can actually define an algebraic structure based on a set. But this definition nowhere states what type of objects the set contains.
Starting point is 00:11:14 And I can define operations on this set without going into the details of the implementation of the type of object. So this is the definition of the algebra of sets. So one of the specializations of this definition, one of the specializations of this algebra is to define a set of integers. So if I can define my behavior at the abstract level of a set, in that case, I'm doing an algebraic programming. It's becoming much more generic. If I take an example, consider the definition of a monoid. If we look at the contract of a monoid, it's completely generic. It's completely parametric on the type of the type which it encodes. We call it, say, A. Let's define a monoid okay so a monoid is an algebra which basically supports two operations one is the identity and one is an associative append operation so any object for which you have support for these two operations form a monoid and this definition is completely generic whenever we define a monoid in, say, Haskell or
Starting point is 00:12:27 Scala, we define it in terms of a parameter type, type parameter. We call it parametric polymorphism. The type monoid, the algebra of monoid is defined in terms of a polymorphic type, say A. We don't have any constraint on what this A has. The only thing which we need to look after is that this monoid will support two operations. One is the identity operation, and the other is an associative Append kind of operation. So we can define a monoid for integer. Then we are specializing the algebra for the integer type in that case suppose we define define a monoid for integer addition in that case the identity is the is the number zero because adding zero to any number gives you the same number and the append operation is
Starting point is 00:13:20 the operation of addition you add two more add two numbers to get one more number. So that's the, and addition is associative. So here I have defined a monoid for the class of integers where we derive from the algebra, generic algebra of a monoid. Not only this,
Starting point is 00:13:41 since I mentioned about the term associative, this is one of the laws of the monoid operations. A monoid, any such algebra, any such generic or parametric algebra, usually is governed by a set of laws. For example, in case of monoid, we have laws for identity operation and for the addition operation or append operation which has to be associated, binary associated. So these are the lawful algebras. A monoid is an example of a lawful algebra.
Starting point is 00:14:14 So my point is that if we can abstract our domain behaviors in terms of these algebras, in that case our behaviors become much more reusable. Our methods become much more reusable. And we can reason about these in terms of the laws which these algebras honor. So in some ways, I think that these algebras like that, so the algebra of
Starting point is 00:14:40 monoid is, you know, can be specified by a trait or a type class and in in another way it's sort of like a like a design pattern that you might see in the object-oriented world do you think like it's a it's a pattern you can pull off the shelf you can see that my my money class fits this pattern so i can extend from from monoid yeah it's interesting's interesting that you mentioned about the term pattern. In fact, I call these algebras the patterns of functional programming. So if you go to the canonical definition of a design pattern, which Christopher Alexander coined, you will see that a pattern is terms of functional programming you have a very clear delineation of this pattern thing and the context thing the pattern is the algebra and the context is one of its implementations so given you mentioned about the money class a money class provides gives you the right context to implement a pattern in terms of the two operations,
Starting point is 00:15:46 which money supports in order for itself to become a monoid. So money can be a monoid, but that's an instance of a pattern. The basic pattern is monoid itself. So I call these algebras, these lawful algebras, the patterns of functional programming. And the interesting thing is, I mean, to me, design patterns like from the Gang of Four book, you have to implement them. If you want to use the decorator pattern like you implemented, whereas if you want to use a monoid, you don't have to implement it, right? You can use a parametric monoid trait and kind of you get this behavior for free. Exactly. Do you agree?
Starting point is 00:16:25 Exactly. In fact, I gave a talk in December at Scala Exchange where this was the theme of the talk, the functional patterns and their implementations. So there actually I mentioned that the patterns which we learned in the Gang of Four book using Java or C++, you need to write lots of boilerplates.
Starting point is 00:16:46 Every time you implement a decorator pattern, you need to write lots of boilerplate code, which needs to be repeated everywhere, repeated for every context. But here you straight away get the algebra as a reusable artifact. So that's the beauty of a functional programming pattern. Okay, so the algebra of a monoid, there's an algebra of a monoid. It has certain things you can call on it, identity, associative, operation. But then you also talk about
Starting point is 00:17:16 kind of the algebra of your problem, I guess, the algebra of your design. How does that differ? Yeah, actually, what I like to say is that when I define the algebra of a of your design how does that differ yeah what i what i like to say is that when i define the algebra algebra of a monoid the algebra of any abstraction consists of the data types the operations it supports and the laws it honors these three are the core things of an algebra so consider if you consider this and this definition is valid, whether you consider a monoid or a specific abstraction for your domain. For example, if you have a domain specific
Starting point is 00:17:55 abstraction, say you are modeling a trade, securities trade. In that case, a securities and abstraction supports a number of operations. And each of those operations take a set of values, return some values. And those operations are bound by some laws. There are various laws which you need to honor in order to execute the trade. There are the laws for computing taxes and fees. Then there are geography-specific laws, etc. So how are laws different than a business rule? Laws are the business rules. The various business rules are laws, and they are part of the algebra of the domain. So when I'm modeling a domain and when I'm saying that I'm defining the algebra of the domain behaviors, I'm defining the types, I'm defining the operation, and I'm defining the various laws which that particular behavior needs to honor. Now, the point is, our idea is to make these laws verifiable, right? Make the algebra verifiable.
Starting point is 00:19:12 Much of this we can do through types. For example, say, genericity or parametric polymorphism gives you a tool to make some of these laws verifiable. For example, you can put constraints on the type parameter. For example, say you are defining a behavior which takes a data type, which takes a type of trade, which takes a type of an account. But this account may have some specific constraint associated with it. For securities trading operations, an account can be of multiple types. It can be a client account, it can be a broker
Starting point is 00:19:51 account, it can be a trading account, it can be a settlement account. But this account which this behavior takes, maybe it has to be of a specific type. So in that case we can use type constraints. We can constrain the parametricity of the type and enforce the law there itself.
Starting point is 00:20:12 The advantage is that we are enforcing the laws statically, and we don't have to write a single line of test for it because we have encoded the laws as part of the type system. This is one of the reasons why I'm much more excited about dependent types, languages like Idris. You can do a lot more with those. But even with Scala, you can go a lot of way. And for those laws which you cannot verify it with your type system, you can do it through your algebraic properties and plugging in a property-based testing suite. So for type constraints,
Starting point is 00:20:51 what kind of type constraints can you do in Scala? Or do you need dependent types? Or do you need what? No, actually, in Scala, say, when you are defining an abstraction, which is parametric on a type T, you can specify that this type T needs to satisfy this constraint. This type T needs to be, it has to be a subtype
Starting point is 00:21:14 of a trading account. So in that case, the compiler will ensure that you cannot pass any other type of account when you are defining the abstraction or when you are implementing the abstraction. So the compiler acts as your tester. The compiler writes the test for you. You don't have to do anything for it. It's funny, you mentioned Idris. I'm actually, I'm doing an interview with Edwin later next week, actually.
Starting point is 00:21:38 So it'll be interesting to talk to him about Idris. I think it would be helpful if we dig into a specific example. So I have your book open. This might be a little challenging over audio, but you have this trait, account service. And account service takes three type parameters, account, amount, and balance. So I think this is what you're describing, right? The actual account type
Starting point is 00:22:05 is actually just a type variable. It's not defined in the implementation of this trait. And then... Yeah, this idea actually stems from the same idea that I was talking about, the theories of algebraic development. When I'm defining a behavior, when I'm defining a trait, a module or a behavior, I have no idea what my account entity will end up with, right? I don't have any idea about the implementation. So that makes sense. So when you're saying, sorry, just to summarize, when you're saying, I start my domain modeling by doing the behaviors, what you mean is you're not even writing out what the, what the account type looks like. You you're starting with this trait where it's totally parametric over,
Starting point is 00:22:50 over the type of account. Exactly. Exactly. Exactly. So I constrain, constrain the implementation when I, when I, when I'm going to write the implementation for the trait.
Starting point is 00:23:01 But initially I start with only the algebra. And as far as the algebra is concerned, I parameterize anything and everything which comes to mind and which might play a role in the domain model. Some of them may go away when I do a refinement of the algebra. But I don't want any of the implementation constraints to creep into my algebra, definition of the algebra. So this makes the trait, it makes the trait sound very abstract, but it's interesting in your example. So you do this trait that's parameterized over account. It has a debit and it has a credit, right? And so these are two methods you can call that take an account and an amount and then return
Starting point is 00:23:43 an account. And the interesting thing is there's no implementation for these. But then you further use that to define a transfer method. Right, right, right. The transfer method doesn't need the implementation of credit and debit, right? Exactly. So that's the point of algebraic development. I have a more meaty example possibly later in the book when I talk about trading systems. And I also repeat that, kind of repeat that same example
Starting point is 00:24:17 in most of my talks where it models a use case of a trading system. It models the, in fact, the various steps a security goes through in order to, starting from the client order till the trade is done. So this entire use case can be modeled using pure algebra and without any constraint of implementation on it. So yeah, it makes a good example.
Starting point is 00:24:42 What's the advantage? I guess, what's the advantage of writing my transfer method without even knowing what an account is? Yeah, the advantage is abstraction. You need to develop at the proper level of abstraction. Because if you pollute your algebra with implementation constraints, then later it becomes difficult to generalize. For example, if I have a complete program based on the algebra for some definition of the program, it can be a single use case also.
Starting point is 00:25:13 In that case, I have lots of flexibility later when I go into the implementation phase. For example, I can define my algebra in the form of a free monad. And then I can have multi... And when I have the free monad. And then I can have multi, and when I have the free monad, it's just a pure data structure. There is no semantics in it.
Starting point is 00:25:31 It's a pure data structure. And then when I define the interpreter for the free monad, I have the flexibility of doing all sorts of implementation constraints there. I can even have multiple interpreters. In fact, that's a very common technique when you use one of the interpreters for testing. For example, in my algebra, for some of the methods, I want to do them non-blocking and it returns a future. Could you explain what a free monad is?
Starting point is 00:26:01 Yeah, a free monad is one of the techniques to separate, decouple abstraction from the implementation. So what you do is you define each of your behavior as an algebraic data type. And then by some magic, you can make each of them monadic. It's difficult to do the details over an audio, but for the timing, let's say that you have some magic, which turns each of those abstract algebraic data types into a monad. So the moment you have the monads, you can compose all of them using a for comprehension kind of syntax. And this way you can define the exact sequence of the use case. You can define a for comprehension, which will define the sequence of your use It's just another algebraic data type you have. And now you can write an interpreter or you can write multiple interpreters for this free monad.
Starting point is 00:27:12 And in the interpreter, you can come up with all sorts of implementation specific constraints that you wish. For example, I may define, I may have a trading process defined as a big monadic structure, and I'm going to define an implementation for it. And my implementation, I can choose to base my implementation based on future. Or I can choose to base my implementation based on CATS IO or based on Mon task, or based on Scala Z task. I have this flexibility. When I go for my implementation and my algebra is completely unaffected, I have a generic algebra which models the entire process. And then I have multiple implementations. So this gives you the flexibility to decouple the algebra from the implementation. So it makes your code much more modular. Does that mean, it seems like the free monad is the ultimate example of what you're
Starting point is 00:28:11 saying here, right? Taking the algebra and the actual implementation totally apart into separate steps. So should we always, why doesn't your book just say always do everything as like a dsl written in a free monad style free monad is one of the only one of the techniques free it has its it has its disadvantages also so another technique is what we know what what we call the tagless final approach if you google for tagless final you will find lots of papers on. That's one other way to decouple your algebra from the implementation. The plus point, the drawback of free monad is that it's not easy to compose multiple free monads. Because when you have a fairly complex domain model, you have multiple use cases, right? And those use cases may again interact with each other.
Starting point is 00:29:09 So in that case, there are situations where you may have to compose multiple free monads. And that's not easy. That, at least in Scala, you need to write quite a bit of boilerplate code in order to compose multiple free monads. Can you compose them after you interpret them? need to write quite a bit of boilerplate code in order to compose multiple free monads. Can you compose them after you interpret them?
Starting point is 00:29:31 That loses the benefit. I want to compose it before interpretation. I want to build a bigger abstraction out of smaller ones before I interpret the entire thing. So the plus point with free monad is that Fremonad is stack safe. You can implement a Fremonad in a completely stack safe way. Your stack will never blow out. And that's a disadvantage with the tagless final approach. Tagless final is not stack safe.
Starting point is 00:29:56 But tagless final are easier to compose. So there are all trade-offs. These are all trade-offs. And you all trade-offs and you need to choose whatever option fits for you. But personally, in recent times, I'm using more of tagless final approach than the free monads because of compositionality. Okay. So I noticed that you, in the past, you wrote a book about DSLs. So I noticed that you in the past, you wrote a book about DSLs.
Starting point is 00:30:27 So I was just curious, how does this relate? It seems like a free monad or tagless final is a way to write a DSL. Yeah, actually true. Actually true. When I wrote the book on DSL, I was not aware of some of these techniques of free monads and tagless final. Maybe they were also not very commonly used, at least on the JVM. So, yeah, today, if I want to write a second edition of that book, I will definitely consider using free monads and tagless final approaches to encode the DSL. In fact, there are some examples in the CATS ecosystem where they have developed DSLs based on free monads and based on tagless final approach.
Starting point is 00:31:15 Interesting. Yeah, I wondered if there was a connection there. So rewinding back, right? You know, we have this account service example where we have this trait and it's it's parameterized over the account and also the amount and the balance. Like basically all the all the nouns are are just type parameters in your example. So we have these methods, let's say. So so far we have debit and credit and transfer. And so debit obviously adds, you know, debit takes money off an account, credit adds money to account, and then transfer just sort of composes the two of them, right? So if I give you two accounts and I say I want to debit this one and credit this one. So this is our example.
Starting point is 00:32:06 Now, let's say that we have this. And because of our business requirements, we actually need some sort of configuration. So we need before we before we debit, we need to look up some value in some sort of configuration. How would that change things? Yeah, actually, once again, there's some algebra for that. In order to inject configurations, there's a reader monad. You can use the algebra of reader monad to do dependency injection in functional programs. I think there are some examples also in this book. Or if you Google for reader monad dependency injection, you will get lots of examples. You can compose that algebraically too. You can inject your dependencies or inject your configuration parameters completely algebraically as part of your algebra.
Starting point is 00:32:59 When you are defining the algebra, you can define what to inject. And the precise implementation will follow as part of your implementation of the trait. So how do you, what's your opinion on using this kind of reader monad versus like using your standard dependency injection, like off the shelf, wires things up when it builds the object? No, actually, I prefer to use the power of the language, whatever comes with the language. And reader monad is one of the nice abstractions which I find. And if I if my language supports seamless implementation of a reader monad, and if it offers, then I usually prefer that in instead of going for some libraries or frameworks, the reader, then you're passing in to the, like when you run the function, you're passing in things where if you use a more traditional dependency injection style,
Starting point is 00:33:52 you usually have some class, right? And you're doing constructor injection and then calling the methods. Right, right. Here, if you are in the functional programming world and using things like reader monad, the beauty of this thing is that all of these things compose because all of them are based on functions. So reader monad is a monad. List is a monad. Option is a monad. So here we have this basic general algebra
Starting point is 00:34:20 of a monad which embraces all of these things. So whatever you do, you have the ability to compose monads in some way or the other. But beware, not all monads are composable. You need monad transformers for those things. But generally, monads compose, or applicatives compose, or I should say that functions compose. So that's the basic building block, the compositionality. If we go back to that example: before, we credit an account and the return type is a Try of account
Starting point is 00:34:55 because the account could be closed or something. So we returned some sort of error status. And now we want to have this ReaderT. So now we have a Reader of Try of account. And now you've mentioned monad transformers. Could you kind of expand on how that works? Yeah, actually, you can compose multiple monads using a transformer.
Starting point is 00:35:20 Say, you mentioned ReaderT. You can use ReaderT to compose Reader with some other monad, and the result is also a monad. So that's the advantage of using monad transformers: you can compose composite monads out of multiple simpler ones. So you can structure your program monadically and yet you can use the power of both the monads together. I guess as we add requirements, does that mean we're going to end up with, like, a stack of transformers that's like reader, writer, state, either? Exactly. Exactly. That's the idea. In Scala, after a certain time, it becomes a little more cumbersome because of some lack of type inferencing.
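A sketch of the ReaderT idea in plain Scala, with the inner monad fixed to Either to keep it small. Real transformers, such as cats' ReaderT, abstract over any inner monad; all domain names here are invented.

```scala
// A minimal ReaderT fixed to Either as the inner monad -- illustrative only.
case class ReaderT[C, A](run: C => Either[String, A]) {
  def map[B](f: A => B): ReaderT[C, B] = ReaderT(c => run(c).map(f))
  def flatMap[B](f: A => ReaderT[C, B]): ReaderT[C, B] =
    ReaderT(c => run(c).flatMap(a => f(a).run(c)))
}

case class Config(rate: BigDecimal)
case class Account(no: String, balance: BigDecimal)

// Hypothetical operations: each one both reads the config and may fail.
def credit(a: Account, amt: BigDecimal): ReaderT[Config, Account] =
  ReaderT(_ => Right(a.copy(balance = a.balance + amt)))

def addInterest(a: Account): ReaderT[Config, Account] =
  ReaderT(cfg =>
    if (a.balance <= 0) Left("no interest on empty account")
    else Right(a.copy(balance = a.balance + a.balance * cfg.rate)))

// One for-comprehension reaches through both layers at once:
// the config is threaded AND a Left short-circuits, without extra plumbing.
val program = for {
  a <- credit(Account("a-1", 0), 100)
  b <- addInterest(a)
} yield b

val out = program.run(Config(rate = BigDecimal("0.1")))
// out == Right(Account("a-1", 110))
```

Without the transformer, each step would have to unwrap the Either by hand inside the Reader, which is exactly the layer-by-layer peeling described below.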
Starting point is 00:36:07 But that's the idea. You can compose multiple monads using monad transformers. And as your requirements increase, you can go on adding elements to the stack. And in this context, I will say that there are some alternative techniques also. Since using a basic monad transformer turns out to be a bit verbose in Scala, there are some additional abstractions which people have come up with. For example, there's this Eff monad. If you Google for it, you will find it. The Eff monad implements one of the recent papers of Oleg where he's talking about
Starting point is 00:36:57 what he calls freer monads, a more-free-monads kind of thing, where you can encode all of your monadic stuff inside one monad, and then you peel off as you need. So compositionality gets a bit better there. But for all practical purposes, I have found that monad transformers come up as quite a handy option. If we have this big stack of monads,
Starting point is 00:37:29 what does that encode? I mean, we're talking here about domain modeling, but what does this represent? Sequentiality, sequential composition. When you have a monadic comprehension, you can execute the various steps in sequence. So one step completes and then the other step can begin. So that's the basic principle of a monadic composition.
Starting point is 00:37:56 And suppose you have multiple monads stacked together. When you do this sequentially, you can directly reach the innermost monad with a single step of comprehension. So that way you reduce verbosity. Without monad transformers, you need to peel off each layer successively, one by one, so your code becomes much more verbose. And after a certain number of elements in the stack, it becomes almost unbearable. And it allows, like, sort of a factoring out of certain common things
Starting point is 00:38:37 that aren't part of the business domain, I guess, right? Like you don't need to have specific exception handling because you have that Try or that Either in there. And it will short circuit on its own without having to throw exceptions, for instance. Exactly. The happy path as well as the exceptional path, both of them are taken care of by these abstractions themselves. You don't have to write specialized branches of code in order to encode the exceptions. So I guess we have our account service example.
Starting point is 00:39:14 It's parametric over these types. And now our debit and credit, they may return an account, but it's within this transformer stack. So at what point do we start writing implementations of the account or balance, et cetera? Yeah, once you have the algebra defined, once you have the total abstraction defined and you are satisfied with the algebra, then you can start writing the implementations.
Starting point is 00:39:44 And the usual technique which I follow is that whenever I write an implementation for an abstraction, I keep an eye on the testability part of it. So, for example, let me start from a bit earlier. Suppose I need a function which needs to return a future. I can have this future thing as part of my algebra, right?
Starting point is 00:40:19 The disadvantage is that if you have an abstraction like future as part of your algebra, then when you write tests for this abstraction, you need to have those execution contexts and you need to define futures. Right. Yeah. If you kind of think of future as a monad, which people tend to do, then it's better to have your algebra defined in terms of a monad instead of a future. The advantage is that in your implementation, you can specialize the monad to a future, and for the testing part, you can specialize the monad to an identity monad. In that way, your test code becomes much simpler. It becomes easily testable without any of the intricacies of having to deal with execution contexts and futures. But, and I think I know the answer, but let's say I have some method and right now it does things.
Starting point is 00:41:28 It uses task, right? It's a Monix task. It's actually a monad. And then inside it calls a bunch of things which are all async and return task. But let's say it calls a method on task, like, say, gatherUnordered, right, which kind of does them serially. Sorry, it does them in parallel. So, I mean, that seems to limit me, right? I can't just have it over some generic type, because I'm actually making some assumptions based on the type it is in the service. Exactly, exactly. So that means I'm doing something wrong, I guess, is what you're
Starting point is 00:42:06 saying. Yeah, the idea is to keep the algebra as generic as possible, because if you have a generic algebra, then it's easier to modularize and easier to write tests also. So in this case, I guess, because they can be run out of order, they should be, like, applicative, and something should magically run them out of order. Right, right. There are APIs. If you look at the latest cats release, there are APIs where you can run things in parallel
Starting point is 00:42:33 if it's an applicative. There's a Parallel type class, I think, which has recently been released. So I should do things in terms of that. And then in my tests, I don't need to actually use an async operation. Using things like Monix as part of your unit test doesn't make much sense to me. Yeah, it's just where I live when I open up my IDE.
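The testability technique discussed above, defining the algebra over an abstract monad and then specializing it to Future in production but to an identity monad in tests, can be sketched in plain Scala like this. The `Monad` trait and `AccountService` are illustrative stand-ins (in practice you would use cats' `Monad`), and the stubbed balance lookup is invented.

```scala
import scala.concurrent.{ExecutionContext, Future}

// A hand-rolled Monad typeclass -- in practice, cats.Monad.
trait Monad[F[_]] {
  def pure[A](a: A): F[A]
  def flatMap[A, B](fa: F[A])(f: A => F[B]): F[B]
}

// The algebra is written against an abstract F, not against Future.
class AccountService[F[_]](implicit M: Monad[F]) {
  def balance(no: String): F[BigDecimal] = M.pure(BigDecimal(100)) // stubbed lookup
  def credited(no: String, amt: BigDecimal): F[BigDecimal] =
    M.flatMap(balance(no))(b => M.pure(b + amt))
}

// Identity "monad" for tests: no execution context, no awaiting.
type Id[A] = A
implicit val idMonad: Monad[Id] = new Monad[Id] {
  def pure[A](a: A): Id[A] = a
  def flatMap[A, B](fa: Id[A])(f: A => Id[B]): Id[B] = f(fa)
}

// Future instance for production use.
implicit def futureMonad(implicit ec: ExecutionContext): Monad[Future] =
  new Monad[Future] {
    def pure[A](a: A): Future[A] = Future.successful(a)
    def flatMap[A, B](fa: Future[A])(f: A => Future[B]): Future[B] = fa.flatMap(f)
  }

// In a test, the same service runs synchronously:
val testService = new AccountService[Id]
val r = testService.credited("a-1", 50) // plain BigDecimal(150), no Await needed
```

The same `AccountService[Future]` works in production; only the instance supplied at the edge changes.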
Starting point is 00:42:58 Something I really liked in your book, which I had heard of before but hadn't really quite understood, is phantom types. I wonder if you could explain phantom types. Yeah, actually, phantom types are there to honor some of the constraints. It's not a business type per se. It doesn't have any business connotation. But the trick is that you can use the power of the type system in order to ensure that invalid abstractions are never instantiated. Okay, do you have an example? Right now, I don't remember any example, and it's difficult in audio. But there's a nice example in the book where I talk about, possibly it was a use case for loan approval or something like that,
Starting point is 00:44:26 where it was not possible to pass an illegal state as part of the API. Yeah, I think that's exactly the example. So you have, like, a loan application process and you have some sort of loan object. And then the key thing, I think, is you're introducing a type that doesn't do anything except, you know, enforce something in the type system. That's what I was telling, that it doesn't have any business connotation. The types don't have any business implication. They're there just in order to ensure that the user cannot pass anything to the abstraction which is illegal. So the illegal states are by definition inadmissible.
Starting point is 00:45:21 So that's the basic spirit of phantom types: making illegal states unrepresentable. So this example, I'll just describe it because I think it's kind of neat. So let's say you have a loan and it has to go through, like, two phases of approval. So this loan object goes into approval stage one and it comes out with, like, its approved bit flipped, right? And then it goes into the second approval stage, and there it gets this bit flipped. But the problem with that, right, is, like, you never want it to go to stage two if it hasn't first hit stage one. I may be butchering the example from the book.
Starting point is 00:46:04 Right. So the idea is we add a type parameter, and the type parameter just says, like, stage one, stage two. For instance, it's like a sealed trait of two different stages. And then you make this loan have this type parameter stage one. And then when you return it from stage one, you actually just return a new object, which has the type parameter stage two. It's actually the exact same object, right? Right, right, right. Exactly.
Starting point is 00:46:32 But it means that you can never write code. It's almost like a developer ergonomics thing, right? Yes. A developer can never write code that calls the second stage if they have an object that's in the first stage. Yes. Alternatively, you could do this validation at runtime as part of the business logic. But I thought that doing it through the type system was kind of neat, because first it enforces these constraints
Starting point is 00:47:00 at compile time. You don't need to write any tests for this. So kind of neat, I thought, this technique of using phantom types to enforce constraints. Yeah, that's very true, right? So you could have your stage two just check a flag and say, like, stage one wasn't passed, let's return an error or whatever. But the phantom type makes that impossible, right? You're encoding that if statement actually into the type system. Yeah. Once again, that philosophy that the compiler will be your tester.
Starting point is 00:47:30 The compiler can test your code. You don't need to write anything. I thought it was a great example and a great phrase: make illegal states unrepresentable. Yeah. Actually, this phrase was first used by Yaron Minsky of Jane Street. And I think I acknowledged it also in the book that it's not my terminology. It was first used by Yaron Minsky in a blog post on phantom types. Yeah, it's a quote.
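A minimal sketch of the loan-approval example as described above, with invented names (the book's actual encoding may differ). The stage parameter appears nowhere in the loan's data, which is what makes it a phantom.

```scala
// Phantom types encoding the approval stages.
sealed trait Stage
sealed trait Applied  extends Stage
sealed trait StageOne extends Stage
sealed trait StageTwo extends Stage

// The Stage parameter never appears in the body: a pure phantom.
case class Loan[S <: Stage](no: String, amount: BigDecimal)

def approveStageOne(l: Loan[Applied]): Loan[StageOne] =
  Loan[StageOne](l.no, l.amount) // same data, new phantom tag

def approveStageTwo(l: Loan[StageOne]): Loan[StageTwo] =
  Loan[StageTwo](l.no, l.amount)

val loan     = Loan[Applied]("l-1", 1000)
val approved = approveStageTwo(approveStageOne(loan)) // compiles: stages in order
// approveStageTwo(loan) // would NOT compile: Loan[Applied] is not Loan[StageOne]
```

The "if statement" checking that stage one has run is gone; skipping a stage is now a type error the compiler catches for free.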
Starting point is 00:48:03 So how about a smart constructor? What's a smart constructor? A smart constructor is supposed to abstract you from some of the... once again, it enforces the contracts when you are constructing an object. One of the core ideas of domain-driven design is that when you have a domain object, it cannot be an invalid one. The constructor should ultimately spit out a completely validated object, and the idea of a smart constructor is to act as a layer on top of the basic constructor to enforce these constraints. So, yeah.
Starting point is 00:48:46 So, in one sense, if you have an object A, the constructor always gives you an instance of A, but the smart constructor can give you an instance of an Option[A] or an Either or something like that, indicating that the construction process may fail also. Because you cannot always get a fully validated, fully constructed object,
Starting point is 00:49:12 fully valid object out of your construction process. So instead of throwing exceptions, which are not referentially transparent, a better idea will be to indicate it once again as part of the type system: that I'm returning an Option[A], which means that it may have failed. So the idea of a smart constructor is to make this claim that if I hand you over a fully constructed domain object, then it will be
Starting point is 00:49:39 a valid one so um an example maybe so how would do this? Let's say that you have your your account and you want to create an account and you pass in a money, which is your starting balance. But you want to enforce that that amount can't be negative. How would you do that? Yeah. So that's part of the validation. All these validations will go in the smart constructor. And if any of these validations fail, then you return a different data type, means you return either a disjunction, like either, or you return an option or something like that, or a try also. So the idea is that once again, you cannot publish a domain object,
Starting point is 00:50:20 which is not valid, and you cannot throw an exception because exceptions are not good citizens of functional programming. So the basic idea is this: to publish only valid domain objects, or indicate to the user that I couldn't construct a domain object out of this, and to do this in a referentially transparent way through pure values and not through exceptions. Because if you just checked the balance in your constructor and it was negative, like, all you could do is throw an exception, and that's not referentially transparent. But we can return a None, which we can't
Starting point is 00:51:01 from just standard construction, but in our smart constructor, we could have whatever type possible. So the standard technique is to make your constructors inaccessible to the general user. Instead, publish smart constructors. And this is sort of like a factory, I guess. Yeah.
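A sketch of the smart-constructor technique, with an invented `Account` type. A plain class is used rather than a case class, since in Scala 2 a case class's generated `apply` would bypass the private constructor.

```scala
// The real constructor is private; the only public way in validates first.
final class Account private (val no: String, val balance: BigDecimal) {
  override def toString: String = s"Account($no, $balance)"
}

object Account {
  // Returns Either instead of throwing: construction failure is a pure value.
  def open(no: String, balance: BigDecimal): Either[String, Account] =
    if (no.trim.isEmpty)  Left("account number cannot be empty")
    else if (balance < 0) Left("starting balance cannot be negative")
    else                  Right(new Account(no, balance))
}

Account.open("a-1", 100) // Right(...) -- a fully validated account
Account.open("a-1", -1)  // Left("starting balance cannot be negative")
```

Any `Account` a caller can get their hands on is valid by construction, and the failure case is an ordinary value the caller must handle.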
Starting point is 00:51:20 So what is a Kleisli? Kleisli is one encoding of the reader monad. It's basically function application, but the Kleisli abstraction gives you a number of combinators which you can compose together. So that's the advantage of using Kleisli over native function application. So when I say Kleisli[Option, A, B], it actually means A => Option[B] or something like that. I forget the order of this, but it takes an A and gives you an Option[B]. So it's basically a function which takes an A and gives you an Option[B]. But the moment you declare it as a Kleisli, you have at your disposal lots of combinators which the Kleisli abstraction gives you. So in the book, I think there are some examples where I use the combinators on Kleisli to design some
Starting point is 00:51:54 DSL kind of thing, which makes the code much more readable and composable. I guess, yeah, because Kleisli gives you the andThen, right? You can kind of chain these things and then apply right at the end. Yes. I mean, I guess, circling back to the whole main topic, that's one of the key points, I think: if you're using these algebras, like Kleisli or a monad, there exist combinators and higher-order functions that give you functionality that you don't have to write. Is that one of the advantages of this approach? Yeah, that's one of the advantages. And the other advantage is that using algebra-based programming,
Starting point is 00:53:06 there is a clear separation between the construction of your abstraction and the execution of your abstraction. For example, if you use Monix Task or if you use cats IO, you can build your entire abstraction before you can execute it.
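Both points, Kleisli's `andThen` combinator and building the whole pipeline before running it, can be sketched with a hand-rolled Kleisli fixed to Option (cats' Kleisli is polymorphic in the effect; the pipeline names here are invented):

```scala
// A minimal Kleisli arrow over Option -- illustrative only.
case class Kleisli[A, B](run: A => Option[B]) {
  // Compose two effectful functions into one: A => Option[C].
  def andThen[C](that: Kleisli[B, C]): Kleisli[A, C] =
    Kleisli(a => run(a).flatMap(that.run))
}

// Hypothetical pipeline: parse, then validate.
val parse: Kleisli[String, Int] = Kleisli(s => s.toIntOption)
val positive: Kleisli[Int, Int] = Kleisli(n => if (n > 0) Some(n) else None)

// Construction phase: the pipeline is composed, nothing has executed yet.
val parsePositive = parse.andThen(positive)

// Execution phase: apply right at the end.
parsePositive.run("42")   // Some(42)
parsePositive.run("-7")   // None: parses but fails the predicate
parsePositive.run("oops") // None: fails to parse
```

The composed `parsePositive` is itself just a value, so it can be passed around, composed further, or tested before anything runs.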
Starting point is 00:53:21 So there's a clear separation between the two phases, which is not so straightforward if you use abstractions like future; you don't have this delineation. Yeah, that separation, I think that makes a lot of sense. Yeah, that's true. So you work at Lightbend. What is it like to work there? What are you working on? Yeah, actually, I'm working on a team called the Fast Data team, where we are developing tool sets for streaming platforms. And very recently, over the last three, four weeks,
Starting point is 00:53:55 I was working on a library on Kafka Streams. And last week, we open sourced it also. So Kafka Streams has a Java API, but those APIs are very painful to work with if you're working with Scala. So we wrote a couple of Scala libraries and open sourced them. And we are planning to have them integrated with the Kafka community also. So I was mostly working on this
Starting point is 00:54:23 over the last three, four weeks. But generally, I'm working on this streaming platform tool set, which we call the Fast Data Platform, for which the 1.0 general availability (GA) release is out. Looks quite interesting and quite exciting to me. So what is Fast Data? Yeah, Fast Data is a platform where you get things like Spark, Flink, Kafka, etc. built on top of DC/OS, Mesos DC/OS. We are also planning to add Kubernetes to the equation.
Starting point is 00:55:02 And you can deploy your applications, and the entire management of the resources and the monitoring part will be taken care of by the platform. Yeah, I think this has been a great talk. Thank you so much for coming here. I really enjoyed your book. There were a lot of concepts that took me a little while to understand, but I think it's a great book about
Starting point is 00:55:21 kind of the design patterns of functional programming. Thank you so much for your time. Yeah, I loved talking to you as well. So that was the interview. If you liked this episode, do me a huge favor and think about who else might like it and share it with them. For me, sharing a tech podcast that I like just means sharing it in my company's Slack group. There's an off-topic channel and I just throw it in there. So if it's the same at your work, yeah, share it out right now. The main thing I'm trying to do is just grow the podcast listenership, so people sharing it if they like it really helps me out.
Starting point is 00:55:54 until next time. Thank you so much for listening.
