CppCast - Transducers

Episode Date: December 23, 2015

Rob and Jason are joined by Juan Pedro Bolivar Puente to discuss transducers and the Atria library. Juanpe is a Spanish software engineer currently based in Berlin, Germany. Since 2011 he has worked for Ableton, where he has helped build novel musical platforms like Push and Live and where he coordinates the "Open Source Guild", helping adoption of and contribution to FLOSS. He is most experienced in C++ and Python and likes tinkering with languages like Haskell or Clojure. He is an advocate for "modern C++" and pushes for adoption of declarative and functional paradigms in the programming mainstream. He is also an open source activist and maintainer of a couple of official GNU packages, like Psychosynth, which introduces new realtime audio processing techniques leveraging the newest C++ standards.

News

Going Large Scale with C++ Part 1
Support for Android CMake projects in Visual Studio

Juan Pedro Bolivar Puente

Juan's website

Links

CppCon 2015: Juan Pedro Bolívar Puente "Transducers: from Clojure to C++"
Atria on GitHub
psychosynth
Embracing Conway's law
Victor Laskin's Blog: C++14 Transducers

Transcript
Starting point is 00:00:00 This episode of CppCast is sponsored by Undo Software. Debugging C++ is hard, which is why Undo Software's technology has proven to reduce debugging time by up to two-thirds. Memory corruptions, resource leaks, race conditions, and logic errors can now be fixed quickly and easily. So visit undo-software.com to find out how its next-generation debugging technology can help you find and fix your bugs in minutes, not weeks. Episode 39 of CppCast with guest Juan Pedro Bolivar Puente, recorded December 23, 2015. In this episode, we discuss developing large-scale C++ applications.
Starting point is 00:00:54 And we'll talk to Juan Pedro Bolivar Puente from Ableton. Juan will tell us all about transducers and the open source Atria library. Welcome to episode 39 of CppCast, the only podcast for C++ developers by C++ developers. I'm your host, Rob Irving, joined by my co-host, Jason Turner. Jason, how are you doing today? Doing all right, Rob. Looking forward to Christmas. How about you? Yeah, looking forward to the holidays. We're recording this just the day before Christmas Eve. I haven't decided when we should publish this episode, actually. It would be kind of silly
Starting point is 00:01:46 to release it on Christmas, I think. Well, it could be the Merry Christmas episode. I don't know. Yeah, but who's going to listen to that? Maybe release it sometime next week in between Christmas and New Year's. I'll have to see. Well, anyway,
Starting point is 00:02:02 at the top of every episode, I like to read a piece of feedback. This week, I like to read a piece of feedback. This week, I've actually gotten several emails from this listener who I think just discovered the show recently. C.W. Holman, I think he sent me about four or five emails. This one, he says, thanks for the breadth of content. Having the news snippets is even more broadening. Having the news at the start is useful in cases where the main topic is of limited use so the news can be followed before skipping out so thanks for listening i'm glad you enjoy the news uh i hope you're not skipping out on the actual content of too many episodes i think they're all uh definitely worth listening to anyway though we'd love to hear your thoughts about the show you can always reach out
Starting point is 00:02:38 to us on facebook twitter or email us at feedback at cppcast.com. And don't forget to leave us those reviews on iTunes. So joining us today is Juan Pedro Bolivar Puente. And I believe we can just call him Juanpe for short. Juanpe is a Spanish software engineer currently based in Berlin, Germany. Since 2011, he has worked for Ableton, where he has helped build novel musical platforms like Push and Live,
Starting point is 00:03:03 and where he coordinates the open source guilt, helping the adoption contribution to floss. He is most experienced in C++ and Python and likes tinkering with languages like Haskell or Clojure. He is an advocate for modern C++ and pushes for adoption of declarative and functional paradigms in the programming mainstream. He is also an open source activist
Starting point is 00:03:23 and maintainer of a couple of new packages like PsychoSynth, which introduces new real-time audio processing techniques leveraging the newest C++ standards. Juanpe, welcome to the show. Hi, thank you very much. Yeah, I think you're the first guest we've had who actually maintains an official GNU package. Oh, I'm happy about that. Yeah, do you want to tell us about Psychosynth for just a moment? What does that do? Psychosynth is a synthesizer. It's inspired by
Starting point is 00:03:53 the Reactable. It's a project that started in the Pompeu Fabra in Barcelona. I was amazed by one of their videos. That Reactable thing is like a hardware synthesizer. And I wanted to have something purely software that I could play with. So I made that.
Starting point is 00:04:11 But eventually it became my master thesis project. And I focused more on the technical side of things. Like how to do using C++11. It was like when the standard was not even ready yet. How to do all the engine part, like the multi-threaded communication and all this stuff that is normally quite important for all the application. Okay, so we have a couple news articles to go through
Starting point is 00:04:35 before we start talking about transducers. This first one is part of a two-part series going large-scale with C++, which is an article by John Lackos. And it doesn't get too deep into specific technicals about writing large-scale C++. It's kind of more general.
Starting point is 00:04:55 But I might be interested in reading his book, although apparently it's about 20 years old, so maybe I could use an update. Jason, what did you think about this one? Well, so both parts are up, if you didn't notice so you can actually read part two yet
Starting point is 00:05:09 um i think it it is a it's not very c++ specific like you said but it is a good general just overview of what kind of development challenges you might encounter while growing a project from a small project to a large project and And I actually found it pretty informative. Yeah. Interesting. I know a while back we had listeners asking us to kind of go, have a guest talk about large-scale C++ issues when you get into the realm of having 40 million line code bases.
Starting point is 00:05:41 I wonder if John Lagos might be a good guest for that topic. He might be. He does work that topic. He might be. He does work with large projects, I believe. Yeah, sounds like. Okay. And then the next article is coming from Visual C++ Bog. This one's pretty interesting. We've talked about how they're introducing
Starting point is 00:05:59 Clang into Visual C++ and now they're bringing CMake into C++ into their Android tool chain. So if you want to build and debug Android from Visual Studio, you can now do it using a CMake project. And the interesting thing here also is that they are actually contributing to their own, well, so far they have their own fork of CMake, but they're trying to contribute it back to the main CMake repository, which is pretty cool.
Starting point is 00:06:28 Jason, what did you think about this? I wanted to clarify for our listeners here that when I first read the title, I was expecting this to mean that Visual Studio was now able to directly handle CMake projects. And that's not accurate. This is about fixing CMake so that the output it generates is something better for Visual Studio to handle. Its Visual Studio projects are better somehow. I couldn't find the details on what exactly they needed to fix. though like maybe they will start you know contributing more to cmake making it so you know to start at least that cmake you know produces visual studio projects that are the same way visual studio would produce those projects itself right i don't know uh one page you have any thoughts about this article uh not much about the cmake one actually i i don't really work with visual studio okay but the one about John Lacos, actually, I found quite interesting.
Starting point is 00:07:26 I saw John Lacos' session in CPPCon last year, where he talked about value semantics. He has a very academic approach, but very educative, I think. So his book is definitely a good source for for this but I was missing in the in the article one aspect which is that it seemed to me from the article that he puts a lot of emphasis in code as something that has inherent value and how to like how this how code kind of have these structures that emerge naturally from it.
Starting point is 00:08:07 And there is a very interesting article I read the other day from Andy Wingo that suggested that oftentimes it's the opposite. Like you have a structure in your organization and then code reflects the structure of this organization. So I wish, or maybe like he elaborates more on this in the book, like looking at the other side of it, not like how code grows in itself into forming these very rational structures, but more how can you change your organization and your processes
Starting point is 00:08:39 to make code work in specific ways, or how to adapt your code to work for your organization. Do I make sense? Yeah. Yeah, it was definitely a very interesting read. We should definitely reach out to John Lakers and see if we can get him on the show sometime. All right. Okay, so Juanpe, do you want to give us an overview of transducers yeah so transducers are a abstraction that was first invented in the world
Starting point is 00:09:11 of closure which is a lips dialect implemented on top of the java virtual machine and this abstraction its purpose is that it wants to be able to describe transformations over sequential processes without describing what's the process itself. Now, this sounds very abstract, and in a way it is, but to give you a more concrete example, let's look, for example, at the STL function transform. The STL function transform takes an input collection and an output collection, described as iterators, and then applies a mapping to every element and copies them to the output collection. In a way, you could think that this function couples two aspects. One is the process of copying things to the second collection,
Starting point is 00:10:09 and the other one is the process of, to every element, applying the transformation. If you have a way to just have the second part, the one that says, I apply the transformation, in a way that is composable
Starting point is 00:10:25 so you can compose it to other transformations like filtering or concatenating or, I don't know, joining, sipping, using generators. Then you can compose all the transformations that you could do on something that is a sequence-like process. In this case, copying things from one collection to another that then you can apply to other kind of processes. is a sequence-like process, in this case copying things from one collection to another,
Starting point is 00:10:48 that then you can apply to other kind of processes, like, for example, a stream that comes from the network, for example, which is something that you cannot model with an iterator. Okay. So, as you said, it's a relatively new concept coming from Clojure. Can you tell us a little bit more about the history, like how transducers were started in that language? So transducers were started by Rich Hickey, who was the inventor of Clojure. And their original goal was to abstract actually the way they do CSP. CSP is what's called concurrent sequential processes. And it's an academic term from the 70s, actually, that is now in vogue again,
Starting point is 00:11:33 thanks to the languages Clojure and also Go. What these concurrent sequential processes do is that they allow you to do concurrency without a shared state so you have different tasks running in a concurrent way and then instead of mutating variables in well variables that are protected by a mutex or something like that they just put and pull things from channels right okay uh what which he realized was that these channels are a sequence right because you put things and you take things from them so there is a sequence in time of things that go through the channel and at some point he was writing a library of combinators for this channel. So he could map, filter them, and all this stuff. And he realized, well, I'm rewriting all the code that I already have for vectors.
Starting point is 00:12:33 Why do I have to rewrite all this code? And that's when he invented transducers. To be able to have an abstraction that allows you to write all the combinators that you have in functional languages over sequences in a way that can be applied to anything, even to things that follow a push model instead of just a pull model. Okay. Jason, you look like you're thinking of something.
Starting point is 00:12:59 Just listening. Okay. So what types of sequential processes can transducers be used for? So as I said, one of the simplest sequential processes that can be applied to are on sequential processes on collections, right? So this would be, for example, over a copy. So you're copying things from one collection to another, and you apply the transformation on them. on collections, right? So this would be, for example, over a copy. So you're copying things from one collection to another and you apply the transformation on them. You expand tuples, you zip them, you map them, you filter it,
Starting point is 00:13:34 or you even concatenate elements. For example, if you have a vector of vectors, you can concatenate all of them and produce a flat vector using transducers without creating intermediate copies and all this stuff. This is something simple, but in a way you could say you already have many other libraries to do this. I don't know, range adapters and all this stuff. The interesting thing about transducers is that they can also be applied
Starting point is 00:14:01 to things that follow a push model. So iterator-based processes are pool models because in an iterator, you have to pull elements from the iterator, right? The iterator doesn't move forward by itself. You have to go and say, hey, operator plus plus, hey, asterisk. You know, you have to pull things from it. That's why it's called a pool model. Then you have a push model, which is the model you have in a channel or in C++. We're more familiar with a socket, right?
Starting point is 00:14:34 A socket actually gives you information. In a socket library, you sometimes register a callback or you select on it, but it's the socket that is going to decide when you have information available. Another example is a boost signal. Many of you may have used this library already which allows you... it's basically C++ implementations of what in C sharp it's called delegates, right? Which is an implementation of the observer pattern. Basically it's your register callbacks and they get called eventually when something else emits the event.
Starting point is 00:15:09 You can transform a boost signal and produce a different boost signal using transducers. So I'm trying to really wrap my mind around what transducers are. As Rob already pointed out, I was sitting here staring into the distance. So from the most simple standpoint we've got a source of data be it a stream or an iterator it can be anything that's part of the point is that transducers are abstracting us away from what type of data source we have is that correct exactly that's correct so you go ahead when when you describe a transducer a transducer another of its properties is that it's a value, right?
Starting point is 00:15:45 This is important to say in C++ because in C++ not everything is a value. Like, free functions are not values. Pointer to functions are. So a transducer is a value that you can pass around, and it describes the transformation, like this mapping, blah, blah, blah. But it's not bound to a source of data. This is unlike, for example, in a range adapter, which you have a transform range adapter, and you could say, oh, it's similar, right? Because you have the transform range adapter,
Starting point is 00:16:15 and you combine it with something else, and then you get something that you can adapt further. The difference with the range adapter is that the range adapter actually is only a value when you apply it already to a collection so to pass the adapter around you have to pass also a reference to the collection that you're transforming around right that makes sense and also further there is this distinction that i made about pull and push based sequences where the adapter can only adapt things that can be described as iterators, where the transducer can also transform things that cannot be described as iterators,
Starting point is 00:16:55 but still form a sequence of values that are pushed at some point in time. So a range plus a range adapter could be used as the input data into a transducer, but a range adapter by itself could not be because there's no value to it. Sorry, I didn't hear the... A range adapter itself could not be passed to transducer because there's no value to a range adapter without the data source you were saying exactly and the abstract the abstraction of the range adapter itself cannot be composed without the original data source on the other hand if you have a transducer that describes a transform that comes from one part of your program right like you say transform uh x plus two so this describes a transformation that gives you for every element twice it.
Starting point is 00:17:47 And in another part of a program you have something that says filter, I don't know, elements greater than 0. I'm only interested in stuff that is greater than 0. You can compose these two. Actually using just function composition because this is another important aspect of transducers.
Starting point is 00:18:02 They are functions. So you compose the two of them using compose, and then you get a new transducer. And this transducer can be passed around. It describes now the transformation of multiplying by two and filtering elements greater than zero and can be passed around without still carrying any reference to any collection or any particular source of data.
Starting point is 00:18:27 Okay, so I was just going to ask, how does a transducer differ from composable functions? And so you led into that. So I guess it sounds like transducers may or may not produce one-to-one output, where a composable function pretty much has to produce one-to-one. Actually, to look at the more precise definition of a transducer, a transducer is a function, actually, that takes as an argument a reducing function and returns another reducing function.
Starting point is 00:19:03 This is a bit complicated, actually, to understand without looking at the code. Anyways, the reducing function definition is then the function that you normally pass to reduce in closure or to accumulate in C++. Are you familiar with the accumulate function from the standard library?
Starting point is 00:19:24 Yes. So this accumulate function from the standard library? Yes. So this accumulate function is actually the most general way to iterate over a, or to apply a process over a sequence described by iterators because uh the function that you pass to accumulate basically takes one argument two arguments sorry the first one being the current state and the second one being the next input of the sequence and then it returns a new state and okay which gets passed in the next iteration and so forth so this abstraction of the reducing function is what actually made Rich Hickey realize that if you can do any transformation over a sequence just by using accumulate and passing the appropriate reducing function, in fact it is easy to show that you can implement the standard functions, transform, filter, remove, and so forth,
Starting point is 00:20:27 just using accumulate, then what happens if you have a function that takes the function that you will pass to accumulate and returns another function that you will pass to accumulate? What you are doing, basically, is transforming what you would do to the sequence without really doing it to the sequence okay and that's why in a way you you're actually right when saying like functions are
Starting point is 00:20:53 composable and they are uh and they are basically really the underlying abstraction here it's just like we add basically one level of indirection in a way. Like we say, instead of composing the functions that operate on the values directly, we compose functions that operate on the function that would actually do something to the values or that would do something to the collection. So from a practical standpoint now, what kind of results are you seeing in your C++ code?
Starting point is 00:21:24 Does it take longer to compile? Do you get better run times? from a practical standpoint now, what kind of results are you seeing in your C++ code? Are you, does it take longer to compile? Do you get better run times? Are you looking at, you know, more, more compact,
Starting point is 00:21:33 more, what am I trying to say? Better, better C++ in general. So, so the original motivation actually we had to implement this in c++ was because we were working on uh on a very interactive application and we wanted to see if we could have a more functional approach to to doing interactive applications i mean there are ways that are very stateful, but that are established in doing this kind of application
Starting point is 00:22:09 where you have a data model that exposes a lot of properties that have associated signals, right? That would be the QT model, for example. Right. But the problem with this model is that, for example, it's hard to test, actually. Once you start composing and making a more and more complicated system, you end up needing the whole system to be able to say, okay, if I push this button, this thing should turn red, right?
Starting point is 00:22:41 With a transducer, what we wanted to be able to do is to be able to say, OK, well, I have the bottom clicks. It's a sequence. Right. If you think about it as a transducer that somehow maps the sequence of clicks filters them through their positions to check that it's in the right bounding box of the button and blah blah blah and eventually i something gets the color red now i can describe this transformation without connecting signals to the actual components. And in a unit test, for example, I can just have an input vector describing the mouse clicks and an output vector describing the color changes that I want to see. And I can test the core of my logic
Starting point is 00:23:41 without needing to assemble the whole system. In a way, my logic has become more reusable. So that was the original motivation, actually. And in this way, I think it has succeeded in that. When implementing it, actually, we have seen some other interesting side effects. One of them is that they're very performant, actually, because they are just, as I said, functions that adapt other functions. And in C++, as long as you don't erase the type of the functions,
Starting point is 00:24:11 the compiler ends up inlining everything, which means that basically using transducers is most of the time as efficient as just handwriting the code. So you achieve high efficiency, while at the same time, you have made the system much more modular and much more testable. Okay.
Starting point is 00:24:33 That sounds very interesting because I've definitely been frustrated in the past with how difficult it can be to test the logic and UIs specifically. Yeah. I mean, it's, it's a very, yeah, it's a very,
Starting point is 00:24:46 um, interesting topic actually that, I mean, this is not the only, uh, the only attempt to solve it. Uh, there is another one,
Starting point is 00:24:56 which is called RX or reactive extensions. Um, I think some Microsoft people did an implementation in C plus plus as well. Um, and the underlying idea, I think it's, did an implementation in C++ as well. And the underlying idea, I think, is the same. Treating inputs and treating all the stuff that happens in the real world that normally we treat in our programs in a very ad hoc way, trying to systematize them, think of them as collections, so we can really use the tools that we know and
Starting point is 00:25:26 that are easier to test and are maybe more explicit in some way to operate on these collections, right? Right. If our listeners want to try out like a GUI example using the transducers that we're talking about, is there a good sample project somewhere online where they could test that out? I feel like this is something you really need to kind of get into and see it working. Yeah.
Starting point is 00:25:52 Sadly, I cannot really point to a GUI project that uses it. I mean, we have code that uses it, but it's not open source. There is the library Atria that we open source which has an implementation of transducers so if you want to try it out you can use this implementation it's quite complete and it's thoroughly unit tested so the unit test should also be inspiring on how could you actually use this. I also included in the library an example that shows you how to transform Boost signals. And since Boost signals are very often implemented to,
Starting point is 00:26:36 or sorry, used to implement this more traditional way of writing GUI applications, I think that this could be a very good starting point for someone that wants to try to adapt an existing code base that uses boost signals into something that leverages transducers in some points. I think also, well, RX and like reactive extensions has more examples of user interfaces
Starting point is 00:27:06 written this way. So this could be also a starting point for someone looking at this more functional approach to UI. I think that, well, in the Clojure community, on the other hand, there are examples more towards the web but you have to be able to swallow parenthesis if you want to look into that Well since you mentioned your open source library
Starting point is 00:27:35 Atria, it looks like your transducers implementation is just a small part of the library what else does the library have? So the library has different modules. One of them is focused also on BoostVariant, and it tries to provide a nicer syntax for working with BoostVariant, which is an excellent library, I mean, BoostVariant.
Starting point is 00:28:01 But it's a bit dated. It predates C++11, so the way you use it, I think it's not so dated it predates C++11 so the way you use it I think it's not so well adapted to it and with this what it allows you is basically to pass lambdas to decompose the variant instead of having to write
Starting point is 00:28:16 a functor manually there is also a implementation of Eric Nibler's concept checking technique manually. There is also an implementation of Eric Nibbler's concept checking technique for C++ 11. It's a bit more lightweight than
Starting point is 00:28:34 Eric Nibbler's. It was very pragmatic. We just did it because we wanted to use it through the library. And Transducers is maybe one of the biggest modules in it there is another module that might be interesting to those uh wanting to look into more functional approach to doing data models for interactive applications which is called funken um this module on the other hand
Starting point is 00:29:00 is experimental uh so it's more like a bag of ideas that we were trying at specific moments but it might be quite interesting because this one again takes concepts that might be a bit alien to the C++ community. In this case
Starting point is 00:29:19 it takes concepts from Haskell and specifically the concept of lenses. And basically, it's a combination of lenses and transducers to be able to say, okay, my data model, instead of being a mutable collection of objects with lots of boost signals in it, it's going to be composed only of planar data structs. And I'm going to have it in a variable that I call the state. And with transducers, I'm going to make projections of it that I can observe.
Starting point is 00:30:02 So instead of needing to put signals everywhere, you can use plain old data that is very easy to copy around. You can pass to other threads to transform. And to make it observable and be able to connect it to a query framework in the end, you use
Starting point is 00:30:21 transducers. Interesting. I want to interrupt this discussion for just a moment to bring you a word from our sponsors. Do you spend half your programming time finding and fixing errors? Is printf your default go-to when you encounter a bug? At Undo Software, they know that debugging C++ can be hard. That is why their next-generation debugging technology for Linux and Android is designed for C++ users and is proven to reduce your debugging time by up to two-thirds. Harness
Starting point is 00:30:51 the reversible debugging capabilities of UndoDB, the reversible debugger for Linux and Android, and step backwards as well as forwards to find the root cause of a bug. Use watchpoints to reverse continue straight to the time a variable was last changed memory corruptions resource leaks race conditions and hard to find bugs can now be solved quickly and easily visit undo-software.com for more information and start fixing bugs in minutes not weeks so since you brought up um eric niebler's work on concepts uh obviously he's also working on ranges, and we had him on the show a couple weeks ago talking about that. Will ranges support coming to C++ change the way transducers are used or can be used?
Starting point is 00:31:37 Well, I think, as I said before, their scope is a bit different. They complement each other well, though, in the sense that ranges are more focused on the pool transformation, pool-based transformations, and transducers are more for pushy things. On the other hand, you can implement a range adapter, actually, that just takes a transducer and converts the transducer into a range adapter. So I would like to try that, actually, once the proposal is more stable.
Starting point is 00:32:21 I have already done something similar to a range adapter, but using what's available in C++11 in the current implementation of the library it's called sequence, so it takes basically an iterator and a transducer and it returns an adapted version of the range of the underlying iterator and I think there can be a lot of feedback
Starting point is 00:32:47 between the two libraries. I wish I'd have a chance to meet Eric Niebler maybe next CppCon or some other event to have a chat further on how could parts of the range library be implemented in terms of transducers and the other way around also, like having transducers leverage better what the ranges library is doing to provide more useful transformations on iterator-based things. Sounds like it could be a very interesting conversation.
Starting point is 00:33:22 Yeah. So going back to Atria, what platforms are currently supported there? So at the moment, the library compiles with recent versions of GCC and Clang. Sadly, it doesn't work on Visual Studio because expression is finite, basically. That's the missing point.
Starting point is 00:33:47 Have you tried Visual Studio 2015 Update 1 yet, since it has some support for expression is finite? I haven't, but I'm looking forward, actually, because I think that would make the library much more useful for a much wider community. So, yeah, I should totally do that. Okay. Well, if you're listening, maybe check it out
Starting point is 00:34:10 and see if it will compile for update one. So what's the future of Atria look like? It's open source. Is it accepting contributions from outside of Ableton? Yes, it's accepting contributions actually i would love to hear of more uh windows people uh and see if they can give me a hand with this task of adapting it to work on visual studio there was someone that actually did some work already in the past, but this person backed that at some point. But yeah, we're looking forward for contributions
Starting point is 00:34:52 and also to listen to people using it in different contexts and see how this could improve the library. At the moment, I would like to focus on polishing the transducers part, making it work on Visual Studio, and probably adapting or upgrading the implementation to just use C++14. At the moment, I kept the implementation working on C++11 as well, but this adds a lot of noise to a few parts
Starting point is 00:35:24 where generic lambdas could make a huge difference in how the code is expressed so so yeah these are the short-term plans in the mid-term probably we would like to explore further how to do the the data model part side of things, right? Like how to provide more functional tools for doing a reactive dynamic application. There's a note on the GitHub page saying the project, you know, is still under active development and the API is not stable yet. Do you have any idea when the API might become more stable?
Starting point is 00:36:07 So, yeah, the comment is about the library in general. The transducers part have, the transducers are already quite stable, I would say. So, yeah, maybe I should update the node
Starting point is 00:36:23 and reference the different modules separately, because not all modules have the same level of stability in their APIs, for sure. Okay. Jason, do you have any other questions? I do not believe I do. Okay. Well, is there anything else you want to share about Atria or your work at Ableton before we let you go, Juanpe? Well, not really.
Starting point is 00:36:49 As I said, it's open source. So I'm looking forward for people to take a look at it and give feedback and see how they can find different interesting use cases for it. I know some people that want to use it for doing audio processing for example at DSP which is I think like a great idea and yeah I still don't have any plans for which C++ conferences
Starting point is 00:37:16 will I attend this year but I hope also I can meet any interesting people and people interested in the library as well in the library as well in the next upcoming events and meet in person, which is where
Starting point is 00:37:30 a lot of the interesting ideas flourish for sure. I know you made it to CppCon 2015. Did you go to meeting C++ as well? That was in your neck of the woods, right? Yeah, it was and I didn't. I feel very sad about it. A few colleagues of mine did and they said it was a great event, actually.
Starting point is 00:37:47 There was someone, actually, Victor, that mentioned transducers in his session. He made his own implementation of them. Oh, wow. So that might be something to take a look at. I think he also made a blog post recently about his implementation of Transducer. So I can send you a link in case you can put it on the podcast description as well.
Starting point is 00:38:19 Sure. That'd be good, yeah. Okay, so where can people find you and find the Atria library online? So Atria can be found on GitHub. So it's on github.com slash ableton slash Atria. And they can find me there. And my email is written down there.
Starting point is 00:38:41 They can contact me or open a pull request or an issue or whatever. And I will be happy to answer their questions for sure. Okay. Well, thank you so much for your time today, Juanpe. Well, thank you very much. It was a pleasure. Thank you. Thanks so much for listening as we chat about C++.
Starting point is 00:38:59 I'd love to hear what you think of the podcast. Please let me know if we're discussing the stuff you're interested in or if you have a suggestion for a topic, I'd love to hear that also. You can email all your thoughts to feedback at cppcast.com. I'd also appreciate if you can follow CppCast on Twitter and like CppCast on Facebook. And of course, you can find all that info and the show notes on the podcast website at cppcast.com. Theme music for this episode is provided by podcastthemes.com.
