CppCast - WebAssembly and nxxm

Episode Date: October 12, 2018

Rob and Jason are joined by Damien Buhl to discuss the current state of WebAssembly, nxxm and the belle::vue library. Damien was a Qt on Android contributor, which he presented at Droidcon 2011 in Berlin. He maintains ADAPT_STRUCT and Boost.Fusion. For a long time Damien worked for a 100-year-old IoT company and now works on nxxm. He has a passion for C++ and JavaScript.

News
- An Introduction to Torch (PyTorch) C++ front end
- CppQuiz Android App now available
- std::any: How, when and why

Damien Buhl
- @daminetreg

Links
- nxxm
- belle::vue open source project
- C++ Everywhere with WebAssembly
- C++ Everywhere with WebAssembly Slidedeck
- Calls between JavaScript and WebAssembly are finally fast

Sponsors
- Backtrace

Hosts
- @robwirving
- @lefticus

Transcript
Starting point is 00:00:00 Episode 170 of CppCast with guest Damien Buhl, recorded October 10th, 2018. This episode of CppCast is sponsored by Backtrace, the turnkey debugging platform that helps you spend less time debugging and more time building. Get to the root cause quickly with detailed information at your fingertips. Start your free trial at backtrace.io slash cppcast. In this episode, we discussed std::any and PyTorch. Then we talked to Damien Buhl. Damien talks to us about the current state of WebAssembly and nxxm. Welcome to episode 170 of CppCast, the first podcast for C++ developers by C++ developers. I'm your host, Rob Irving, joined by my co-host, Jason Turner. Jason, how are you doing today?
Starting point is 00:01:24 I'm doing all right. Rob, how are you doing? I'm doing okay. Not too much to report. A week or two after CppCon now. Yeah, I'm fully recovered. I'm recovered. You? Yeah, I think so. I kind of lost my voice almost, which is a little weird for me. But I don't know, some allergies or cold or something. But it's a conference with a thousand people there. The probability of picking up a cold is not low.
Starting point is 00:01:50 Okay, well, at the top of the episode I'd like to read a piece of feedback. This week I got a tweet from Kuba Sejdak, who I guess was probably at CppCon, and he wrote, CppCast will never be the same after seeing Rob Irving in full size. And I got a couple comments like this one at the conference, so people were just not expecting me to be as tall as I am. Actually had one person say that I don't sound tall on the podcast, which I didn't quite understand. I don't know how you sound tall or not tall. Well, you know, you always build a mental picture somehow. I've seen this with, like, radio hosts, right? You're like, oh, I have some idea what this person looks like, and then when you actually see them on a billboard or something, you're like, that is not the right person. How tall are you? I'm six five. And you can, in the picture I posted
Starting point is 00:02:44 for last week's episode, you can kind of clearly see the height difference between you and I, with us next to Phil Nash and Anastasia and Timur. Yeah, I mean, you're nearly a foot taller than I am. Really? Yeah, I'm looking this up in centimeters, just so... six foot five inches is 195 centimeters. Oh, just under two meters. Yeah. Yeah.
Starting point is 00:03:11 Well, I'm sorry that CPCast will not be the same for you, Kuba. I hope you continue listening to the show. We'd love to hear your thoughts about the show as well. You can always reach out to us on Facebook, Twitter, or email us at feedback at cpcast.com. And don't forget to leave us a review on iTunes. Joining us today is Damian Buhl. Damian was a cute on Android contributor, which he presented at DroidCon 2011 in Berlin.
Starting point is 00:03:36 He maintains AdaptStruct and BoostFusion. For a long time, Damian worked for a 100-year-old IoT company and now works on Nixum. He has a passion for C++ and JavaScript. Damien, welcome to the show. Hi, welcome. I'm really thankful for you to have me on the show. It's my first time doing a podcast, so thank you for bearing with my French accent. I like your introduction that you made of me.
Starting point is 00:04:03 That's really great. Thank you very much. All right. I have a question that's not directly related to your bio, but I did notice that you're from the southeast corner of France, where it looks like the town that you're in actually has a German name, but you're in France, and you're like right there where Germany, France, and Switzerland all kind of meet, right?
Starting point is 00:04:26 Yes, that's the Dreiländer Heck. We say it in our language. It's like the free border region. We have Germany, Switzerland, and France at the same place. And most of the people here just live in France and go work in Switzerland and do their, buy their things in Germany because it's cheaper to buy in Germany, better paid in Switzerland and better to live in France.
Starting point is 00:04:52 But I like the three countries. That sounds like you're cheating. I think the other citizens of France might think that as well, but no, it's really fair and I think every country is providing from the situation. My brother lives in northwestern Switzerland, and I've experienced just how much cheaper
Starting point is 00:05:15 things are in Germany than in that corner of Switzerland. It's crazy how much cheaper things can be. Yeah, the difference is high. That's true. But I think a lot of Swiss people go make their shopping as well in Germany. I'm sorry, go ahead. I am half Swiss and half French. My mother is Swiss. My father is French. So I have both citizenships.
Starting point is 00:05:42 So yeah, that's a nice place to live for me, I live on the border really so that's good Since you do sometimes shop in Germany and stuff, do you speak German also, do you do most of your business in French and in English? I speak German also
Starting point is 00:05:59 and I speak Swiss German also which is a quite different dialect and at work most of the time we speak English but there is and I speak Swiss German also, which is a quite different dialect. And at work, most of the time we speak English, but during the coffee pause and so, there is German as well, yes. Swiss German, yeah. Sounds like an interesting place to live, really.
Starting point is 00:06:22 Yeah, it's a lot of culture at the same place and it's really interesting. I like it. But I really love the US as I discovered in CppCon. Oh, so you were at CppCon. Yes, and it was an awesome moment. And you got to do some travel afterwards, I guess?
Starting point is 00:06:38 I had the chance to rent a big muscle car, like a Dodge Charger V8 and could drive down to San Francisco stopping in the different places, looking at the redwood trees
Starting point is 00:06:52 I enjoyed Bellevue as well, in Seattle it was really a nice experience, I loved it a lot Wow, that's yeah, that's probably a larger car with more more horsepower than you would normally find in france or switzerland right yes right we have small cars yeah so it was fun then i assume it was a lot of fun to drive it i was just uh like a bit worried about the speed limit
Starting point is 00:07:23 so i couldn't try the engine as i would like to but yeah that's that's normal yeah that's fair that sounds awesome though okay well damien we got a couple news articles to discuss uh feel free to comment any of these then we'll talk more about web assembly and nixon okay thanks. This first article is an introduction to Torch, or PyTorch, C++'s front end. The C++ front end for PyTorch. I'm not really too familiar
Starting point is 00:07:54 with PyTorch. I guess I've heard about it a couple times online, but they just released their version 1.0 and with the release they introduced the C++ front end before, I guess it was only Python. Yeah, so this is a machine learning library as far as I can tell. Now I know most of these things are written in C++ for performance reasons.
Starting point is 00:08:15 So without digging a little bit deeper my gut reaction is this is a C++ frontend to a Python library that's written in C++. Although I'm assuming it's actually the library for the backend, libtorch, is what I understand having onto the website. That makes sense, yeah. So if you
Starting point is 00:08:33 are comfortable with C++, then you can use this instead of the Python front end that maybe a bunch of scientists are more familiar with. I'm sorry, Damon, were you going to say something? I think there was a lightning talk about that at CppCon the first day evening. And I'm not sure if it's the same author as the article as well, but he showed how to use in like five minutes the PyTorch API.
Starting point is 00:08:58 And he was working for Facebook, so far as I can tell. And it was interesting actually to see that now we can finally do machine learning in C++, as you say, with a C++ backend for a Python library. But the problem with machine learning, I think it's learning all the theory that comes with. I have a lot of difficulties with it. I'm trying to learn. I have a friend who is better at that. But there is a lot of knowledge
Starting point is 00:09:31 that you have to get before being able to do any machine learning. It's not just like using an API. It requires a lot of theoretical background that I'm missing for the moment, sadly. A Humble Bundle just had a machine learning bundle. So I have like 14 books on it and i've read like a part of two of them and i still don't know what i'm doing with any of it so i ended up like i started with a like fundamentals book which is all the math
Starting point is 00:09:59 theory i'm like i don't need to know this so now i'm just at the like how will machine learning change the world book, which is more like meant for managers, just to see if I can wrap my mind around it. Yeah, it's a hard thing to learn, I think. Yeah, and like you said, it looks like there's a lot of background in theory. I tried once to make,
Starting point is 00:10:19 for the meeting C++ in Berlin, like for two years, I tried with a friend to make, there was a t-shirt with all the speakers, famous speakers in it. And I tried to make a program which will take photos of the Meeting C++ from Twitter and modify the portraits.
Starting point is 00:10:38 And we tried that hard and we didn't sleep at night and then we were just sick and we didn't get any results. So, yes. That's funny. and we didn't sleep at night and then we were just sick and we didn't get any results. That's funny. Okay. Next article we have is about CBP Quiz, which we
Starting point is 00:10:56 talked about with Anders Knotten like three episodes ago, I think. He just made an announcement that there is now an android app available for cp quiz which i meant to download before the show oh you got yours okay i got it going yep nice how does it work well well i installed it right before we actually did the interview but just enough to look at it to see that like i was i was honestly not expecting a lot because a lot of these like
Starting point is 00:11:25 android apps for a for a website right you know they're not that special but it's it's pretty slick it's got really good like color syntax highlighting and stuff going on and it tells you the difficulty of the question or whatever and lets you answer it and apparently synchronizes with the website so you can even use it when you're offline. Oh, awesome. I'm downright impressed, honestly. Very cool. Isn't there any iOS app, so iPhone application for CppQuiz? He's only mentioned the Android app, but I'm curious as to how it was Sergey Vysolchenko
Starting point is 00:12:02 who implemented the Android app. I'm not sure if he's... Oh, he says it's open source. So if someone wants to go and make it for iOS, the code is already on his GitHub, so you can try to do that. Okay, so it should be something I should try because I just have an iPhone right now. I don't have Android anymore, so...
Starting point is 00:12:22 It appears that it is all written in Java, so you'd have to rewrite it in a cross-platform language or just write it in Objective-C or something. But yeah, the source is available. I'll put the link to that in the show notes too. I think also a good takeaway from this is
Starting point is 00:12:39 it was kind of a little community collaboration here, so Anders also enhanced the website's API so that it was easier to make the Android app. So theoretically, it'd be easier to make another one in the future for a different platform too. Right.
Starting point is 00:12:56 Okay, and then the last article I have is a post from the Visual C++ blog. And they've been doing these, not really Visual Studio-related posts that are kind of just going over newer language features. And in this one, they're talking about std any and kind of going over the use case of it and comparing it with, you know,
Starting point is 00:13:17 what a C developer might do with using, like, Void Star. And I thought it was a pretty good post, good overview of the feature. And it's not something I think we've really talked about before on the show. Right, Jason. Hi, it's maybe been mentioned once or twice,
Starting point is 00:13:31 but yeah, I don't think we've, yeah. I find interesting, uh, student and, uh, student violence as well.
Starting point is 00:13:38 I mean, I think if I mentioned they will make a new article, the next one about violence, but I'm a bit worried that the standard got std any in the form as boost any was, so to say, because at the time where std any came in the standard, at the same time, there was this new nice feature that's a template variable.
Starting point is 00:14:00 And I think nowadays there will be a way better system to make any type or to make a variant that don't expect a type list. It will be by using this template member variable for static data member so that you could actually store in an order map this pointer as a key for the storage of the special type that you could actually store in an unordered map the disk pointer as a key for the storage of the special type that you want to store in the any and the value as another template type in the unordered map.
Starting point is 00:14:36 And I think nowadays there will be a possibility to make like a variant type or like any type that you don't need to specify the list, the type list before, but where you can then visit with the type list. And what I don't like so much about Any is that Any requires you to make a lot of if is this type, then cast it to this type.
Starting point is 00:14:58 And in comparison to variant, where variant you can just pass a visitor to it, or a lambda, and lambda will just be called with the type which is inside it. I think now that C++14 has this static data member template variable,
Starting point is 00:15:14 I think we could do a variant type which has the simplicity and flexibility of any but the advantages of using visitors to access the types. I think it's something that we could do. And it might have many applications for interfacing as well with JavaScript and so on.
Starting point is 00:15:34 So yeah, this was my quote about Any, because I find it a bit too flexible, too like a JavaScript variable. That is true. And I've, I've found only a couple of real use cases for something like that, like in my scripting engine where you truly can put anything in there and you have no way of knowing and you have to determine at runtime what's going on.
Starting point is 00:15:57 But for the most part, like you're saying, if, if you have any way of limiting the set of types that are allowed, and that's like a, it's like a single sentence at the bottom of the article is like, oh, and by the way, if you already know what types can be allowed,
Starting point is 00:16:11 used optional or variant, like. Yeah. There is this grid, uh, over any type, uh, in boost,
Starting point is 00:16:19 uh, versus libraries named the type erasure. And they have an, any type to which you can add constraints, and they have their own required syntax of concepts. So you have an any type that has the function pushback. Then you can store at runtime any type that has a function pushback. And it is codes to run independently
Starting point is 00:16:45 or to be written independently of the type that has pushback in the end. And I hope with the new concept coming in the next episode of Standard that this library gets even better because then we can perhaps directly use the concept syntax for this boost type erasure. I liked it a lot, this library.
Starting point is 00:17:02 But it has a small learning curve with error messages as well, naturally. Well, yeah, I'm sure it does with error messages. I wasn't familiar with that one. Okay. Well, Damien, we first talked about WebAssembly like three years ago with JF Bastion, one of our earlier episodes. And I think at that time it was still just a proposal. Can you tell us a little bit about the current state of WebAssembly to get us started?
Starting point is 00:17:32 Oh, yeah, I would like to do that. I was at the CppCon to give a talk about WebAssembly. So in the first half of my talk, it's not yet online, but when it will be, the first half is about WebAssembly, or it is built in detail and so. But I feel like an imposter in comparison to GF Bastien, who is working on WebAssembly directly and making it happen, as well as people like Kripke and Mozilla. But I'm a user for a long time. I made a lot of customer projects before with IACMJS already. I was always like to my friend, hey, you know, the web, we should build it in C++.
Starting point is 00:18:11 But the friend was like telling, no, you know, it's JavaScript and there is nothing else. And now it's something that happens. WebAssembly is available in a really broad way. I mean, you have all browsers. You have Edge, you have Google Chrome, you have Firefox, you have Safari, who are capable to run WebAssembly. And all other browsers that are not yet capable,
Starting point is 00:18:36 they can run ASM.js. So you can build your application for WebAssembly and have a fallback for the quite old browsers. But WebAssembly in its current state also for the quite old browsers. But WebAssembly in its current state also runs as well on Node.js. And what is really interesting about it is that they have been
Starting point is 00:18:53 working hard in making a minimum viable product that everybody can use. Nowadays, we have a standard WebAssembly binary format and WebAssembly instruction set. And WebAssembly binary format, it's like Portable Executable or ELF. And at this difference, first, it has no legacy, which is great.
Starting point is 00:19:21 And on the other end, it also defines the instruction set. So it's totally portable because the Windows portable executable sadly isn't totally portable because the instructions are either for x86, either for ARM, but you have to have prepared them beforehand. And with the WebAssembly standard
Starting point is 00:19:44 that is now implemented in our browser and available, it's actually possible to load the WebAssembly file in any browser and to have it be compiled for your platform at the moment when it's loaded. And when I mean compiled, it's just translated from the WebAssembly instruction set 1.2.1 to the x86 or ARM or whatever platform your WebAssembly runtime runs on instruction set. And what is really interesting is that the CPU instructions that are generated by the WebAssembly runtime are something which is really near to what you can write in the WebAssembly machine model because you will have all the optimization
Starting point is 00:20:28 you have in your compiler, like O2, O3. You will have the ability to check your WebAssembly instructions, how they look like. And you might make good guesses of how exactly it will be transformed into x86 or into ARM. Naturally, you don't have the full control of if it's using a move or a moveL or so, but you will be really new to know that what you produce for a binary
Starting point is 00:20:59 will be executed exactly in the form that you are used to if you were to compile it for the native platform as well. And this is also interesting for performance, I think. It has a really good impact. Regarding what WebAssembly is still perhaps lacking in this current implementation and proposal, but it's moving fast, is that it doesn't support direct access to garbage-collected reference objects. And this is important for the web. When you are building a web app, you want to access HTML.
Starting point is 00:21:37 And HTML5 is made on the document object model. This document object model is like a representation of all the HTML nodes you have. On this, the browser implementers and all the web standards are defined is that they are garbage collected. So you cannot get an easy handle to this in C++. You have to wrap them in a JavaScript value
Starting point is 00:22:03 and to make a mapping with the JavaScript bridge that WebAssembly offers you, because you have the possibility to import an export function from WebAssembly to the host, and the host is JavaScript currently. But it could be any host, as
Starting point is 00:22:19 it is specified. And so this is coming in the next revision of the standard as well as threads, but threads could be already used if Spectre wouldn't have put his
Starting point is 00:22:35 way in. Because if you use tools like mScripten that we ship in NXXM, for example, you can enable the use of thread because there is a Pthread implementation based on workers and on shard array buffer. But because of Spectre, it only works for the moment in Chrome.
Starting point is 00:22:59 All of our browsers disabled that. Hopefully, they will enable it again. So you can, with WebAssembly as it is released now, you can actually compile any of your C++ applications as long as you don't do multi-threading, I would say.
Starting point is 00:23:15 And as long as you are implementing it in a more asynchronous way and not having a blocking code that is preventing the runtime to run. implementing it in a more asynchronous way and not having a blocking code that is preventing the runtime to run for browser. Okay, so to make sure I understood,
Starting point is 00:23:35 you take your compiler, you compile to WebAssembly, and this is like an abstraction of what an assembly language might be. And then that is directly translated into whatever assembly language or whatever machine code you're actually running on. Yes, exactly. Yeah.
Starting point is 00:23:52 Are there any concerns about like, I mean, like if I, if I did an array out of bounds access or like access past the end of memory or something, what happens? Actually, they,
Starting point is 00:24:02 uh, you are jailed in, um, in, in, in a big array for the WebAssembly host. When they load your WebAssembly module, the WebAssembly module is loaded with a memory, the isEap, which is actually a big array and the WebAssembly host is just not allowing you to write with your with
Starting point is 00:24:28 malloc or so in any other places than inside this array. So you cannot really make you can crash internally and then it's your problem and you have to debug it but you cannot crash the host for this reason. The stack is however not as you are used to.
Starting point is 00:24:47 It is in the WebAssembly engine. The stack is... Some part of the stack might be, because of the C++ implementation, might land in the heap, which is this big byte array that we can access from JavaScript, for example. But mostly it would be managed by the JavaScript engine.
Starting point is 00:25:09 So a WebAssembly function will have a stack in the JavaScript engine. Okay, so now I'm going off, I think, from what anyone would practically want to do, but I'm quite curious. If you are gelled into this giant array, as you said, and you can make JavaScript calls, can my JavaScript code directly poke into that giant array if I wanted to manipulate the direct memory state
Starting point is 00:25:39 of my C++ program from JavaScript? Yes, you can do such kind of thing. Neat. plus plus program from javascript yes you can do such kind of thing neat uh and you can share like like like an image and like a picture but it will be naturally it is it it it has a different uh very different techniques the better one would be to use a shard array buffer that has been disabled by Spectre, but you can generate something in C++ and copy it in one operation to
Starting point is 00:26:14 JavaScript. So it's not live access for the moment with a shard array buffer, but there is a little copy. But this copy will be optimized by the engine in the end, so it won't be a real copy. So for you, it will be the same in terms of performance. That sounds like it could be fun,
Starting point is 00:26:32 but not very practical, really, to directly access the memory of your C++ program. For an image, I think it makes sense for rendering, but you have other options. You can directly access WebGL. You can easily draw in a canvas from WebAssembly. So, yeah, it might be
Starting point is 00:26:52 easier to do it in different ways. Since you mentioned WebGL, and it kind of sounded like the interface into the DOM is not nearly as good as it could be, because we have to jump through these JavaScript wrappers, if I understood correctly.
Starting point is 00:27:08 Yes. Is the WebGL interface also similarly encumbered, or is it like a nice, clean interface from a C++ perspective? It is clean. It is not encumbered in the same way because the OpenGL works a bit differently so that you can ship your OpenGL code to the WebGL engine and it will render what you gave him.
Starting point is 00:27:31 So, yeah, this is not encoded in the same way, yeah. Okay. But you will lose, like, accessibility features and so if you render everything in OpenGL, I'm not sure. This is a... Because it's... For a web page, it's better to render in the DOM for various reasons, like selecting
Starting point is 00:27:48 or organizing for different screen sizes, and so it might be easier to do it with it. Right. I'm mostly focused on the DOM story, because I think if you have made a video game, or if you
Starting point is 00:28:03 made an app which is using OpenGL, like if you made, for example, a cute app and you wanted to render it via OpenGL in the web, there is a lot of work that has been done in this regard, and it's worked pretty well since a long time. So I think what is nowadays missing for WebAssembly is a good binding to the host. And the people at Mozilla and Google and so on that are working on WebAssembly is a good binding to the host. And the people at Mozilla and Google and so on
Starting point is 00:28:28 that are working on WebAssembly, they are working on these changes. And this is known as the garbage collection and the reference type proposal. And so this is in the phase of being implemented. So it will be really soon, I think, that we will be able to use it in the major browser like I think next year
Starting point is 00:28:50 to directly access the DOM without any overhead but even too there is an overhead nowadays this overhead is not that at all big because just today there was a nice article that was posted by the Mozilla people.
Starting point is 00:29:07 The author is Lynn Clark, and she just wrote an article reporting that calls between JavaScript and WebAssembly are finally fast. So in the last Firefox beta, they have improved the way the calls are done. We voted this new specification, and it happens to even be faster for mathematic API like Math Random to access it from Web
Starting point is 00:29:36 Assembly instead of accessing from JavaScript. because the engine doesn't have to figure out that accessing from when accessing the API from WebAssembly that which are the types that are used in JavaScript the types can always change and they are boxed they have to be checked and she explained that actually the C++
Starting point is 00:29:59 can always be inlined with the call to the built-in of the browser so that it never has to check if the type is the right one and that the calls are finally faster calling directly from WebAssembly into the browser. That's awesome. I find it nice. And she's saying, or he's saying, I don't know if it's he or she, that this will be generalized to other browsers soon.
Starting point is 00:30:30 So she might have heard something from other browser vendors. I don't know. Wow. I wanted to interrupt this discussion for just a moment to bring you a word from our sponsors. Backtrace is a debugging platform that improves software quality, reliability, and support by bringing deep introspection and automation throughout the software error lifecycle.
Starting point is 00:30:50 Spend less time debugging and reduce your mean time to resolution by using the first and only platform to combine symbolic debugging, error aggregation, and state analysis. At the time of error, Backtrace jumps into action, capturing detailed dumps of application and environmental state. Backtrace then performs automated analysis on process memory and executable code to classify errors and highlight important signals such as heap corruption, malware, and much more. This data is aggregated and archived in a centralized object store, providing your team a single system to investigate errors across your environments. Join industry leaders like Fastly, Message Systems, and AppNexus that
Starting point is 00:31:25 use Backtrace to modernize their debugging infrastructure. It's free to try, minutes to set up, fully featured with no commitment necessary. Check them out at backtrace.io slash cppcast. Well, maybe we should talk a little bit about some of the announcements you made
Starting point is 00:31:42 during your CPCon talk and the work you're doing at Nixum related to WebAssembly? Oh yeah, which is nice. Actually, we have been working with two colleagues of mine on a project to be able
Starting point is 00:31:56 actually to build web apps in C++ actually to access the DOM and so, because as I said, drawing a canvas and making 3D operations is already possible. What we wanted is make it easier to write an HTML app. And for this, you need the compiler, you need the libraries that you want to access the DOM.
Starting point is 00:32:16 You need the library to pass JSON, and you need to be able to write C++ near to your HTML. And this is what we did at NXXM. It's actually an HTML5 compiler, which is GitHub connected and WebAssembly ready. So the idea is that you can include any repository that is on GitHub that contains C++ code, that it gets downloaded and compiled
Starting point is 00:32:44 for your WebAssembly platform, and that then you can just write C++ directly in line inside HTML. So you open an HTML file and you write script type text slash C++. Of course. And then you just write void main or int main and you make a CO,
Starting point is 00:33:06 so it will be printed to the console thanks to the Emscripten bindings, or you use the Bellevue open source project that I announced at the conference that allows you to access the DOM. So Bellevue is named Bellevue because actually nowadays the web people, they are no more using JavaScript most of the DOM. So Bellevue is named Bellevue because actually nowadays the web people, they are no more using JavaScript most of the time. They are now saying we use TypeScript or similar frameworks
Starting point is 00:33:33 or languages that brings static typing to JavaScript. And I'm like, why would you make a new language for static typing? We have a marvelous language for static typing, which is C++. It is all about having compile time error when you give the wrong type. So why not write websites in C++? And that's what I presented in the talk. The TypeScript, people, they use the Seattle skyline.
Starting point is 00:34:05 And so I thought we should name our framework Bellevue because the conference is happening in Bellevue. And this is what nxxm is bringing you: setting up your machine, whether you are on Windows, macOS or Linux, and letting you simply build and write an HTML file, add some C++ to it, link some library, like say I want Boost, like say I want nlohmann JSON, whatever, and it will just build
Starting point is 00:34:37 just from this, because it will read your code to understand how to build it. It doesn't need any special build recipe. This is another main goal of NXXM: reducing the overhead of build files that we have in our projects. Because actually, the people in JavaScript, they have a nice tool, it's named npm, and they have a nice language, and it doesn't require them to write anything about the build. So why should C++ people bother with writing CMakeLists files if we actually could generate them?
Starting point is 00:35:08 Because if you have a main function in some file, then it will be an application. If you don't have a main function, and it's not used only in an application, then it might be a library. Then if it's a library, you can upload it to GitHub and use it. And this is what we implemented,
Starting point is 00:35:25 building by convention, or building by reading your code to deduce which are the CMakeLists files that should have been written. And for users that don't like it, they can still write CMakeLists files. They can just put a marker to say, okay, I don't want to use your automatism, I want to write CMakeLists. This is what we've been working on sleeplessly over the last year, so now we are on
Starting point is 00:35:52 the first release. We have been using it before internally for other projects, but now we are going outside, and yeah, we are working to improve that. So if you have a CMakeLists file, it will use that by default. In the current release it should use it, but there is a bug that makes it throw, but
Starting point is 00:36:17 this will be corrected by next week. And if you put ignore CMakeLists, then it will just ignore it, yeah, in this case. And it'll just try to guess, basically, what kind of project it is. Yes, exactly.
Starting point is 00:36:34 Are you, okay, I'm going to say something bold here. Are you claiming to have solved the packaging problem for C++? Actually, I think we will never solve it in C++. It's impossible. I think there are a lot of good tries, but
Starting point is 00:36:52 we don't call nxxm a C++ package manager, because we think packages are not a natural way to think in C++. I mean, in Java and C#, you have the idea of assemblies or of packages. In C++, for the moment, we don't have modules. When we will have code that declares to be a module,
Starting point is 00:37:14 then we can begin to speak of packages. But most of the time, C++ users, they don't want to pay for what they don't use. So they are used to including just what they need. And I think we try to have a different approach. You don't want a package, which per se is a thing that has a lot more than what you need. You just want to include something.
Starting point is 00:37:38 And we have a feature that we will ship by the end of this year, because it had some bugs, but it looks at the includes that you do in your code, and it looks with the GitHub search API which repository has this file and which repository is the best ranked. And it selects the best guess, which you can override if it's not working. And it just takes this repository because you included it.
Starting point is 00:38:04 But you don't have to say, I want the special package, the complete Boost. I just want to use boost-fusion, or I just want to use boost/fusion/include/adapt_struct, and we will look at this, take it, download it, compile it, fit it in the sysroot, so that we have a much finer, more granular way to consume other people's code than using packages that someone has to build, actually. So that's what we are building. And also, to clarify, the point is always building WebAssembly files, or asm.js, if you need backward compatibility.
Starting point is 00:38:43 You're not doing this for native builds, or are you also? We are, by default, focused on WebAssembly, but we build NXXM with NXXM. So with our tools, you can just specify other targets. When you start the program, we ship like 200 toolchain files, but there are four that are guaranteed to work. That is the Windows one, the Linux one, the Mac OS one,
Starting point is 00:39:12 and above all, the WebAssembly one, which is the one that we focus on the most, actually. This one is the first-class citizen in the product. But we would like to, in the future, have, like, if you have an HTML file and you are not compiling for WebAssembly, but you are compiling for Android, we would like to make a UI in a browser,
Starting point is 00:39:36 in an embedded browser inside your app so that you can directly have a mobile app based on your WebAssembly app that you built. This would allow progressive apps for mobile to be built more easily. This is something that we are looking into. But this is a future point on the roadmap.
Starting point is 00:39:55 So what are we talking about from a performance perspective, compiling our C++ to WebAssembly? I know you said it's directly translated, but how good is that really? There are two things happening in the current browser
Starting point is 00:40:10 and Node.js implementations. First, when it downloads the WebAssembly file, there will be a first compiler, which is really simple. In V8, the engine that is used in Node, it is named Liftoff. The principle, which Mozilla named a two-tier compiler, is that actually they load your WebAssembly instructions
Starting point is 00:40:31 and translate them one-to-one to the CPU instructions of the machine. Okay. And then at this moment, you have the performance that you would have had if you had compiled for x86, or if you had compiled for ARM. So you have the same. But then what is even better is that you have in a browser,
Starting point is 00:40:51 in this big JavaScript implementation, you have a nice JIT engine, which is really fast. And if they have a CPU core which is free, which is available, they will then optimize the build again with the knowledge that the JIT has. So this is something I didn't measure yet, but I bet you might have, for some algorithms, better performance than if you would directly
Starting point is 00:41:19 only compile it for the native platform. Because the JIT engine will even optimize more with the runtime knowledge of the value that your compiler didn't know before. Because these are things that I saw in Java
Starting point is 00:41:32 in the past: for some algorithms, Java was faster than C++, because they knew that the input value will never change within a loop. But a C++ compiler,
Starting point is 00:41:45 if the variable is given at runtime, like via a parameter on the command line, cannot know it. So it can do its best to optimize it, but it won't be that much faster. And there we have the best of both worlds, actually. So I think this is something that has to be measured, and I would like to make a blog post about it when I know it,
Starting point is 00:42:05 but I think it's even better than if you would compile it only for the native platform, if we look only at speed. If you look at memory, naturally you will use more memory, because you have a WebAssembly runtime there, and you have a compiler, like a translator of WebAssembly to your current instruction set. So this is a trade-off that you have.
Starting point is 00:42:25 But if you look at the different JavaScript implementations, they are nowadays not that big. I mean, in most cases, you can include them, and there are even, like, small JavaScript implementations for microcontrollers. So I think a WebAssembly implementation for microcontrollers shouldn't be that big either and should be doable, but this is not yet
Starting point is 00:42:50 available. You know, having started my CS career effectively in like 1996, when it was the height of the beginning of Java, and this was the promise of Java everywhere and Java and all the embedded devices and little sticks that could run Java bytecode and all of these things.
Starting point is 00:43:09 That kind of took off. I mean, sure, our Android devices use Java, but I don't know, it feels like this whole system, this Java ecosystem was designed to be this, and I don't think it ever really did what the original creators wanted it to do. And now we have JavaScript, which is in no way related to Java, that kind of accidentally became the de facto operating system of the internet, and now it's
Starting point is 00:43:35 better performance, theoretically, because we can take advantage of the last two decades of optimizing C++ compilers on top of all of this other knowledge. I don't know. It seems like a bizarre, bizarro land or something at the moment.
Starting point is 00:43:52 Yes, for sure. It's really strange. But I think the WebAssembly runtimes are really simple, and they just can take advantage of this optimization. But as I said, at the first moment when you load the file, in this two-tier compiler,
Starting point is 00:44:07 you have a one-to-one translation of your instructions, and they might be changed if it can get faster, but they won't change if it can't get faster. So it's a really static world, so to say. WebAssembly is really static in comparison to Java bytecode. It's an instruction set which is really meant to be transformed to a native instruction set.
Starting point is 00:44:28 Java bytecode was not really built for this. They wanted it, but it was not the case in a lot of versions of Java. They made some improvements, but the programming language they are using is not thought to target a native
Starting point is 00:44:44 machine. C++ is way more thought for it, so it helps actually having the performance natively with the same model, because you have another input. It's not Java, it's C++. I think this makes a big difference. Yeah. I would like to see a Compiler Explorer mode.
Starting point is 00:45:04 I know that Compiler Explorer has some support for WebAssembly and for LLVM IR and stuff, but I would like to see the one where I can see, like, if I were to compile the same snippet directly to ARM versus compiling it to WebAssembly, and what the ARM translation looks like, for example, just to play with this directly and be like, oh, okay, I see what it did there. I mean, that's the whole point of Compiler Explorer, right?
Starting point is 00:45:29 Is so that we can prove these things to ourselves. I would like to prove it to myself personally, just for educational purposes. In case Matt Godbolt's listening. Yeah, I know Matt listens. So, Matt. Get on it. Hopefully. It would be good, but it will require
Starting point is 00:45:46 the help from the browser vendors in this case, because it's not possible with the current browsers. But if you tweak a browser, you can do that. It's something I did, the exercise, manually, but it would be good to have it in Compiler Explorer to be able to check it
Starting point is 00:46:01 more often. I did it for small examples; in my slides, I put one example of just returning 43. Yes. And it happens to be the same in the end, but yeah, it's a simple example, so it would be good to have this in Compiler Explorer.
Starting point is 00:46:16 Definitely. I would help if someone needs it. Since you've mentioned Node, is there an option for Node to say, please take this WebAssembly file and spit out to me what you translated it to? Oh, I'm not sure. I don't know yet. Okay.
Starting point is 00:46:36 I don't know. I'm sorry, that's off track from your actual project, but it caught my attention, the interest there. That would be one way to do it then: it could be a post-processor in the toolchain for Compiler Explorer. But anyhow. But the code of the V8
Starting point is 00:46:54 engine used inside Node is pretty well written and it's pretty clear. I spent a lot of time reading it at the beginning of the WebAssembly thing and so on, and I think it might be easy to export the data out of the code if it's not
Starting point is 00:47:09 already implemented. It must be easy to do that. Because with the Liftoff compiler and the others, it's easy to take the buffer out and bring them to some disassembler. It's something we should do, actually. I've written it down. One question I had is, it sounds like, you know, this is all open-sourced work that you had internally, and I'm just wondering, what kind of WebAssembly projects have you actually worked on that are, like,
Starting point is 00:47:37 you know, shipping web applications, just so I have a sense of what type of things people are doing in WebAssembly these days versus JavaScript. One thing where WebAssembly made sense in comparison to JavaScript, some people would say, is a data logger. So we had a lot of time series. And the problem is that if you send them to JavaScript in a binary format, then the JavaScript code has to transform this binary form into some of its types and then render them in a canvas to present billions of time series in a canvas.
Starting point is 00:48:14 And this took a while, and the user experience was really bad. So what we did is use a WebAssembly web server and packed C++ structures that are trivially copyable. I show in my talk a small snippet of code to do that. And we just memcpy all the samples in this WebAssembly format, I mean in the C++ binary representation of the struct,
Starting point is 00:48:46 because we are using a WebAssembly server on a WebAssembly client, which is compiled with the same compiler, by NXXM. Then we just can copy all the time series over the wire on a WebSocket connection, using the WebSocket API from Node and
Starting point is 00:49:05 via C++. And then we just throw it to the browser, and the browser just static-casts it to the types, which have the same memory layout because it's compiled with the same compiler and same options, and it just then
Starting point is 00:49:21 uses the canvas bindings to print them out. And it reduced the load time by 40%. And it improved the speed of the app significantly. And we could even display more data than we could before. And this was for a data science problem that the pharma industry has. So it was to look at data and filter them out
Starting point is 00:49:49 to remove all the noise, but the noise is not always the same in each of their samples. So they cannot have a generic algorithm. They have to look at the data in detail easily to then deduce the algorithm, so to say. But they need... It's a human process.
Starting point is 00:50:07 For the moment it's not possible to do it automatically. There is not enough theory about it. So they needed a tool to look at the scanned data easily and in a fast way. So it happened to work really great in this case. And for other things, we did just normal website-like applications with C++, because they already had a code base in C++.
Starting point is 00:50:31 I did a web frontend to it. It was easier to reuse all the domain model and bind it to some HTML5. This doesn't have so much impact. I
Starting point is 00:50:48 mean, it was a great experience to code for the web in this way. There is a lot that we had to do ourselves, because for the moment these bindings to the APIs are not done. But this is what we try to do now: trying to make this open source. So we are rewriting it in an open source way,
Starting point is 00:51:04 because we cannot open source it as-is, because it's written for some other people, so they don't want this to be open source. So we rewrite it differently now, but it's happening. And by the end of the year, I think we will have a really good status where people can indeed use the framework.
Starting point is 00:51:21 And we welcome if people would help us and have ideas how to do that better. We're really open and we will assist people. Where can people go to check out the open source library? Actually, if you go on nxxm.github.io, if you click on the documentation link, there is the link to the Bellevue open source effort. Yeah, this is that.
Starting point is 00:51:46 And it is on GitHub. It is github.com slash nxxm slash Bellevue. Okay. Is there anything else you wanted to share before we let you go? We have a roadmap. We will do Visual Studio Code integration because for the moment,
Starting point is 00:52:02 it's a colorful, emoji-full, command-line client, the NXXM, and we would like to make it a GUI application, so we will integrate it in Visual Studio Code with some buttons and so on, to make it easier to use, and syntax highlighting. Yeah, we are working on this as well for the end of the year. Sounds great. Thank you so much for your time today, Damien. Thank you for your time too, Jason and Rob.
Starting point is 00:52:29 It was a really nice time to have you there and to discuss together. That's really great. Thank you very much. Yeah. Thank you. Thank you. Thanks so much for listening in as we chat about C++. We'd love to hear what you think of the podcast.
Starting point is 00:52:42 Please let us know if we're discussing the stuff you're interested in, or if you have a suggestion for a topic, we'd love to hear what you think of the podcast. Please let us know if we're discussing the stuff you're interested in, or if you have a suggestion for a topic, we'd love to hear about that too. You can email all your thoughts to feedback at cppcast.com. We'd also appreciate if you can like CppCast on Facebook and follow CppCast on Twitter. You can also follow me at Rob W. Irving and Jason at Lefticus on Twitter. We'd also like to thank all our patrons who help support the show through Patreon. If you'd like to support us on Patreon, you can do so at patreon.com slash cppcast.
Starting point is 00:53:11 And of course, you can find all that info and the show notes on the podcast website at cppcast.com. Theme music for this episode is provided by podcastthemes.com.
