CppCast - AWS Lambda

Episode Date: August 22, 2019

Rob and Jason are joined by Marco Magdy from Amazon. They first discuss Dropbox's announcement of abandoning their C++ mobile platform strategy in favor of Swift and Kotlin. Then Marco goes over what AWS Lambda is, what you can do with it, and some of the challenges he faced bringing C++ support to AWS Lambda.

News
- The (not so) hidden cost of sharing code between iOS and Android
- Trip report: July 2019 ISO C++ committee meeting, Cologne, Germany

Links
- Introducing the C++ Lambda Runtime
- The Design of the C++ Runtime for AWS Lambda
- C++ implementation of the AWS Lambda runtime

Sponsors
- Backtrace: Announcing Visual Studio Extension - Integrated Crash Reporting in 5 Minutes

Transcript
Starting point is 00:00:00 Episode 211 of CppCast with guest Marco Magdy, recorded August 21st, 2019. This episode of CppCast is sponsored by Backtrace, the only cross-platform crash reporting solution that automates the manual effort out of debugging. Get the context you need to resolve crashes in one interface for Linux, Windows, mobile, and gaming platforms. Check out their new Visual Studio extension for C++ and claim your free trial at backtrace.io slash cppcast. CppCast is also sponsored by CppCon, the annual week-long face-to-face gathering for the entire C++ community. Come join us in Aurora, Colorado, this September. In this episode, we talk about mobile code sharing. Then we talk to Marco Magdy from Amazon.
Starting point is 00:01:02 Marco tells us about AWS Lambda and running C++ code on it. I'm your host, Rob Irving, joined by my co-host, Jason Turner. Jason, how's it going today? I'm doing all right, Rob. Getting ready for all the conferences coming up. How are you doing? I'm doing okay. Yeah, we're four weeks out from CppCon, I believe.
Starting point is 00:01:49 Maybe a little less than that. Well, yeah, at this point it's slightly less, yeah. Yeah, I got to make sure I'm ready. I don't think I've signed up for all the courses I want to attend to yet. You mean the sessions? The sessions, yeah. I haven't looked at it at all, actually, because I want to be ready to give the talks that I need to give.
Starting point is 00:02:07 And that seems to be a higher priority. You feel like you're getting close to being ready? Sure. It's good. We have stickers for everyone. We do. Yeah. So if you're coming to CPPCon, there's a very good chance you're getting a
Starting point is 00:02:26 CppCast sticker. Yes, yes, that should happen. And we need to make sure we get some more shirts as well. We had shirts last year. Yeah, so we'll have some of those to give away. What do you think about those koozies? Should we order koozies again? I have absolutely no idea. Okay, I'll have to think about that one. All right. Okay. Well, at the top of every episode, I'd like to read a piece of feedback. And this week, we actually got a message you got from Phil Nash. Right, Jason?
Starting point is 00:02:53 Yes. Yeah, so Phil says, in the latest CppCast, there was an assertion that the MESR would be the first cross-build system interchange initiative. But that seems to ignore CompilationDB, which most major build systems support, including CLion, which has built-in support for it. I don't remember what exactly it was we said about MESR. We were talking about the module exchange kind of information. That's part of the initiative of the standards committee to make sure that there's some sort of collaboration between how modules are compiled and that kind of thing.
Starting point is 00:03:27 Right. So I don't think I was very familiar with the CompilationDB beforehand. Were you? I feel like I've heard of it, but no, I don't really know what it gives us. Yeah. So I did find a link about CLion's CompilationDB support, so I'll put that into the show notes for any listeners who are interested in reading more about it. All right. Yeah, so we'd love to hear your thoughts
Starting point is 00:03:49 about the show as well. You can always reach out to us on Facebook, Twitter, or email us at feedback at cppcast.com, and don't forget to leave us a review on iTunes or subscribe on YouTube. Joining us today is Marco Magdy. Marco is a senior software engineer who's been working at AWS for the past four years. He's been programming in C++ on and off since 2001. Before joining Amazon, Marco worked at a few smaller companies building scalable web apps using .NET, GWT, and C++. Marco, welcome to the show. Hi. Nice to be here. You very rarely hear people talking about web apps being written in C++.
Starting point is 00:04:30 So that was kind of not entirely the case. It was mostly the web. When I was working at a smaller company, it was the full stack. So you were doing the client app and you were doing the backend, the web backend. So that was the C++ part. So I was doing that as part of it. That's the C++. The C++ was not in the backend, basically, in the web backend.
Starting point is 00:04:49 It was on the client side. I feel like, honestly, and since you have some experience in the web development world, I'm going to ask a really mundane question. What does full stack mean? Sure. I think what it means and what people use that term for is you basically are working on the front end, the UI, starting from the actual web page, like the JavaScript, that stuff.
Starting point is 00:05:14 Exactly, like CSS, JavaScript. And then you start building the entire thing all the way basically to the database where you're fetching information. So you talk to an HTTP backend and then you do whatever you need to do on the backend, like authentication, authorization, then the business logic and all that. And then you actually talk to the database and get the information out, do all the, whatever you're using, modeling there, like MVP, MVC. I mean, I'm using terminologies that are not very popular now. And
Starting point is 00:05:46 it shows my age, as far as that goes. So yeah, that's basically what full stack means, at least as far as I understand it. Okay, so back in the PHP days, like when I last did any kind of web development, although I never did it professionally, we were all full stack developers, basically, because we were designing the database, writing the PHP script to talk to it, and then that would spit out some HTML to serve the thing. Is that fair? Absolutely correct. Yes.
Starting point is 00:06:13 Okay. So I guess it's more likely these days that you have dedicated front end versus back end developers. So you need the full stack descriptor to identify yourself as being able to do both. I think that's why the term popped out because, yeah, people started getting more specialized as those things, as those parts got more complicated and more involved. People started specializing, oh, I'm a back end, I'm a front end. And therefore people wanted, well, I want somebody who does both.
Starting point is 00:06:41 And then that's what the term full stack means. Okay. Yeah, I recently had a student describe themselves as full stack, and as I understand it, they did, you know, some hardware design, operating system drivers, that kind of thing. And I'm like, wait, are you writing the web browser and the internet stack too? Like, where does the stack begin and end? I'm so confused at this point. That's fair. Yeah, that is a lot fuller stack than everybody else's. Okay, well, Marco, we got a couple news articles to discuss. Feel free to comment on any of these, and we'll start talking more about your work at AWS and Lambda. Okay. Sweet. So this first one we have, it was a little sad reading this one.
Starting point is 00:07:29 This is from Dropbox's technical blog, The Not-So-Hidden Cost of Sharing Code Between iOS and Android. And, you know, we had Alex Allain on the show a long time ago, like within our first or second year, I believe. And he talked about how Dropbox does mobile development with C++ and a framework they built there called Djinni, which is, you know, able to get your cross-platform code able to call into it from both Objective-C and Java. But apparently they've completely abandoned all of that effort and they're no longer using C++ for their mobile apps.
Starting point is 00:08:06 They have switched completely to using Swift and Kotlin, which didn't exist years ago when they first started this transition, but are there now and are, I guess, superior alternatives compared to Objective-C and Java. Yeah, that article was really interesting. I had the same surprising and disappointing feelings as well when I read it. I discussed it with a couple of people that I work with. And I feel like I think they missed out on the abstraction.
Starting point is 00:08:41 I think the abstraction was way too high level. And when you do that, you start finding that, okay, well, it doesn't work across platforms. Okay, well, Android does this differently. iOS does that, you know, in a different way. So now we have to come up with an abstraction that hides both details. And maybe that's where you should start stopping and thinking about it. That's my take on it when I read the article. There's other things to be mentioned there
Starting point is 00:09:08 that they mentioned when they started losing C++, you know, experienced engineers, that's when the problem started, you know, dwindling down. So I don't know. I feel like that's yet another part of the problem. It's not C++ that is the problem. It's just that you lost people who know how to work with the language. That could be the other thing, in my opinion.
Starting point is 00:09:32 I've watched Alex's talk at CppCon, I believe, 2014. It's one of the first ones. And the stuff that they showed was pretty cool and pretty impressive. I believe I even made a pull request to the Djinni project that they demoed. It was cool stuff and it seemed to work really great. Something happened there. Yeah, so they do mention in the article how they had a core team of C++ engineers who started on this mobile effort and I guess made
Starting point is 00:10:00 the Djinni library, but they apparently lost a lot of those engineers and they talked about how it's really hard to find mobile C++ engineers, which I guess I can understand. It's a pretty specialized niche to be familiar with both iOS and Android and be a C++ expert. Jason, do you have any other thoughts about this article?
Starting point is 00:10:21 I have a couple, but I feel like I could put on my old man hat and be like, well, back when I was a kid, all development was in C or C++. Like Palm OS, right? You had to know C. There was none of this fancy, you know, Java or C# or whatever going on there. But, you know, that's just the... anyhow. So I honestly don't understand why it would be hard to find people who are interested in mobile and know C++. Because a lot of people who like C++ like talking directly to hardware, right? I mean, they like thinking about things on a low level.
Starting point is 00:10:54 But that was their experience, and we certainly can't argue with that. I know the Reddit comments really focus on the part that they basically had expertise and they lost it. And then they basically said, now what do we do? I know we'll abandon C++. I don't know if that's a really fair way to look at it either, if they simply couldn't find the people. But I did spend the whole article going, well, a lot of things that you're talking about, like, oh, well, it was hard to maintain the JSON library that we wanted instead of just using, you know, the native, you know, tool that comes with the platform API. And I'm like, well, why didn't you just use like Qt or something that already has all of these tools and already has all of these abstraction layers,
Starting point is 00:11:39 and then you could literally write it once in C++ and use it on all of the platforms. They might even have been able to leverage the Windows GUI client from the same code base if they'd gone that road. I will say, you know, I'm not too familiar with Qt's mobile offering, but my guess is that they want, you know, their Android app to look like other Android apps using the most modern controls and their iOS app to be using the most modern controls. And I'm guessing Qt does not give that to you. You know, Qt might be a great alternative for a lot of apps, but Dropbox is a pretty big name. They probably want their app to feel real nice.
Starting point is 00:12:14 Perhaps. Oh, go ahead, Marco. I was going to say that I'm actually not a fan of those one solution for all like Xamarin and Qt for mobile personally. I think C++ should be used for like your common business logic. That's what you want to write in C++ and share. And then when you want to talk to the system or you want to do UI, the top, basically, if you imagine layering,
Starting point is 00:12:42 the middle part should be done in C++. And if the platform allows you to talk to its APIs through C++, i.e. not Android, then definitely do that. But, you know, I think Android blocks people from exposing those kinds of APIs. And again, I'm not a mobile developer, but that's just from me looking at the surface area of that platform. That's what it looks like. Everything's exposed in Java. So it seems like C++ is a great fit for that middle layer. And then everything where you talk to the system itself or the OS should be done in that OS's native offering. And the same goes for the UI above it. All right, your comments reminded me, and now I feel
Starting point is 00:13:26 like I'm just making things up because I don't know exactly what to Google for, but didn't we have someone that was talking, or an article that was talking, about like the burrito method of abstraction or something like that? Salami. Thank you. It sounds related. Yes, yes, definitely. No wonder I couldn't find it just now. Yeah, I believe it was the salami method. Well, one more thing that I found interesting here was, you know, they talked about some of the C++ libraries they use, like the json11 library, and then they have this one line saying, you know, it's possible we could have done a better job at leveraging open source C++ libraries, but the open source culture in the C++ development community
Starting point is 00:14:08 was or is still not as strong as in the mobile development community. And I just found that really surprising. Like, C++ doesn't have a good open source culture? I don't agree with that. Yeah, I find that surprising as well. I mean, if there's anything about C++, it's like the myriad of libraries. Like, actually the problem is too many choices. It's the other way around.
Starting point is 00:14:30 I also find it just a little surprising comparing it to the mobile development community. I don't know a lot about open source mobile development, but I've always thought of that as weak. I mean, open source like Ruby and Python libraries, that's like the strongest, right? I can do pip install like anything. Yeah, I don't have enough knowledge about what the C Sharp and Java and Swift mobile development communities are like, but yeah. Well, it looks like we all have like six more programming languages to learn and figure out these things so we can speak more intelligently about them. Yeah.
Starting point is 00:15:06 Okay, moving on, we have another trip report from Cologne. This one is from Timur... I'm sorry, I'm not sure how to pronounce his last name. Is it Ducla... Doumler? Doumler? I'm sorry. Yes, Timur Doumler, who's from JetBrains, and he talks about a couple things that we've already talked about over the past few weeks since Cologne. But one thing I don't think we mentioned much was the audio library, and he talked about some of the progression that's being made there. We also didn't spend a lot of time talking about the things that were added for multithreading and synchronization. Yeah.
Starting point is 00:15:42 Oh, and one thing he pointed out here, which I know we didn't mention at all but I'm happy they did, was for concepts, they renamed all of the concepts to go from PascalCase to snake_case. Yeah, which was definitely a good move. I wasn't immediately convinced it was definitely a good move until I saw the examples. Well, just because I didn't feel familiar with it, like I didn't have an opinion, right? Then I saw the examples of each concept, like std::copy_constructible has a corresponding type trait,
Starting point is 00:16:12 std::is_copy_constructible. And I'm like, oh, well, this makes perfect sense now to see this consistency here. I'm totally on board with that. Yeah, I agree. I mean, if it was one PascalCase and then the other one that checks if it is was actually snake_case,
Starting point is 00:16:28 that would have been really ugly. And we would not have heard the end of it from the game industry. Yeah. Anything else either of you want to call out with this trip report? No, I'm just happy. Well, not happy,
Starting point is 00:16:43 but not sad that they dropped contracts, because I'd rather have a nice working feature rather than a rushed one that is just a hodgepodge of all kinds of kitchen sink, basically. So I'm glad, actually, that they're able to take hard decisions like that and say, okay, we're not ready, we're not shipping it. We'd rather wait and get it right. So some people might be disappointed. I heard even Bjarne himself was disappointed that it didn't make it, but I actually think that this is great. It shows that the committee is doing quality work and they're not just pushing stuff in so that they can say, hey, we're cool too, we have this. So it's good to see. Hey, we're trying to be cool like Java, we even added jthreads. Different concept, actually, on the multithreaded stuff too, though. Like, I feel like I've tried so
Starting point is 00:17:40 hard lately to just use, like, futures, to use really high-level threading constructs and not think about lower-level stuff, that when I see that there's now a giant extra library of more lower-level threading, I'm like, but I was trying to not use any of those things already. I have not looked at what exactly was in the trip report as far as multithreading and synchronization goes. I saw that there is a mention of adding semaphores and latches and barriers. Yeah. I have not, I've never used latches before. I've written my own semaphore class.
Starting point is 00:18:17 A barrier, I don't know why I would use that versus a mutex. I still am not sure. I have not really delved into that or dived deep. I don't even know what a latch is, honestly. If you know what a latch is, please explain it to me right now so that I don't have to read about it.
Starting point is 00:18:31 I am actually not 100% sure either. I think, I mean, to me, I thought a latch was a semaphore, but now that they're being mentioned both, I mean, it seems like they're two different things. Must be different somehow. Let's just rename one of them to Monad and then we can agree that we don't know what the differences are. Perfect. Sorry.
Starting point is 00:18:55 It's conference prep. It's getting to me. It'll be fine. Okay, so Marco, let's start off by just talking about what AWS Lambda is. Can you give us an overview? So AWS Lambda is an AWS service that lets you run your code without worrying about provisioning or managing machines. In AWS terms, those are called instances or EC2 instances. Not only that you don't have to worry about provisioning the machines, but you also don't have to worry about cleaning them up or scaling them up and down. Like if you get more traffic.
Starting point is 00:19:35 So all you got to do is just give your code to AWS Lambda and it will run it. And if you get a lot more invocations for that function or that Lambda function, it will scale for you. It takes care of all that business for you. So those are usually the two things that people have to worry about in DevOps or not DevOps, I should say. The things that people have to worry about when they worry about running their code or scaling their code is like, okay, how do I scale it? Where do I run it? How do I provision?
Starting point is 00:20:02 How much does it cost? And all that stuff. So Lambda basically solves most of those problems by saying, you don't have to worry about provisioning. We'll provision that for you. And when your code is over or your code is done running, then you don't get charged anything. And you're only charged by the amount of time and the amount of memory that your code consumes.
Starting point is 00:20:22 Obviously, there are limits, but yeah. So I say I have this single task that needs to be executed, and I pass it off to you all. Correct, yes. Okay. So imagine it's like the lambda in C++, right? You give it a few parameters, and now you have an object, and then whenever you want to execute that object and pass it those parameters, it will do that job for you.
Starting point is 00:20:49 It's like that, just in the cloud, on a much bigger scale. To the cloud! So what type of code are real users putting in AWS Lambda? What are some of the real use cases for it? Sure. The most common use case that people usually talk about in demos are image processing, for example. If you have an image that you want to do some resizing on it
Starting point is 00:21:16 or do some what have you. I know, like, some people are using the C++ Lambda for image processing for medical purposes. So that's some of the usages. Other things are like, for example, you have a link and you want to do unfurling for a link, like you're writing a chat application, for example, and you want to unfurl a link.
Starting point is 00:21:39 So you want to parse the HTML or something and get some meta information about that link to display it. That's another thing. It's usually those, it's perfect for those isolated tasks where you have, you know, an input and I just want an output out of it. And I need to process that somewhere. People have also used it a lot. It's very common to be used as an HTTP backend. So if you want to write an HTTP application, basically every request you get, you can think of that as, okay, well, I'm getting a request.
Starting point is 00:22:11 Here's the input. I need to, the output is going to eventually be HTML or JSON for that matter. And then you have your JavaScript process it or whatever. But you always have an input and an output and it's perfect for those types of scenarios. What does this kind of latency look like? Can I use this real-time in my application?
Starting point is 00:22:32 Yeah, absolutely. Absolutely, for sure. There are some considerations that you have to keep in mind. There is cold start, for example. So there is a little bit of overhead that you pay as far as latency goes on the first invocation. That varies depending on other factors. But once you do that, then your function is, quote unquote, warm. Then your invocations are as fast as your, I guess, Internet is. And your function, I guess the latency is uh it's as fast as your
Starting point is 00:23:07 function goes, and obviously the network. So, and this is, again, it goes back to, like, the C++ Lambda work that I've done here: you minimize that latency. Like, if you have other languages, there's overhead that is usually accompanied with those languages, like GC, or for Java, for example, you have a virtual machine that needs to start and things like that, so there's overhead that you don't have to pay
Starting point is 00:23:35 when you're using C++. Interpreted languages have the overhead of interpreted languages. Okay, so I guess maybe to take just a quick step back, we were talking about the C++ but Lambda didn't originally support C++? Yeah, you just mentioned a few other languages.
Starting point is 00:23:52 What other languages does Lambda support? Lambda supports multiple languages. .NET, Go, Python, JavaScript, Java, JavaScript, yeah, we mentioned JavaScript, Ruby, and then
Starting point is 00:24:08 it also supports what they announced last year is custom runtimes. So they basically we grew tired of supporting all the kinds of languages and everybody has their preferred language. So what's the best way to do it? Okay, well, here's an interface. Talk to this interface
Starting point is 00:24:24 and you can run your own language. Like if you want to run Haskell, you can. You want to run, I don't know, C++, Rust, you know, NIM, D, what have you. So you mentioned JavaScript, so that would imply TypeScript as well. Yeah, yeah, because TypeScript compiles to JavaScript, so you can do that for sure. Right. I just thought, I mean, I don't know why I thought this,
Starting point is 00:24:47 but I thought from a web developer standpoint, even though I'm not a web developer, I could have this orthogonality between TypeScript on the back end, TypeScript on the front end, TypeScript being sent off to Lambda. That'd be pretty cool. Yeah, and I think if I had to guess, I don't have data on this, but if I had to guess, I'd say JavaScript is the most common one or the most common use case I've seen. All right, I'm going to throw out like a really random but specific application idea just to try to wrap my mind around just how exactly this all would work.
Starting point is 00:25:17 Cool. One of the things that I like to do when I am playing with some new language, toolkit, GUI environment, whatever, I like to make a Mandelbrot browser because I think that's fun. So you get to play with some math and you get to play with some UI output. And then I can just use arrow keys or something to change what my center point is on the Mandelbrot set.
Starting point is 00:25:37 And I can zoom in and I can display this thing. Most recent version that I wrote was queuing up a bunch of C++ functions as futures. And, uh, and then when those futures came back, I had image blocks to write to the screen so that I could,
Starting point is 00:25:53 you know, like use all eight cores on my system or whatever. Could I, using AWS Lambda, say, okay, I'm going to render this, you know, grid inside the Mandelbrot set
Starting point is 00:26:06 and then ship off that information, coordinates with a width and a height, and then get back this chunk of data and then display that to the screen in some reasonably fast turnaround time, it sounds like. So can you do it? Yes. Should you do it? No. All right, great. Perfect.
Starting point is 00:26:26 I think there's no way that a network IO is going to be as efficient as running locally. So I think no matter what, I think your current solution will be definitely more efficient and more fast or smooth. Going off the network has a cost inherent to it, no matter what, and you'll be paying that multiple times for every invocation. So I think, yeah, probably not the best use case. Okay. So I need to think a little bit higher level: bigger chunks of data, bigger, longer-running operations. Yeah, something that would justify the cost of actually going out to the network and hitting, you know, you're going through routers, you're getting DNS resolution, there's a lot of things going on when you're talking to the cloud. So you've got to justify that cost, versus, you know, you're
Starting point is 00:27:22 running super fast locally on your, between your RAM and your CPU. Like, yeah. Okay. Okay. Was C++ one of the like original languages that Lambda supported or is it added in later? No, it was added in later. It was announced as part of the showcase of the custom runtimes.
Starting point is 00:27:46 And custom runtimes were announced at last re-invent, so last November slash December. So November, December 2018. Oh, so really just only like nine months ago or whatever. Correct. Yeah, it hasn't been a year yet. So yeah, a showcase was announced. It was Rust and C++ showcasing the custom runtime that, hey, you can run Rust or you can run C++ applications now on Lambda. I want to interrupt the discussion for just a moment
Starting point is 00:28:15 to bring you a word from our sponsors. Backtrace is the only cross-platform crash and exception reporting solution that automates all the manual work needed to capture, symbolicate, dedupe, classify, prioritize, and investigate crashes in one interface. Backtrace customers reduce engineering team time spent on figuring out what crashed, why, and whether it even matters by half or more. At the time of error, Backtrace jumps into action,
Starting point is 00:28:38 capturing detailed dumps of app environmental state. It then analyzes process memory and executable code to classify errors and highlight important signals such as heap corruption, malware, and much more. Whether you work on Linux, Windows, mobile, or gaming platforms, Backtrace can take pain out of crash handling. Check out their new Visual Studio extension for C++ developers. Companies like Fastly, Amazon, and Comcast use Backtrace to improve software stability. It's free to try, minutes to set up, with no commitment necessary. Check them out at backtrace.io slash cppcast. So I guess let's maybe dig into it a little bit more. So you said you're just running code, so you take care of the compiling and the running
Starting point is 00:29:18 of that code right um so when it comes to c for example, I'll talk more specifically about that. You're supposed to build your code locally, and there's a packaging script that we ship in the runtime, the C++ runtime that's on GitHub. If you use that packaging script, it will package your binary, compiled binary, and it will ship it to Lambda. So Lambda does not do compilation. It will just run your code. Okay.
Starting point is 00:29:49 And what, I'm actually curious if it's like an executable that gets packaged or like an object file that gets loaded or how does that work? It's an executable, and there's also a bash script with it to execute in a special way. I have a talk in CppCon coming up on exactly the details of why we needed to do that, what are the challenges. There's obviously
Starting point is 00:30:14 challenges with packaging C++ binaries, namely, basically, how do you figure out dependencies? It's the challenge. It's not one of the challenges. It's the challenges. So you have a C++ application,
Starting point is 00:30:31 but that application depends on a lot of things in your system and your application dependencies. Both those two need to be packaged correctly to be able to run on a different system. It could have been a lot easier to say, well, you have to be running on the exact system that Lambda will execute your code on to guarantee that it will work.
Starting point is 00:30:53 But that is very limiting. It's not really a good customer experience. It's not really what we want to ship. So I went through lots of effort to figure out how we can make this work, and I'll speak about that in the CppCon. Okay, so you're giving a CppCon talk. I mean, how much of that are you willing to preview for us? Well, I can talk about, I don't know, I'm thinking here,
Starting point is 00:31:24 how can I talk about it and don't know um i'm thinking here how can i talk about it not spoil it um well you have to think about you have your application dependencies like we talked about and there's also system dependencies when it comes to like glibc for example everything when you write a c++ application you have two dependencies usually uh the c++ runtime which is lib C++ or lib standard C++, depending on which one you're using. And then there's also underlying these two is libc. And if you're using GNU libc, so it's glibc on systems, the common ones are like Ubuntu and Red Hat variants. So how do you package those? Because if you, let's say you build your Lambda on the latest Ubuntu,
Starting point is 00:32:08 and then you try to run it on Lambda's host, which is not running Ubuntu, and it does not have the same glibc version, guess what? Your application is not going to run. glibc is going to complain, saying, well, there's a symbol version mismatch, and this is for safety,
Starting point is 00:32:25 and it will stop the application right there. It's better than running, and then you get like flying dragons later on. So it stops right there. So how do we reconcile that? And there's also the other problem of your application dependencies, sometimes they are linked via RPATH or sometimes they're linked via LD_LIBRARY_PATH. Your application finds its dependencies in a way. And that way that it does find it basically means that those dependencies have to live in those particular locations. Let's say it's installed in /usr/local/lib, for example. Well, we can't modify /usr/local/lib on Lambda's host. That's not possible. So how do you get around that? So these are the sort of problems that
Starting point is 00:33:19 I had to deal with. But it works now. Now, actually, the solution is super simple. Um, I think the problems and the challenges are more interesting than the solution. So, but I'll leave that to the talk. Maybe talk about it more at CppCon. So, okay. Well, there's two things I want to ask about. But you said the solution is super simple, but let's just talk about it from the user perspective. I want to start using Lambda tomorrow. What do I need to do? If you go to the GitHub repository of the C++ Lambda,
Starting point is 00:33:54 which I believe is going to be linked in the podcast, I have a detailed README and I have detailed examples of how to build an example from scratch that uploads a file to S3 or downloads a file from S3, for example, or talks to DynamoDB. And there's also in the README, the front page, there is a
Starting point is 00:34:15 hello world one. So you can definitely follow that. It uses CMake. There's heavy influence on CMake. I tried to give step-by-step, like, you know, painfully, I tried to give step-by-step details on how to use it with CMake. So, yeah, the instructions in the README have, they're very CMake-focused because I had to pick a build system
Starting point is 00:34:39 to give step-by-step instructions, so you can basically copy-paste from the GitHub repo into your machine and it would work. And if you look at the instructions, it's basically you build the runtime from source, and then in your application, in your Lambda,
Starting point is 00:34:58 you do a find package, and you put one line there that says, that creates a CMake target. Once you create that, or yeah, once you build that CMake target, it will run a bash script that will package your Lambda and give you a zip file that you just need to upload to AWS Lambda. And from there, you just, you can invoke it. How you invoke Lambda, there's different ways to invoke it. You can do it via the command line, you can do it via another SDK. You can put an HTTP endpoint in front of it using API gateway, which is a different
Starting point is 00:35:32 service, but it integrates very well with Lambda. So that will allow you to invoke your Lambda from the web. But I don't know if I answered your question. I think so. I get to use my own compiler. Correct. And I'm linking to your runtime. Correct. And then there's a packaging tool that does whatever it needs to do to make it runnable with Lambda stuff. That is also correct, yes.
Starting point is 00:35:57 Just make sure that when you use your own, obviously the big emphasis is to use your own compiler, but just build a runtime with the same compiler and build your application with the same compiler that you built the runtime, in other words. That sounds very fair. And it sounds like I'm limited to Linux. Correct.
Starting point is 00:36:18 That's one limitation, and it's not really a limitation. It's just like Lambda runs on Linux, so you have to run on Linux. You can't, for example, build your Lambda on your Mac and try to ship it. It's a different binary. Linux cannot run that. So you have to develop or build, I should say, you have to build your Lambda on Linux. One thing I was going to mention is there is no restriction on
Starting point is 00:36:46 the fact that you can use your own compiler, whatever compiler, means that you can use C++ 17, 20, 11, whatever compiler you want to use. There's no restrictions on a runtime, for example, which is awesome because
Starting point is 00:37:02 for example, if you're using one of the other languages like Python or Ruby or what have you, then you're restricted to whatever runtime that Lambda offers. For example, Lambda, I don't know, I'm just going to come up with arbitrary numbers now, but Lambda supports Python 3.x. Then that's what you've got to use. You can't use 4, assuming there's a Python 4, for example, but there's not.
Starting point is 00:37:30 But you get what I'm trying to say. You're limited to the version of the runtime that Lambda offers. With C++, you don't have that problem because you're just giving it binaries. You've taken what could have been a disadvantage, the fact that you don't know what runtime you need to link to and actually turn that into an advantage because now we can use whatever compiler and runtime we want to.
Starting point is 00:37:55 Exactly. Exactly. I like the idea. I think you said there was an AWS SDK. What exactly is included in that SDK for Lambda developers? So the SDK, I was talking more about the AWS C++ SDK, which I work on as well. And the C++ SDK is separate from the Lambda runtime. Those are two different projects,
Starting point is 00:38:20 two different GitHub repositories. Okay. So the AWS C++ SDK is a library that allows you to talk to the different services in AWS from C++. There is one for Java. There is one for Python. I think we support eight languages. Yeah, JavaScript, Java,.NET, Python, PHP.
Starting point is 00:38:45 Yeah, Go. So I would say the most common use case is people would write Lambdas, and then what do you do when you run a Lambda? A very, very common use case is to do something by talking to another AWS service like DynamoDB, for example, to fetch or store records or, you know, things like that, or Kinesis to, you know, read streams, or connect it to, like, when you push streams into Kinesis, then your Lambda is getting invoked for every record that is pushed through, to process it. Like logs, say you have
Starting point is 00:39:24 tons of logs that you want to process in real time, then you just push those logs to Kinesis, and then you connect your Lambda to Kinesis, and Kinesis will invoke your Lambda on every log record that is pushed through it. Let's say you're looking for anomalies or something. Okay, I hadn't considered that part at all. So it's not just about pushing data to it and getting data back out because you have, well, effectively the whole internet you can still talk
Starting point is 00:39:50 to while the Lambda is actually running. It is very composable, yeah. If I were to take my poorly conceived Mandelbrot idea and say instead that I want to do like a rendering pipeline. But you tell a Lambda, hey, you know, render the next frame of the scene. And oh, by the way, all of the scene information that you need is on some Amazon database. Yep, you can do that. If this does not require real-time feedback, like it's not like a game, and you just want to, it's like ray tracing, I just want, here's my code, process it, and give me back the result. I don't care if it's immediate. Then it's going to be, it's still fast.
Starting point is 00:40:33 It's not like, oh, two hours later. It's just not, you know, 30 frames per second fast, is what I'm saying. Because you're still, and it's not about Lambda, it's just about you're making a call over the internet. So I think something we haven't discussed yet, and you can correct me if I'm wrong, is what does the actual C++ interface look like? It's a function that's being run. How many parameters does it take?
Starting point is 00:40:59 What do they look like? How do I get them? How do I return the result back to the caller? Great question. So if you look at the documentation, this is also outlined and I'll speak to it a little bit. I basically narrowed it down so that you have an invocation request, which will have the payload that you sent it. And this is an arbitrary string. So it could be XML, it could be JSON, it could be whatever you want it to be. So it's just a byte array or
Starting point is 00:41:25 character array? It's a string, yep. It's exactly a character array, yep. I mean, by string, do you mean standard string, std::string? Yes. I made it std::string just because it needs to be able to handle text, it needs to be able to handle JSON
Starting point is 00:41:41 and anything. So you can still encode those binary bits as std string. Just don't interpret it as a string when you get it out. And then you do your thing, and then all you need to give back to that function is an invocation response, which has a response code, a success or failure, and it has another payload potential. It's an optional payload.
Starting point is 00:42:09 And that's it. Okay. And I loved it. Super generic. So payload means anything to you. Like it could be binary. Like I have some integration tests where I'm just doing binaries. I'm sending in an image and I'm getting a resized one back.
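For reference, a minimal handler along the lines of the hello-world example in the C++ runtime's README looks roughly like the sketch below. The type and function names (invocation_request, invocation_response, run_handler) follow the aws-lambda-cpp repository linked in the show notes; treat this as an illustrative sketch and check the repo's current README for the exact API rather than taking this as the authoritative sample.

```cpp
#include <aws/lambda-runtime/runtime.h>

using namespace aws::lambda_runtime;

// The request carries the payload as an arbitrary std::string (JSON, XML,
// or even binary bytes); the response is success or failure plus an
// optional payload and a content type.
static invocation_response my_handler(invocation_request const& request)
{
    if (request.payload.empty()) {
        return invocation_response::failure("payload was empty" /* error message */,
                                            "InvalidInput" /* error type */);
    }
    // Echo the payload back; a real function would do its work here.
    return invocation_response::success(request.payload, "application/json");
}

int main()
{
    run_handler(my_handler);
    return 0;
}
```

Built with your own compiler, packaged into a zip by the runtime's CMake packaging target, and uploaded to AWS Lambda, this is the function that gets called for every invocation.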
Starting point is 00:42:24 So how are you seeing anyone like making, I mean, we were just talking about the C++ open source community. Is anyone making like higher-level wrappers and APIs and things that, you know, are adding features onto the Lambdas that you didn't anticipate? I have not. I would love for people to reach out. Some people do things. I actually had to ask, hey, is anybody using this for anything cool?
Starting point is 00:42:48 When I published the new version, I had a few bugs in the early version, and I fixed them, and I announced that there's a release. And some people responded saying, yeah, we're using this for medical image processing. I'm like, oh, cool. I didn't know that. I mean, I just don't know what people are using it for. I'm completely blind to that. I understand that. Yeah.
Starting point is 00:43:12 But you do know people are using it, it sounds like. Yeah. Okay. Is there anything else you wanted to go over before we let you go, Marco? Nope. But please reach out if you have any questions about AWS Lambda
Starting point is 00:43:27 and let us know what you're using it for. It'd be really cool. Yeah, hopefully someone responds to that because I know that it's really great to get feedback from your users to explain what they're doing. That would be awesome. I remember even
Starting point is 00:43:43 one time on Twitter, somebody reached out to you, Jason, and I follow you on Twitter and I saw that you responded and I chimed in saying, hey, you can do this and that. I don't know if you recall that, but I did help out. That's good. Yeah, no, sometimes it's a lot going on to keep track of everything, but I don't remember how long ago that was. So someone asked about Lambda specifically, and you provided the answer? Yeah, yeah, I think you responded. I don't follow that other person, but I just saw your response to them saying that
Starting point is 00:44:20 you think you know somebody who knows AWS pretty well. So I was intrigued and I looked through the thread and I responded. I said, hey, I'm the author of the C++ Lambda runtime and I can help out here. Right. Yes. Yes. I do remember the conversation now, actually. Yeah. So now I know someone else who I can refer to when some of these questions come up. That's true. Okay. Well, it's been great having you on the show today, Marco. Thanks a lot, Rob.
Starting point is 00:44:48 Thanks, Jason. Thanks for coming on. Thanks so much for listening in as we chat about C++. We'd love to hear what you think of the podcast. Please let us know if we're discussing the stuff you're interested in, or if you have a suggestion for a topic, we'd love to hear about that too. You can email all your thoughts to feedback at cppcast.com. We'd also appreciate it if you can like CppCast on Facebook. If you'd like to support us on Patreon, you can do so at patreon.com slash cppcast. And of course, you can find all that info and the show notes on the podcast website at cppcast.com.
Starting point is 00:45:35 Theme music for this episode was provided by podcastthemes.com.
