CppCast - Software Defined Visualization

Episode Date: March 3, 2016

Rob and Jason are joined by Jeff Amstutz to discuss Software Defined Visualization and Intel's SPMD Compiler. Jeff is a Visualization Software Engineer at Intel, where he works on the open source OSPRay project. He enjoys all things ray tracing, high performance computing, clearly implemented code, and the perfect combination of Git/CMake/modern C++. Prior to joining Intel, Jeff was an HPC software engineer at SURVICE Engineering where he worked on interactive ballistic simulation applications for the U.S. Army Research Laboratory, implemented using C++, CUDA, and Qt. When he is able, Jeff enjoys academic research in ray tracing and high performance computing, with a specific interest in multi-hit ray tracing algorithms and applications for both graphics 3D rendering and ray-based simulations. In his spare time, Jeff enjoys powerlifting, golf, being an electric guitar nerd, and studying a wide spectrum of music ranging from progressive metal to ambient electronic music.

News
- A bit of background for concepts and C++17
- Current Proposals for C++17
- Why is more complicated than you think

Jeff Amstutz
- @jeffamstutz
- Jeff Amstutz on LinkedIn
- Jeff Amstutz on GitHub

Links
- SDVis
- OSPRay
- Intel SPMD Program Compiler

Transcript
Starting point is 00:00:00 This episode of CppCast is sponsored by Undo Software. Debugging C++ is hard, which is why Undo Software's technology has proven to reduce debugging time by up to two-thirds. Memory corruptions, resource leaks, race conditions, and logic errors can now be fixed quickly and easily. So visit undo-software.com to find out how its next-generation debugging technology can help you find and fix your bugs in minutes, not weeks.
Starting point is 00:00:25 Episode 47 of CppCast with guest Jeff Amstutz, recorded March 2, 2016. In this episode, we talk about some of the C++17 proposals. Then we talk to Jeff Amstutz from Intel. Jeff talks to us about software-defined visualization and Intel's SPMD compiler. Welcome to episode 47 of CppCast, the first podcast for C++ developers by C++ developers. I'm your host, Rob Irving, joined by my co-host, Jason Turner. Jason, how are you doing today? Doing good, Rob. How about you? Doing pretty good. Still settling into the new house.
Starting point is 00:01:42 As you can tell, I still haven't unloaded this room at all. Still a lot of progress, but the rest of the house looks good there's only some small piles behind you yeah just a few piles at the top of our episode i'd like to read a piece of feedback uh this week we got a lot of feedback from last episode where we were talking about hybrid c++ and javascript apps um. Our guest from last week, Sohail, has done a great job of responding to all these comments on CppCast's website and having some good conversations with
Starting point is 00:02:14 other C++ developers who are interested in this type of app development. There are some game developers talking about how they're using JavaScript with Inscripten and C++ to develop their games and game developers talking about how they're using javascript rhythm script in nc plus plus to uh develop their games and uh asking about using js for the ui as well so definitely some great discussion going on there that's really cool i noticed we had a lot of retweets on that episode
Starting point is 00:02:36 also yeah yeah that episode definitely seemed to uh be pretty popular um and it's a shame because we actually talked a little bit more with Sohail after we stopped recording and we're basically discussing how C++ and JavaScript definitely does seem like a good combination for creating a UI application because
Starting point is 00:03:00 although we all like Qt, there's just a lot more available for JavaScript as far as different UI kits. And a lot more developers available also, so hell is saying. Right, so you can basically bring in a JavaScript developer to do the UI while you just focus on the real backend stuff in C++, which is what we're all focused on.
Starting point is 00:03:23 So anyway, we'd love to hear your thoughts about the show as well. You can always reach out to us on Facebook, Twitter, or email us at feedback at cppcast.com. And don't forget to leave us reviews on iTunes. Joining us today is Jeff Amstutz. Jeff is a visualization software engineer at Intel, where he works on the open source Osprey project. He enjoys all things ray tracing, high performance computing, clearly implemented code, and the perfect combination of Git, CMake, and modern C++. Prior to joining Intel, Jeff was an HPC software engineer at
Starting point is 00:03:55 Service Engineering, where he worked on interactive ballistic simulation applications for the US Army Research Lab, implemented using C++, CUDA, and Qt. When he is able, Jeff enjoys academic research in ray tracing and high-performance computing, with a specific interest in multi-hit ray tracing algorithms and applications for both graphics, 3D rendering, and ray-based simulations. In his spare time, Jeff enjoys powerlifting, golf, being an electric guitar nerd, and studying a wide spectrum of music ranging from progressive metal to ambient electronic music. Jeff, welcome to the show. Thanks. It's good to be here. That bio is a lot more of the mouthful when I hear someone say it out loud. All right, let's talk about
Starting point is 00:04:37 powerlifting. What kind of lifting do you like to do? The traditional bench press, squat, and deadlift. I used to play football in college and it's my way of trying to not wither away. So some people run long distances and I lift weights. What, uh, what kind of squat is that specifically in that already? Yeah. Am I here now? You're here now. Uh, what kind of squat is that in the in the traditional power uh power lifting sequence so that'd be a full down to parallel back squat back squat okay front squatting overhead squatting is hard okay well we got a couple news items to talk about uh the first one here is another one of the background articles that Bjorn Stroustrup has been putting out. This one is on concepts, which is a topic we've brought up before on the show, talking with Eric Niebler.
Starting point is 00:05:33 And this was a good one talking about when he first was going to implement templates and how we kind of wind up in the situation where we are today where we desperately need concepts jason is there anything you wanted to add to this well if our listeners haven't really been paying attention he's released several articles recently that are a bit of background about some topic and they're all things that i think he wants to see come in c++ 17 or very soon and it's amazing how many of these things were in the back of his mind for like the past 20 to 30 years that's a long time yeah definitely these things just haven't come up just over the past few years it's been you know on his mind for a long long time which is very interesting
Starting point is 00:06:18 to see uh jeff was there anything you wanted to call out in this article? Nothing specifically other than I really like stories about how something grew up. And so it's cool to have him specifically go back and say, well, this is how we got to this point. So those of us who are debating about the status quo have a chance to kind of reflect on some of the things that maybe we weren't a part of. Yeah, I'd highly recommend every single one of these articles. I think there have been maybe four or five so far, and they're just all great history of C++ and why it is the way it is today. So this next article is a great roundup of some of the C++17 specifications that are going to be discussed at the upcoming ISO meeting in Jacksonville.
Starting point is 00:07:08 And Jens just did a great job of highlighting several articles that specifically mentioned C++17 in the title and just giving you kind of cliff notes version so you don't have to go and read the entire article if you don't have time to um one of the ones i thought was interesting was uh talking about adopting the a newer version of c we talked so much about how c++ has been evolving with c++ 11 and 14 but i didn't really think that c was evolving and apparently there is one paper here about switching c++ 17 to use C11 as opposed to C99, which it's using now. Right, which will lead us into our next article that we'll discuss a little bit too, I believe. Yeah. Was there anything else you want to talk about in this one before we move on? No, not me personally. It is a lot of great information. I do recommend at least perusing it.
Starting point is 00:08:06 Although I do find it interesting how many people are hot to get concepts. And then you have this little note from Eric Kneebler that Jens posted at the very bottom. Eric says, no released compiler implements concepts. So our usage experience with it is zero. I'm surprised so many are ready to ship it. It compiles, ship it. Right. experience with it is zero i'm surprised so many are ready to ship it it compiles ship it right yeah and one of the things to mention here is it kind of does go over uh these articles are proposing various uh ts should go into c++ 17 and some are saying this one isn't ready yet so
Starting point is 00:08:41 transactional memory for example they're saying is not uh has not been tested enough it's not been used enough to uh be proposed for c++ 17 yet and i'm personally ready to see file system come in because i've been waiting like seven years for that now yeah file system is one of the ones listed here so that should be good okay yeah, let's talk about this other one, which is an article coming from Red Hat's developer blog. And it's talking about pollution between C and C++ headers. Jason, do you want to dig into this one a bit? Yeah, it's interesting because theoretically, if you only include C string instead of string.h,
Starting point is 00:09:23 you should have to use std colon, you know, whatever str compare, whatever for your functions, std compare, excuse me. But that's not really what I've ever really typically seen happen. It seems that you always have both the standard library functions available globally and in the standard um namespace but you know it's interesting this the article digs into why that's the case and how most compilers basically implemented the standard backward and and how uh how much of a mess it really is to try to manage all of that and it, it's looking back to the last article we were discussing. There's even more stuff that C 11 is going to bring in that I'm guessing would even compliment the implementation of these headers even further from a
Starting point is 00:10:16 C++ standpoint. Right. Jeff, is there anything you want to add to this? Well, kind of less to this one. I already was very aware of how complicated the landscape of making C and C++ compatible from the implementers back end is just really ridiculous.
Starting point is 00:10:34 Bravo to all those guys at Red Hat and other places implementing these things. It's really complicated. But one thing I just wanted to highlight actually about the last one article was that my personal perspective on some of these TSs coming into the standard is that kind of like what Eric Niebler was mentioning, I don't view a TS as a failure specifically because the issue with something being in the standard doesn't mean that an implementation comes with it. And so it's really the issue is what compiler do you have available to you? And what does that compiler support? So whether something in the standard or not, if something is a TS, and then if it all checks out, it'll be taken in the standard anyway, that generally is a very low overhead conversion, you know, maybe change some header names, maybe an occasional function call name or something that might have changed. But for the most part, you're already using what could have been standardized. And so once something is in the standard, that's a really, really big deal,
Starting point is 00:11:35 a really big cementing process. So I'm kind of on his boat with a little surprised why people would want it in the standard so fast. If you really want concepts right now, if someone implements it, please go try it, go build something with it. It doesn't have to be in the standard to be usable. I guess once it's in the standard, it can't change. Yeah, well, it's a lot harder to change, yeah. And that's a good point.
Starting point is 00:11:58 I'm just ready to see some of these things have widespread compiler adoption, whether or not they're actually officially part of the standard. File system is a big part of that. It should have been there a long time ago. I agree. Yeah. So Jeff, can you tell us about the software defined visualization initiative that you work on? Yeah. So, um, you know, I'm a software engineer and, and I work on, um, the Osprey project specifically, but Intel, with its three open source projects, Osprey, Embry, and OpenSwer, are trying to do computer graphics on CPUs. And so doing this all in software, this would be where we're targeting the big end CPUs like the Xeon series and the Xeon Phi chips. And the whole concept there is not all use cases in rendering, making interactive 3D graphics.
Starting point is 00:12:53 There's kind of a spectrum where you want really, really fast rendering in real time for games. You have like a 60 or 30 frame rate budget to squeeze all of your rendering logic into. And then the other end of the spectrum is offline rendering, which a lot of the film industry folks use. Like if you go to an animated movie, each one of those frames might have been rendering on a big compute farm for 24 hours or more. And they do that for every single frame of those entire movies. And so in the middle is what we're targeting, which is scientific visualization, where scientists who are at the big DOE labs, they have really giant data sets that
Starting point is 00:13:32 probably are not going to fit into a GPU or do not subdivide well into a cluster that has lots of GPUs. And they want to render their data set interactively, so like 5 to 15 frames per second. And so what we do is we provide a nice scalable solution, whether you want to use rasterization, which would be like OpenGL, or a ray tracing solution to render an interactive scene. So maybe a scientist wants to explore the data that came out of a big simulation. And so software defined visualization means, hey, CPUs that you're already writing other software for, they can do graphics really well too. So what kind of data are you talking about visualizing that the average user
Starting point is 00:14:16 might be looking at? So it's a wide range. So one example would be a volumetric data set, like computational fluid dynamics, where I might do some fluid simulation of like a gas traveling or like if you take a flame, for instance, that's a good one. Where I have plasma and then I want to see like smoke plumes and heat travel. And what you do is use a giant 3D grid to represent, you know, temperature flow and velocity of materials and all those. And you let that simulation go and you can see how where like smoke will go or something. These larger DOE labs have very specific large scale simulations, like maybe airflow over a helicopter blade or something. And they get really, really, really tiny in the resolutions for how they are very specific with all the physics to get that to work to understand that science. So what we do is we take, you know, that 3d grid,
Starting point is 00:15:09 and we can trace race through it to create a visualization to create an image of that data, so they can see it and not just, you know, use reductions to create some number that might mean something now they can visually see what they're physically simulating. Interesting. We've talked a little bit about rasterization and retracing. Can you go over some of the differences there for someone who's not too knowledgeable about graphics? Yeah, so rasterization, the way it grew up,
Starting point is 00:15:40 it was the way that everyone did graphics, at least interactively, because GPUs, when they were first coming out as becoming common cards to install in everyone's machines, they had this fixed function in hardware pipeline to, you would take a primitive, like a triangle. Triangles are generally the most common primitive we use to make 3D scenes. And we take that triangle and rasterization says we're going to go and paint each pixel that triangle covers on the screen. And so we're going to do that for all the triangles in the scene. And then we can use like a depth buffer to make sure we get the first surfaces that are visible. And then you can get a 2D image out of a 3D scene that way. Ray tracing, on the other hand, is where you take all of your triangles,
Starting point is 00:16:28 you subdivide it into a tree data structure that subdivides space. So your root node has the entire scene, then you subdivide based on where there are more triangles. So you get down to a leaf node, which has individual triangles in it. Then what we can do is a mathematical ray intersection against that tree and march that ray down and only test against the individual triangles that that ray is most likely to intersect and we can use that then to get um you know pixel color so we shoot a ray through each pixel to then get that color back um And so the two techniques have two different scaling behaviors.
Starting point is 00:17:06 So a GPU might be able to deal with, in hardware, like NVIDIA, Intel, and these GPUs, they are in hardware doing this triangle rasterization, and that scales well with pixels on the screen. It's easier to have a really big frame buffer as long as you're keeping the number of triangles you're having to rasterize down, where ray tracing scales much better with data size. So we have to do more work with a larger frame buffer, like a higher resolution screen.
Starting point is 00:17:33 But we can, using a logarithmic algorithm to get down to the individual primitives, we can scale up to even billions of triangles and still interactively render it because that's all amortized over that tree traversal to get down and only test small numbers of triangles does that make sense so you work on yes i think so rob yeah cool so you work on software that can do either of these things right yeah i specifically work on Osprey. And then my group that I'm a part of, can does both. And Osprey is ray tracing? Yeah. Okay. Can we talk a little bit about some of these other components before maybe digging into Osprey in a little more depth? Maybe Embree to start? Yeah, so so Embree is is Embree. Yeah, yeah, it's Intel's ray tracing kernel. So if we're talking about layers of abstraction, Embry will take a bunch of
Starting point is 00:18:31 geometric surfaces, not volume rendering, but just triangles, and you can give it other types of primitives like spheres or cylinders. It's an extensible API. And then it will build an acceleration structure, that tree that I talked about. And then it has a very simple interface to trace one or a couple of rays at a time against that scene in a thread-safe manner. So I can just start throwing rays and getting intersections back. That's all Embree does.
Starting point is 00:18:59 Then Osprey, on top of that, builds an entire rendering framework. So we might, like a photorealistic rendering means we want to bounce light photons around to see where light goes, and that's how you can get a photorealistic image. And so Osprey gives you the tools to worry about that problem and just use as Embry. And then OpenSWR is the OpenGL rasterization library that we're upstreaming into Mesa. So your Mesa software OpenGL driver will now just be this high-performance rasterizer from Intel. Sorry, what is Mesa? If you've ever installed Linux and then not had a graphics card, you can still even do 3D graphics all on your CPU using this backend called LLVM pipe. And what they do is kind of what I described that we're doing, but we're trying to make it fast and competitive with hardware GPUs. As a Linux user, I've always thought of Mesa as the software version of OpenGL.
Starting point is 00:20:09 And that's exactly what it is. And so what we're trying to do is put our core rasterizer inside of Mesa to make that fast. Okay. Okay. So can we talk a little bit more about what some of the use cases are for a library like Osprey? Yeah.
Starting point is 00:20:24 So Osprey is for anyone who wants to do rendering. So the Osprey API lets you define some higher level actors. You could think of it, Osprey and OpenGL kind of being at the same level of abstraction where you say, hey, here's some geometry. Here is a camera at this place in space, in 3D space. Go render me some frames. And so Osprey is extendable in that I can create any kind of camera,
Starting point is 00:20:53 like a fisheye camera. We have a perspective camera, an orthographic camera, basically ways to generate these primary rays that we shoot through the pixels. You know, you can, we have some package renderers, you know, like one that only does ambient occlusion, we have a photorealistic path tracer, we have a renderer that's focused on visualization rather than like photorealism, we call the sci vis renderer, and then an extensible way that you can write your own. And then we have different geometries and volumes that you can define, like take your data and put it into, make it a triangle mesh or, you know, take a volume and give it to Osprey.
Starting point is 00:21:33 And then with all those components together, you can say, okay, Osprey, render me another frame. Render me another frame. And then you can take those pixels and put them on screen, save them to a file, kind of do whatever you want. Yeah. can take those pixels and put them on screen save them to a file kind of do whatever you want um yeah so i'm trying to wrap my mind around what kind of data you would be visualizing that you would want photorealistic rendering for it ah so so really big use case for that is for folks that do architectural planning like i want to see how a building is going to look before i go and break ground on it okay so it's not just things like uh rendering yeah like uh you were talking fluid dynamics and that kind of thing you can't okay yeah so we have a reason for photorealism you know the the big shops that are doing it like
Starting point is 00:22:17 like jaw-dropping photorealism like movies uh and such um you know they could use osprey but they already have millions of dollars invested in their tool set. So they use Embry, because they can accelerate just the ray intersections, but Osprey is, you know, trying to be a bigger solution. Interesting. And so all these products you've mentioned so far, all open source, free and open source. So if you go to sdviz.org, that's how you can kind of get to all of the other projects, and they're all up on GitHub. And so we encourage people to go and check it out, especially with OpenSWR. It's just a GL driver.
Starting point is 00:22:56 You can set your load library path to it and just use it. It's that simple. Are you getting good feedback from the community, people contributing? Yeah, we've had some minor contributions. What we've been more interested in is trying to get people to integrate, especially like Osprey that requires a little bit of programming to use it because it's a new API, is to integrate it into their tools. And so more of our contributions are trying to integrate it into existing visualization tools and other rendering tools than for people to help us implement Osprey itself. Of course, we're open to that, but we'd love an Osprey rendering capability and as many
Starting point is 00:23:41 applications that are out there as we can. Do you have any examples of that, of existing applications that now have the capability of using Osprey? Yeah, so Kitware has been a nice partner of ours. They create CMake, which to me is a fantastic tool. And they also have a couple of other software packages, VTK, which is the visualization toolkit. That's a library you can build a VizApp on.
Starting point is 00:24:07 And then they have made a VizApp itself called ParaView, which is a very, very commonly used open source general purpose visualization package that scientists and engineers, all kinds of people use. They'll take their data and then they'll apply some transforms into it to make a visualization. And then they've incorporated Osprey into it to help do the heavy lifting for rendering. And so Paraview, there's Visit, which is a much more community oriented visualization package. It's also open source.
Starting point is 00:24:42 It's a DOE tool. CEI Insight. The list is fairly long, but there's plenty of examples out there. I wanted to interrupt this discussion for just a moment to bring you a word from our sponsors. You have an extensive test suite, right? You're using TDD, continuous integration, and other best practices, but do all your tests pass all the time? Getting to the bottom of intermittent and obscure test failures is crucial if you want to get the full value from these practices.
Starting point is 00:25:12 And Undo Software's live recorder technology allows you to easily fix the bugs that don't otherwise get fixed. Capture a recording of failing tests in your test suites and debug them offline so that you can collaborate with your development teams and customers. Get your software out of development and into production much more quickly and be confident that is of higher quality. Visit undo-software.com to see how they can help you find out exactly what your software really did as opposed to what you expected it to do and fix your bugs in minutes, not weeks um when you first reached out to me on twitter it was to talk about intel's spmd compiler which i guess osprey uses can you tell me tell us a little bit about that so ispc um is is to me one of the better kept secrets about uh vectorization solutions especially especially for CPUs.
Starting point is 00:26:10 So CUDA is this great technology that NVIDIA came out with that I think was what first popularized very greatly the SPMD programming model. And the whole concept there is when I have vectors, vector instructions, I'm doing single instruction, multiple data instructions on my CPU, on my GPU. And the difficulty there is there's generally been a spectrum with no middle ground where you could be a brave soul and go down and write CPU instruction intrinsics and have all kinds of fun just compiling or writing straight to AVX or AVX2 or SSE.
Starting point is 00:26:48 And that can be a pain. And then the other end of the spectrum is, well, let me put a pound pragma on like a for loop and just hope the compiler can figure it out. Where ISPC, while it's a unique language, it's a new language, it's C basically with a couple of keywords that you can basically tell the compiler how you want it to write those intrinsics for you. And so the concept is, I can say, if I have a variable declaration in C, then I can say that this variable is going to be a vector's worth of this type, or I can say it's a scalar single
Starting point is 00:27:26 version of this type, like an integer or floating point number. And then the idea with with with spin deprogramming is that I write scalar looking code. And then depending on the data that comes into like a function that that's going to execute, we might deal with control flow with masking instructions. So for instance, if I take a vector's worth of bools, and I have an eight-wide AVX vector of bools, and I want to say if this vector instead of if a single boolean, and let's say the first three evaluated to true, and then we have an if else.
Starting point is 00:28:07 So that means that the compiler would mask out the remaining five, disable those vectors, do the if true with those first three vector lanes, and then it would flip the mask and then execute the else. So it can handle managing a program, an entire program on each one of these vector lanes. That's why it's called single program multiple data. So like if I'm on a GPU on like an NVIDIA graphics card programming in CUDA, I write this scalar looking code, and then the hardware will take care of figuring out, well, how many vector lanes are actually going to execute these instructions based on how my data came in. Did that make any sense? I think so. It can be kind of confusing to wrap your head around. But the big advantage with with programming in this way, is that I can write vector well vectorized code once and then have
Starting point is 00:29:08 that map really well to multiple vector lengths. So I can write once and have good SSE performing code, I can have good AVX performing code, AVX two AVX 512 on the new Knights landing processors, it'll be coming out soon. And then ISPC, because it's open source, of course someone decided to write a backend for ARM, for ARM vector instructions, and someone wrote a PTX generator so you can write NVIDIA GPU code using ISPC. So I think it's a really nice testament
Starting point is 00:29:40 that you write this code in this form and you can map to all kinds of different vector widths and instruction sets and it performs really well so kind of from a practical perspective if you're using ispc how does this integrate with your build process yeah so um what we do in osprey is we take like an add executable where we would have both CPP and ISPC files, and we forward that. We make a macro Osprey add executable, and we would take those,
Starting point is 00:30:14 and we would take all of the ISPC files and forward it to a custom command that invokes ISPC with all the options that we care about. And then there's two ways you can integrate ISPC. You can have ISPC generate you C++ that then you just hand your C++ compiler as normal. Or the even easier route, the one we use the most is have it generate for you object code that you simply link in your application. So whether you have a.o or a.cpp that you get from ISPC, you can just hand that to GCC or Clang or ICC and it'll perform the same on all three because it's generating all
Starting point is 00:30:55 of that low-level code instead of having your compiler you're handing it to have to worry about all those optimizations and vectorization and all that. So like any other code generator like Swig or something like that, but what platforms does it support? So right now, it's really the most robust on x86. So really any of the vectorized instruction sets, so this would be SSE and above. So if you're on even like the old Nehalem CPUs
Starting point is 00:31:30 all the way through Sandy Bridge, Ivy Bridge, Haswell, whether you're on a laptop CPU or the giant Xeon chips, it's really per instruction set. And like I said, you can generate GPU code with it. I don't know how robust that works, but you technically can do that. And Windows, Linux, Mac operating systems also? Yes. All three operating systems and ISPC is open source. They post binaries regularly with every release, which is what we use, but you can even compile it, contribute. It's nice and free. Interesting.
Starting point is 00:32:04 Very cool. Have you done much use of CUDA? Yeah, so CUDA is really interesting because prior to joining Intel, I used to work on ballistic simulations. And what we did there is we evaluated, you know, are we going to use like OpenGL fragment shaders and compute shaders? Are we going to use CUDA on GPUs?
Starting point is 00:32:32 Or can we look at different ways to get good performance on CPUs? And so in that, I got some experience programming these GPUs. And just one thing I'll say is when you're looking at really that high-performance computing and getting as much compute out of these chips as you can, modern chips, they all have the same desire, is they want parallelism. And so whether you're on a GPU, you have to hand it a ton of work to get it to really light all of the hardware. And on CPUs, it's the same story. I need to fill my vectors. I need to make sure every core is busy with
Starting point is 00:33:07 threads doing work. And then, of course, in an HPC environment, we want all of our nodes lit with work as well. So you've mentioned ISPC can generate CUDA or NVIDIA GPU type code. Have you used any other higher level C++ wrapper or anything for that kind of work? No i used to play with opencl a little bit and there's some nice at least when i was looking at it there were some nice c++ wrappers for that um but i think nvidia's programming model where you hand their compiler um code that's both gpu and cpu um all in one translation unit, and then it'll take care of mapping it to different devices. It's kind of similar to the OpenMP
Starting point is 00:33:52 new offloading pragmas, where I can say, hey, run this on a coprocessor and run this locally on the CPU. It's a pretty convenient way of programming those devices to not have to do too much craziness. I know with what I described in ISPC, it doesn't sound simple, but it's not as bad as you might think. I think it makes sense,
Starting point is 00:34:15 having personally worked a lot with code generators, and maybe for our listeners who didn't catch on to when you were talking about that, you were talking a lot of CMake implementation details. Oh at executable and all that i'm sorry i breezed by that custom commands yes it's all cmake talk yeah yeah but it's capable of doing all those things cross-platform which is important oh absolutely absolutely and the other cool thing about ispc is you know it it shares memory space with C++. So I can pass pointers between, like, functions.
Starting point is 00:34:49 So I can do a C++ side function and an ISPC side function. And, you know, I don't have to do, like, on a GPU, I need to transfer memory to the device and maybe download memory back after I've done some computation. In ISPC, on your CPU, you know, it's just memory. So it's very simple. Yeah. Very cool. Jason, since you know, you're just, it's just memory. So it's very simple. Yeah. Very cool. Uh, Jason, since you're not going to ask, uh, Jeff, I was wondering about, uh, your project that makes use of, uh, Jason's ChaiScript project. Oh yeah. Um, so when I first joined the team,
Starting point is 00:35:19 um, I, I found myself wanting to occasionally like poke a value into the OSPRA API. So for instance, an ambient occlusion renderer, what it does is it'll trace a primary ray into a scene, get an intersection. And then what it'll do is trace rays outward from that intersection to try to figure out how much light is arriving at this point. And it'll brighten it based on how much light that calculation comes up with. Well, for tuning purposes, you can say, how many of those secondary rays am I going to trace per frame? I can have that image resolve over time and be nice interactive, or I could trace all those secondary rays at once
Starting point is 00:36:01 and just get a nice-looking image when I finally get a frame. And so when I want to test things like that, I just need to do an Osprey call sometime at runtime and maybe a little console window or something. And I've known about things like Swig and other scripting environments. And I honestly was listening to the podcast and you talked about ChaiScript and I was like, I'll go check that out. And then I was like, my goodness, it is so simple to just make a binding to this environment. So I can turn my console window into a little scripting window
Starting point is 00:36:37 and fire values live on the fly to Osprey and see what happens, which has been really fun and very, very useful for debugging. And so that little app, if you do a make install somewhere of Osprey on your machine, you can get that app off the source code off GitHub and build it. I distribute ChaiScript with it, so you don't have to do anything there.
Starting point is 00:37:00 It's nice and self-contained. It's pretty fun. I'm glad it's working for you. Yeah, I've been a silent user i i've gotten the impression that's most of my users well that that's actually a testament to a really really well-crafted software product it's if no one's complaining or if not very many people are complaining then then you're doing a great job. Wow. Thanks. Yeah. Okay. Well, where can people find you online,
Starting point is 00:37:28 Jeff? Um, so I'm on LinkedIn and Twitter. I don't, I don't tweet all that often. Um, and if you really want to get my attention, of course,
Starting point is 00:37:41 file a bug on the Osprey issues list. It's something broken or you want to feature. um, yeah. and then i think my emails in the will be in the notes the show notes sure oh one other thing i meant to ask you was you're actually at a conference right now did you give a talk today uh i i did not give a talk today i'm at the the rice um oil and gas hpc conference so a bunch of oil and gas HPC conference. So a bunch of oil and gas folks using supercomputers for, for looking at drilling and, and all of that fun energy stuff that, that they're doing lots of volume rendering.
Starting point is 00:38:15 But, but I did give a talk this past weekend at I3D. Okay. And that, that conference was a computer graphics conference. And I got a paper accepted into a journal called JCGT, the Journal of Computer Graphics Techniques. And I was invited to present that paper at the i3D conference in Redmond.
Starting point is 00:38:36 So that was a very, very fun and rewarding experience. Cool. Any more conferences coming up? Well, you'll always find us, if not me. You'll find someone from our group at places like SIGGRAPH and the Supercomputing Conference every year. But I'm personally going to do my best to make it to CppCon. And I actually would like to take ISPC and go through a deep dive for how you would use it to vectorize parts of your performance sensitive code and turn that into a CPP con talk.
Starting point is 00:39:13 That sounds like a great talk. It doesn't like something you should submit. Yes, it'd be fun. It's harder in a podcast to describe all these things, but maybe with some slides that would help. It's also a great conference. It's worth going to. Absolutely. Very cool.
Starting point is 00:39:29 Well, I think that's it for me. Yes. Thank you for joining us. Thanks. Happy to be here. Thanks so much for listening as we chat about C++. I'd love to hear what you think of the podcast. Please let me know if we're discussing the stuff you're interested in,
Starting point is 00:39:42 or if you have a suggestion for a topic, I'd love to hear that also. You can email all your thoughts to feedback at cppcast.com. I'd also appreciate if you can follow CppCast on Twitter and like CppCast on Facebook. And of course, you can find all that info and the show notes on the podcast website at cppcast.com. Theme music for this episode is provided by podcastthemes.com.
