CppCast - Oculus Research
Episode Date: November 23, 2017

Rob and Jason are joined by Dave Moore from Oculus Research to talk about the Oculus C++ SDK and Augmented Reality. Dave Moore started programming after getting fired from his college work study job. This worried his parents, but it seems to have worked out in the end. After spending 17 years in and around the computer games industry, most recently at RAD Game Tools, he's now a software engineer at Oculus Research, working to advance the computer vision technology underlying virtual and augmented reality.

News: Cheerp, the C++ compiler for the Web; The wrong way of benchmarking the most efficient integer comparison function; Programming Accelerators with C++ (PACXX); What should be part of the C++ standard library.

Dave Moore: @dmmfix

Links: Oculus Developer Center; Oculus Research; Oculus Connect 3 Opening Keynote: Michael Abrash

Sponsors: Backtrace; JetBrains

Hosts: @robwirving, @lefticus
Transcript
This episode of CppCast is sponsored by Backtrace, the turnkey debugging platform that helps you spend less time debugging and more time building.
Get to the root cause quickly with detailed information at your fingertips.
Start your free trial at backtrace.io/cppcast.
And by JetBrains, maker of intelligent development tools to simplify your challenging tasks and automate the routine ones.
JetBrains is offering a 25% discount for an individual license on the C++ tool of your choice: CLion, ReSharper C++, or AppCode.
Use the coupon code JetBrainsForCppCast during checkout at JetBrains.com.
Episode 127 of CppCast with guest Dave Moore, recorded November 22, 2017.
In this episode, we talk about how to benchmark your code.
And we talk to Dave Moore from Oculus research.
Dave Moore talks to us about the Oculus SDK and his work on augmented reality. Welcome to episode 127 of CppCast, the only podcast for C++ developers by C++ developers.
I'm your host, Rob Irving, joined by my co-host, Jason Turner.
Jason, how are you doing today?
I'm doing good, Rob. How are you doing?
I'm doing okay, getting ready for Thanksgiving.
Yes, that's tomorrow, for our listeners who wonder how we time travel.
Yes. But actually it's yesterday, I guess, if they're listening to this.
Yeah, or maybe a couple days ago, if they don't get to it until next week.
A couple days ago, because I had an interesting conversation with some of our listeners in New Zealand, who were like, oh, I love when your show comes out every Saturday morning. Like, wait a minute, that's not when we put it up.
Yeah, I love those time changes.
Okay, well, at the top of every episode I'd like to read a piece of
feedback. This week we got an email from Wojtek, and I can pronounce that name correctly because he actually wrote out how to pronounce his name, which I really appreciate.
Wow. Yeah, that's perfect.
Anyway, he writes in: thanks for the great show. For me, Friday is the first day of the week. What do you say about some guests from Poland? And he has a couple suggestions of notable C++ guests from Poland. You should pronounce their names also.
He did not give me pronunciation guides for these names, so I'm going to butcher all of them.
So there's Andrzej Krzemieński, who's the author of the akrzemi1.wordpress.com blog, mentioned a few times on the show, and he gave some talks at code::dive and CodeEurope.
Bartłomiej Filipek, who also has a blog, bfilipek.com.
Yeah, Bartek, I think he goes by. I know we definitely mentioned his blog a couple times.
Definitely, yeah.
And Bartosz Szurgot, who gave a few talks at Embo++ and code::dive, and he says he actually knows him personally. So yeah, I would definitely be up for having any of them on.
You didn't meet any of them when you were at Meeting C++ by any chance, did you, Jason?
I know you said you met a lot of other European guests, not just German.
I don't know if I did.
There's a lot of people at that conference and a lot of names going around, yes.
Yeah.
Sorry.
It's okay.
Well, we'd love to hear about your thoughts about the show as well.
You can always reach out to us on Facebook, Twitter,
or email us at feedback@cppcast.com.
And don't forget to leave us a review on iTunes.
I also wanted to mention really quickly that we are on Spotify now, so there's another place where you can go and listen to the show.
Apparently Spotify just started doing podcasts a couple months ago, and we were able to get in there, which is pretty cool.
And another thing I just want to mention really quickly: everyone on the internet is talking about net neutrality right now, and if you're in the U.S. it is kind of a big deal. I think it's kind of a politically neutral topic. I don't ever want to get my own political stuff into the show.
If you're interested in which way I fall on the spectrum,
you can look at me on Twitter.
But I feel like net neutrality is something that anyone should care about
because if you need to pay extra to get a different internet plan,
you might wind up having to pay extra to get to cppcast.com,
and that would be horrible, right?
Yes, that would be horrible.
Yeah. Although we're probably a small enough fish that we wouldn't get singled out.
Hopefully not. But, you know, they might try to get people to pay more to make their websites faster, or something like that.
Or just the fact that you're streaming something, perhaps, from the website.
Yeah, exactly. So it is something that could affect podcasts in general, so it's worth calling your congressman if you have a moment to do that.
Anyway, joining us today is Dave Moore. Dave started programming after getting fired from his college work-study job. This worried his parents, but it seems to have worked out in the end. After spending 17 years in and around the computer games industry, most recently at RAD Game Tools, he's now a software engineer at Oculus Research, working to advance the computer vision technology underlying virtual and augmented reality. Dave, welcome to the show.
Thanks, happy to be here.
That was a sweet work-study job I got fired from too.
That's what I was wondering. What was it?
It was a job in the registrar's office, which was a real cush work-study job.
You know, you file people's grades.
You sort of go look up transcripts.
But the reason that I transitioned to programming is I'd been spending nights, like, teaching myself C and C++ down in the lab
and managed to sleep through registration day morning, which, it turns out, is the only day you can't be late for work if you work at the registrar's office. Like I said, it worked out in the end.
Okay. So this is fascinating, I think, maybe. Was C++ your first programming language?
It was, yeah. So my family didn't have any computers that had compilers on them. So the
first contact I had with a programmable computer was a Unix mainframe in college. So I started with
GCC and just worked up from there. Okay. If you don't mind my asking,
what was the... Well, no, instead of asking you what year that was, I'll ask what GCC version
was that? Oh, wow. It's way back. It was in the one or two
series, right? So no, I graduated from college in 97. So this was pre-C99. This was after, I think,
most of the C++ had sort of settled out. But I was not any great shakes back then, for sure,
as a programmer. So I don't even remember the details of the language
mostly.
That's cool. I mean, I've had some conversations lately with people about teaching C++ and learning C++.
Yep.
I started with BASIC, and I remember very distinctly that actually learning how to do structured programming, like why would I want a function and what would that gain me, was a huge learning curve for me.
And I was wondering, learning C and C++ on a Unix mainframe, is there any particular brain damage that you got? Or maybe even good experience that you got from that?
So I was actually particularly lucky.
I went to Reed College.
They had an academic software development lab,
which was a really sort of fancy way of saying the people who
administer the Unix mainframe. But they had some people who liked to program and would do little
projects for the professors. So I had a bunch of really good tutors and mentors, including people
like Nelson Minar, who went on to do a whole bunch of stuff at Google after he graduated from Reed
and then the Santa Fe Institute. So I had really, really good teachers, so hopefully not too much brain damage.
And then I also got some internships at a local game company in my hometown during college, so I got good practices beaten into me there as well.
well that's well that's that sounds like fun and interesting because the games industry is not necessarily
100% known for programming with best practices.
Correct. Yeah, for sure.
But Dynamix, back in the late '90s.
Oh, Dynamix.
Yeah, it was a great place.
I very much lucked into that.
I grew up in Eugene.
Okay.
Got my first job there
by just dropping a
resume off at the front desk, which is a harder thing to do these days.
Yeah, now there are actual courses of study to become a computer game programmer, so it's a little bit more difficult to do it that way. But like I said, I lucked out.
Yeah, I wonder how many of our listeners
you just made supremely jealous for being able to get that opportunity.
I know, you know, because you bump into people just starting their careers at GDC or whatever, and it's really common to get asked the question, you know, how do you break into the games industry? I just have to say, I have absolutely no idea.
Right. So that's awesome.
Okay, well, Dave, we've got a couple of news articles to discuss, and then we'll start talking to you more about Oculus and your work on the research team and the SDK, okay?
Sure, yeah, absolutely.
Okay, so this first one is Cheerp, and that doesn't sound quite right to me, but I guess that's how you pronounce it, and it's the C++ compiler for the web.
And, Jason, you know, we haven't really talked a whole lot about WebAssembly
since we first had JF Bastien on, like, a year and a half ago.
But this seems pretty interesting.
It's a thing now.
And there are now enterprise compilers that are targeting WebAssembly
competing with Emscripten, I guess.
When I first looked at, when I first saw the link,
I'm like, I already have a web-based compiler.
It's called Compiler Explorer.
Yeah, but that's not what this is at all.
This is not what this is at all.
No, no, no.
But yeah, I just saw news, what was it, just a couple weeks ago, that now all three of the major web browsers support WebAssembly.
And it seems that even if you still need to worry about older browsers that don't have WebAssembly support, like I guess old Internet Explorer or something, it can compile to both WebAssembly and JavaScript at the same time.
Wow.
And then, you know, the browser would just use whichever is better. So that's pretty cool. Cheerp claims to be faster, and to compile to a smaller size, than Emscripten.
And it does have enterprise-grade support. Or, if you're working on an open-source project, you could use it as long as you're comfortable with that license.
I'm going to have to give this a try at some point.
Is this anything that you've worked with, Dave?
No.
That would be something much more interesting probably to people on the Facebook-y side of the business.
I mean, Oculus is owned by Facebook.
We're still almost certainly the only Fortune 100 company that ships most of its code in PHP.
So this would probably be super interesting to someone who does that.
But the inertia of that is still pretty amazing.
Wow.
Yeah.
But it's a special flavor of PHP that gets compiled to C++ or something, is that right?
No, well, so they do have their own
internal special flavor of PHP
where they've added things like static type checking
and, you know, well,
any type checking at all, actually.
Right.
So I think they made a pretty mature engineering decision that was like, you know, we could sit back and rewrite all of this, or we could keep what we have and try to improve the environment around it. And that's worked out pretty well for them. I don't have anything to do with that side of the business, though.
Right. Okay.
Next article we have: the wrong way of benchmarking the most efficient integer comparison function. And this is on The Old New Thing blog, which is Raymond Chen's, from Microsoft, and it's always a really good blog to read.
And he's going over different ways of benchmarking
really simple code and actually going through
and it looks like he used Compiler Explorer in this to actually look at how the assembly between some of these
different comparison functions works.
I think a great takeaway from this was that the original test was summing the results or something, and he said this is a useless test, because you're not using the function in any way that it would actually be used.
Right. So he writes a benchmark test that's actually doing a legitimate usage of the code that you're trying to benchmark. And that's his big argument: you should always be really using the code. Otherwise, the compiler's optimizer might not behave the way it would in a real-world scenario.
Yeah. And I think this is only the second time we've mentioned Raymond Chen, maybe.
I know, he's always writing really great blog posts, though.
Yeah, I love that blog. I just thought I would mention it, if our listeners don't know. I mean, he's been writing articles consistently for like 14 years. There's a lot of articles on his website.
Yeah, and actually, looking at his website, it looks like he wrote a book called The Old New Thing that would be worth looking at. I don't think I knew that.
Yeah, I tell you, I always check his blog whenever I've got frustrating questions about Microsoft software, like, why don't they just do this? And then you go to Raymond's blog and you find out why they don't do that, or why they did it the way they did it.
That is true.
Yeah.
Okay, this next thing is PACXX, Programming Accelerators with C++, and it looks like this is another kind of replacement for CUDA. And this one has C++11 and 14 support.
And Dave, this sounds like something you might be familiar with
because it's about improving GPU speed.
Yeah, we haven't played around with this particular library,
but that's definitely right up our alley.
There's tons of stuff that falls into that category. We use CUDA all over the place on both the VR and the AR sides of Oculus Research.
So anything that makes that easier or brings the programming models closer together is way better.
So we'll keep an eye on it for sure. And then this last article we have from
foonathan's blog: what should be part of the C++ standard library?
And I thought this was a pretty interesting article.
He starts off by talking about the 2D graphics proposal,
which Guy Davidson, who we've had on the show, has talked about.
And maybe a lot of people won't really be using that,
but is there going to be some benefit from standardizing it?
And he goes into kind of an in-depth thing where he talks about,
in a perfect world, what would C++ be like
and what should be standardized and what shouldn't be.
And he goes into the whole, remember the left pad controversy
that happened with all those JavaScript libraries a while ago?
So yeah, we definitely don't want to go that far
where we use everything from libraries,
even if it did become trivially easy.
It definitely makes sense to have things in the standard.
I don't know.
What did you think about this, Jason?
I think he did a good job of covering the various issues around deciding what should or shouldn't be in the standard library. And, you know, I feel like he mostly left the question open: what should or shouldn't be in the standard library?
Yeah. And he kind of finishes up talking about how maybe this 2D graphics proposal doesn't need to be there, but there would still be some benefits, like having new vocabulary types that other graphics libraries could then use in their own libraries.
Yeah. Dave, is this something you think Oculus might build upon, if there were a standard graphics library?
Yeah, if there were vocabulary types, those would definitely come into use. It's interesting: at RAD, we shipped on so many platforms,
we were way, way behind the standard.
In fact, all of the APIs that we shipped at RAD
were in C, right?
So internally, we implemented them in C++
or did whatever.
But sort of the public-facing version
of all the APIs was straight C
just because that's the most binary compatible.
You've got linker support across tons of different compilers.
At Oculus, we've advanced a little bit farther, but if you look at the Oculus SDK header, at least at one point it was still parsed as straight C. And it sort of had that suit-and-tie, very conservative, external-facing front. But, you know, any sort of vocabulary types relating to 3D math or spatial computation would
be super useful, and those would probably go in for sure.
Cool. Yeah, well, let's talk a little bit more about what your role is at Oculus.
Yeah, so Oculus Research is,
well, it used to be very distinct
from the main Oculus team
that ships the headsets
and sort of works on the software systems
that people, you know,
you go to Best Buy,
you buy an Oculus headset,
you install our software
and start playing games.
That's the part of Oculus
that's sort of headquartered in Menlo Park. Oculus Research is,
we're located in Redmond, Washington, primarily. And we're focused on pieces of the virtual reality
experience that aren't in the shipping headsets and aren't maybe even in the next shipping headset
is sort of the cartoon way to say it. And what that means in practice is
we're doing a whole bunch of stuff that can fail, right? We're not on a shipping timeline. We don't
have manufacturing deadlines to hit, that sort of thing. So we're sort of asking questions where
we're not sure if the answer is yes or no, or maybe, or not yet. And so we're doing that for
both virtual reality, obviously, which is the headsets that you can go out and buy now, but we're also starting to explore augmented reality.
And there's various flavors of augmented reality, but the sort that Facebook and Oculus are really interested in are sort of see-through glasses style AR, right?
You put some very small, lightweight device on your face, and it adds augments to the real world rather than completely replacing it, which is what virtual reality does. A big piece of what we work on is the ability of the headset to understand its position in the world, and start to understand
the position of other things in the world. The specific algorithm we're working on is called
SLAM, which is Simultaneous Localization and Mapping. And if you want sort of a block-diagram,
like cartoon version of what that is, if you have a wireless headset, let's imagine, let's say it's an augmented
reality headset, you want to be able to use that out in the world. You want to be able to walk
out of your house and walk down the street and get restaurant recommendations or whatever sort
of overlaid on the real world. You can't have any infrastructure out there that's telling you where you are, right? You don't have any tracking stations. You don't have any cameras that are observing you, telling you where you are. You sort of have to put sensors on the headset that observe the real world and then make inferences about where you are.
And so in really rough terms, these algorithms basically ingest camera images and accelerometer
data and they spit out position information.
They tell you where you are and where other things in the world are.
So I'm having a blast doing that.
It's not anything like I've done before.
That's obviously nothing like computer games, but it's a lot of fun.
So is your ability to capture,
maybe I'm already going way off into the weeds, but I'm really curious,
if your ability to capture gyroscope and accelerometer information
with compass headings and whatever,
like, I've played with some of those nine-sensor orientation chip kind of things. Well, I've read the specs on one, and I own one; I haven't played with it yet. Anyhow.
You should. They're a lot of fun.
I'm sure. Yeah, well, that's why I bought one; I thought it would be fun. Is the information good enough? And I guess this is where you're leading: if I were to have this device, and I'm capturing all the information from it, and I go walk down the stairs and out the front door,
would you be able to build a 3D model and say this person traveled down so many feet and north direction by so many feet without having a GPS to augment it?
So there's a couple of things in there.
So let's start with just your nine-axis accelerometer.
So are you asking with camera information
or with just the accelerometer?
I was curious about the accelerometer,
but go ahead and fill in whatever details you can.
So let's sort of work our way up
because it sort of explains,
if you look at one of these systems,
there's a reason for everything that's there.
So if we just started with the accelerometer and gyroscope, right? So we just have that measurement
of sort of the derivatives of your motion rather than the motion itself, right? It's telling you
how fast you're turning or how fast you're accelerating. We can, over short time spans,
like a walk down the stairs, be very accurate about the changes in
your heading, because we only have to integrate once: we have information about your rotational velocity from the gyroscope. If you try to infer a total change in position, after about a second it becomes wildly inaccurate, because you're integrating twice. And so what happens is you pick up, from that first integration, some dirt, right? Some noise in the system, because none of these devices are perfect. And that accumulates into your velocity, and it does sort of a random walk, and you just wind up zooming off to infinity if you try to infer from only the accelerometer. And I encourage you to try it out, because it'll be really obvious to you when you play around with this.
So then things get complicated, because now you have to add some
system that has an absolute reference to it, right? Or at least a stable relative reference.
So for the Oculus VR headsets, we accomplished that with what's called an outside-in tracking
system. So when you go buy one of the headsets, there's a little positional tracker that you mount on the wall or your desk,
somewhere fixed.
And that is a sensor that is observing the headset,
but it's from a known position.
It doesn't drift.
It takes longer to process your position, and it is position-sensitive: your accelerometer works the same no matter where you are in the room, but the farther away you move from that positional sensor, the lower the resolution of the information about your position is.
So that's how you solve it in VR, when you're tethered to a computer and you can have outside-in tracking systems; that works fine.
For AR, it gets much more complicated because now you would put, say, a camera on the headset looking out or even just multiple cameras.
So GPS is an interesting one because that's zero drift.
It works everywhere, but it's also very low precision. So if you needed to do something like, say, how far away is this virtual orange from my head, GPS can't do better than a meter or so, and especially indoors it'd be way worse than that. So it's not really great at correcting those errors. Cameras are really good at correcting those errors. But the process of extracting a relative track, observing the changes in your headset position, which change the image of the stairs as it's moving past you from the point of view of the headset, that's why we have PhD computer vision researchers.
It's not an easy problem.
And that process itself has drift to it as well.
So that's why then you have to build up a map of the world
so you can sort of observe, you know, if you imagine turning 360 degrees in a circle,
the camera images when I'm facing forward have nothing to do with the camera images that are
coming into the headset when I'm facing backwards, right? You have to build up a relative track that
gets me all the way there and all the way back. So it's a really, really interesting area of research.
The algorithms are not particularly new.
The ability to do them in real time on devices that are sort of consumer relevant is pretty new.
So, I mean, the Microsoft HoloLens is a great example.
Their tracking system is spectacular. And, you know, it's not a comfortable headset, but it's something you put on your head; it's battery-powered, it's embedded, and it does AR augmentation really, really well.
So I want to get back to the SDK, but since you brought up the HoloLens: you know, the only AR I'm familiar with was the HoloLens, and then the Google Glass, which obviously kind of failed, and no one's using those anymore.
Is it something like that that Oculus might be working on or can you not say?
I mean, I guess it's research, so it's all very speculative, what you're working on.
So luckily, of course, whenever you go and talk publicly, you got to go and clear all your topics. And one of the nice things is that,
you know, Mark Zuckerberg and the lab director for Oculus Research, Michael Abrash, have given
lots of public talks. So anything that's in those talks is fair game because it doesn't get, you
know, much more public than whatever Mark Zuckerberg says. So in this case, I can talk about that. Google Glass and HoloLens are in the line of what we're interested in, for sure. Google Glass did not register augmentations against the world. It
was sort of an annotation that hovered in your viewpoint. That's not probably as interesting to
us. We definitely want to be able to register things against the real world, you know, so that
it can recognize things in the real world, tell you things about them, help make you smarter, you know, all of those sorts of knowledge augmentations. HoloLens is, as I said, super impressive. But for Facebook, Facebook wants
virtual reality and augmented reality to be a part of people's everyday lives.
And if you've seen a HoloLens, you can't really imagine sort of walking out your front door,
wearing it and going to the store with the thing on.
I have had a chance to wear one. It's definitely not something you'd walk around with for a long
time.
Exactly. And, you know, there are real plausibility challenges: can we make something that looks like a pair of glasses, or even something that's not a pair of glasses but is socially acceptable, that avoids the problems Google Glass had, that you would wear outside? And we have an optics team working very hard on some of those challenges. And there are computer challenges, programming challenges, there as well.
So, can you speak to... you said the ultimate goal is, you know, augmented reality that you would use all the time, but what kind of information, or what is the goal? Is it games? What?
Yeah, games are definitely part of it. For Facebook, you know, if you think about the
population of people that play computer games versus the population of people who use Facebook, even setting aside demographics, I mean, they're vastly different orders of magnitude, right? A lot of people here, myself included, really believe that the real power of this is to collapse distance, to enable sort of, you know, non-local social interactions that are meaningful, right? Like,
if I can put these glasses on and feel like in some sense I'm in the same room with someone
that's not right next to me, that collapses social space in a way that's really interesting. If I can put these glasses on and have an asynchronous social interaction
in the same way that, you know, say I like a restaurant on Facebook and it comes back and
then recommends that to my friends when they happen to be in the same town, those are really
interesting interactions as well. And I think that's the sort of thing that Facebook has its long-term eyes on: these interactions that are going to be meaningful to millions or hundreds of millions, or even, you know, Facebook user scale is billions, right? That's where they've got their eyes on augmented reality.
Wow. Okay.
I want to interrupt this discussion for just a moment to bring you a word
from our sponsors.
Backtrace is a debugging platform that improves software quality,
reliability,
and support by bringing deep introspection and automation throughout the
software error lifecycle.
Spend less time debugging and reduce your mean time to resolution by using
the first and only platform to combine symbolic debugging,
error aggregation,
and state analysis.
At the time of error, Backtrace jumps into action, capturing detailed dumps of application and environmental state.
Backtrace then performs automated analysis on process memory and executable code
to classify errors and highlight important signals such as heap corruption, malware, and much more.
This data is aggregated and archived in a centralized object store,
providing your team a single system to investigate errors across your environments.
Join industry leaders like Fastly, Message Systems, and AppNexus that use Backtrace to
modernize their debugging infrastructure. It's free to try, minutes to set up,
fully featured with no commitment necessary. Check them out at backtrace.io/cppcast.
Well, let's go
back to the Oculus SDK
a little bit
for listeners who haven't done any game
dev work which is probably most of our listeners
and are interested in starting
with the Oculus SDK
what's the intro point
and if you are a game developer what is it like to work
with the Oculus SDK compared to the more traditional gaming SDK?
Yeah, so it's an interesting question.
First of all, if you go to oculus.com,
there is just a developers tab,
and you can just sign up and download the SDK.
So I would encourage everyone who's curious
to just go do that,
because you can even just go look at the headers,
even if you don't have the headset,
and start playing around with it.
The Oculus SDK is narrowly focused. So if you think about something like Unity, which is a game engine, it tries to, you know, provide you with a framework that you can develop a complete game in. And the Oculus SDK is a much more narrow thing.
It's more like an audio SDK or some sort of device-specific SDK.
It's focused on taking the images that your game rendered
and then presenting them to the user inside the headset.
So it's concerned with things like taking the left eye and right eye buffers
and getting them onto the screens at the right time.
So in that sense, it's a little bit like using OpenGL or something like that, right?
It's very focused on sort of the position of your head and how that sort of changes the way
that you render the game world
and then getting those images onto the headset.
So you can't write a full game in the Oculus SDK, but once you have your game, it's pretty easy to take that and then modify it so that it works in VR. It's kind of an opinionated SDK, in the sense that, well, we've gone through a lot of revisions of it, all the way back from the API that we were shipping with the Kickstarter, where we threw a lot more of the choices onto the developer.
Like how do you distort the eye buffers so that, you know,
it compensates for the distortion that is just a reality of the optical system
of the headset.
We take over all of that now.
I think you may be able to still work around it,
but it's not the default path anymore.
And the reason we do that is just that
the reality of VR is that there are so many choices
that you have to make
that can have a negative impact on the user.
Even things like, you know,
the frames in the Oculus Rift
are being presented to the user every 11 milliseconds.
It's 90 frames a second.
And skipping even one of those frames or missing it or presenting it at the wrong time can actually have real physical consequences for people.
You can make them sick.
You can make them nauseous.
You can cause what I call the nope reaction, which is where you just reach up and rip off the headset.
And so there's a lot of sort of, you know, we take over a lot of the details just
because it's easier for us to do them than for developers. If we start to understand the system
better, we can upgrade it from that point. So that's the PC side SDK. I don't have as much
experience with the mobile SDK, but that's an, you know, a native Android application SDK that
you can also download that works for things like the Oculus
Gear, the Gear VR headset that we ship with Samsung, or the Oculus Go headset that we
announced at our last Connect.
So if you're a game developer and you're using any sort of game engine, would I be correct
in assuming that the game engine itself might have Oculus support?
Yeah.
So you wouldn't have to rewrite anything yourself
if your game engine is already supporting Oculus.
Correct, yeah.
So Unity and UE4 both ship with Oculus integrations now,
and we work very closely with them
to make sure that those are up to date
and just work right out of the box.
You have to make,
depending on the sort of game you're working on,
you have to make some changes,
possibly even to the design of the game,
but the integration with the engine
should be relatively trivial.
Like a lot of times, you know,
I worked on console and PC games for a long time
and you'd have cinematic scenes
where you would grab the player's viewpoint
and whip it around and show them
exactly what you want to show them.
In VR, you know, if you're not very, very careful, that's also a very bad idea. If you start grabbing someone's viewpoint and accelerating around, you're going to make them sick pretty quickly. So it's a mindset change for some people, but it's not
as novel a concept as it was three years ago. When you're working on this and working in the
research, have you had people fall down or vomit or something while testing?
Almost never a reaction that strong.
Almost never.
Well, so unfortunately, I can't be absolute because I don't know everyone that's been
in there.
But I can talk to my own experience.
So I used to work for a while on the tracking for the VR headsets.
And when you're doing that, you put breakpoints in the code
or you have bugs.
And certainly the most disconcerting experience
I've ever had in VR is,
if I hit a breakpoint in my code,
it freezes the world, basically.
And if you're turning your head,
it suddenly feels like someone stuck a big thumbtack
right into your forehead
and the world starts dragging along with you.
So that's the nope moment that I mentioned, where you just rip the headset off. So you get used to it after a while. And people are more and less sensitive to it as well. So a lot of the work that's been done on the SDK is literally just because we start to understand those issues better and can mitigate them better. But we also put up big warnings that say, hey, observe yourself, don't push it.
And we tell developers these things so they can make the experiences more comfortable
for their user.
It's a huge focus for Oculus.
I'm curious, and I guess I'm taking a step back in the conversation.
Sorry, Rob.
But since we were just talking about the tracking and you're talking about the possibilities of actually making the user ill to some extent,
how much effort has it taken to get that tracking feel as real-time as possible
so the user feels like they're in this and there's no lag as they're turning their head or whatever?
That has gone on for three and a half years. It's still ongoing.
There's a whole computer vision team, um, at the, you know, working on the Oculus product
itself.
That's still working on the tracking for the headset.
That's working on the tracking for new headsets, um, as well as, you know, the work that we're
doing at Oculus Research to enable different form factors in the device. There's always... you know, I talked a little bit about how accelerometers and gyroscopes are not perfect devices, because they have noise. They have little stochastic variations. They're physical devices, so when they measure things, they get it a little bit wrong, and there's some randomness in the system. The better you understand the character of the noise in the system, and the better you are able to understand the character of the noise in your other sensors, like the positional tracker that's out in the world,
There are mathematical techniques that you can use to compensate for that.
You know, it's not giving away technical secrets, luckily, because we've talked about it before. Oculus uses what's called an extended Kalman
filter to do the tracking, to take these multiple input sensor streams and combine them
into an estimate of the user's head position, because that's all we have is an estimate.
And it has some associated uncertainty.
And the better you understand the noise in your sensors, and the better you understand the noise
in your computation, and the better you understand the user's head motion itself, right? What does a
typical user do in the situation? If my head is at rest, well, I happen to be six feet tall. So
let's say six feet off the ground,
at zero velocity, I'm not going to suddenly be upside down three feet off the ground in the next frame. That's really unlikely. So the better you can model the user's behavior as well,
the better your estimates of their true position are and the more comfortable the experience.
That work is always ongoing.
Interesting.
Yeah, it's a lot of fun. I will say that picking up that mathematical framework has been the most fun for me coming over, because there's nothing like it in computer games. This only happens when you have all of this sort of noisy observation of the real world, and then you have to sort of incorporate that into some sort of programming system.
And it's a lot of fun.
Are you working on that in C++?
Yeah, absolutely.
Oh, wow.
Nice bringing it back to the main topic.
Yeah, so all of this internally is C++.
A lot of the prototyping, you know, when we have people doing mathematical work,
a lot of the first prototypes are done in MATLAB.
But ultimately, like I said, we're putting images up on the headset at 90 frames a second.
We have to estimate the position of your head even faster than that.
We estimate the position of your head at a kilohertz.
So all of that is extremely performance critical.
All that works in C++.
So I am aware that a lot of mathematical stuff is prototyped in MATLAB. Do you have any techniques for porting from MATLAB to C++?
I wish. Not really.
So typically you're not talking about,
it's not like we're porting 7,000 or 8,000 lines of MATLAB.
Typically, the underlying mathematical kernels of these things are maybe 20 or 30 lines of code because MATLAB is super compressed that way.
And, you know, there's a whole bunch of libraries that make that a lot easier. Mostly at Research, we use Eigen for large matrix-to-matrix or matrix-to-vector work
or doing matrix factorizations, which are important for these mathematical frameworks.
So that compresses a lot of it.
That takes a lot of the work out of our hands.
On the Oculus side, they use a library called TooN for doing the same thing,
which is another templated matrix library, which makes it a lot easier.
Okay.
So I think you said that Oculus has been around for about three and a half years, which brings
you, you know, you started off when C++ 11 was already around, if not C++ 14, are you
able to use the most modern versions of the compilers?
Um, so Oculus Research has been around for three and a half years.
Oculus was around for another year before that as a Kickstarter.
Oculus Research was basically founded when Facebook acquired the company.
So that gives you that date.
So we were, for the stuff that we ship to end users,
I think internally they use C++14 constructs,
but the stuff that we export to the end users is, again, sort of suit and tie, very conservative,
because we don't know what compilers they're going to be using.
At Oculus Research, we sort of have much more control over that
because we're not shipping anything to end users.
So we're on G++, whatever.
I'm actually working on Linux rather than Windows,
so we have even a little bit more control there.
So we use C++14 constructs as well.
Although we also have interactions with CUDA compilers,
which sometimes choke a little bit on some of that stuff.
So we try to be conservative there as well.
Are there any specific C++ 11 or 14 features
that you find useful
to get some of this performance-critical work done?
It's an interesting question,
and I thought about it a little bit,
obviously, before coming on the show.
We have a lot of people from the games industry
at Research and Oculus, of course,
and we tend to be pretty conservative
about the abstractions we deploy
because we need to understand everything
that's going on under the hood.
So things like, you know, unique pointer or auto,
some of the stuff that's really easy to understand
all the way down to the disassembly,
all of that stuff gets deployed all the time.
We're just starting to write some
internal stuff that uses, you know, futures and restructuring some multi-threaded code that way,
on my team at least. I can't speak for everywhere else in the organization. One of the interesting
things about research is there's tons and tons of different projects going on and the team sort of
select the appropriate level for themselves. So that's my team. We tend to be, we're, you know,
at C++ 14, we're not taking on board any 17 stuff yet because,
well, for internal reasons, let's just say we have to,
we have to be a little bit conservative because there are some,
some blessed compilers that are important in the Facebook,
in the Facebook ecosystem,
and those, I believe, are 14 right now and not 17.
So we're taking on board the stuff that is useful for sure.
There's, like I said, Unique Pointer and stuff like that.
It's a no-brainer.
Auto works really well for us,
but we're pretty conservative in general.
So you've stated that you work on Linux, and you've also said that the SDK works with many different game engines. Can you deploy Oculus games on every platform?
Sadly, no. Um, right now it's Windows only.
And a lot of that...
Yeah, I know. Right. Um, well, so Windows plus Android, let's say, because the mobile platform is really important as well. Those are two different SDKs, though.
Um, the Windows-only thing was a decision they had to make basically when we were shipping. The timings of how images get presented to the user are super, super critical.
And right now, it's only on Windows that we have the sort of driver support that lets us control that all the way down sort of to the level of the HDMI scan out.
Interesting.
Right, because if you think about... you know, we talked about delaying images by 11 milliseconds causing the user to feel uncomfortable, or that their head motions weren't being immediately represented in reality. If I turn my head and the world sort of drags behind it by 11 or 22 milliseconds, that's really bad. It feels very, very bad is the only way I can say that. If you know that's happening, you can compensate for it by just predicting the head position forward 22 milliseconds.
But that prediction forward is not perfect. It's not the same as knowing exactly where the user's head is. So right now, it's really only on Windows that there's broad support for AMD and NVIDIA and Intel graphics cards
to control it all the way down to that level
so that we know exactly how much latency there is in the system.
Is this an issue that you believe is solvable at some point?
I think so, for sure, yeah.
Certainly, I would love to have Linux support,
but I'm not actually sure what the plans are on that side. That would be something.
The other thing, of course, is that the driver for the headset is only one component of
the ecosystem as well. You know, Oculus has to ship its platform software on Linux. They have to,
which is probably not a significant issue. There's no comparable
sort of, you know, the drivers don't work as well issue there. So we could probably solve that,
but it's another thing. So that would be a decision for the Oculus product team to sort
of assess when that's ready to go. Okay. You mentioned the Oculus mobile SDK, so that's
putting an Android device into the headset, right?
Yeah.
Is that also a C++ SDK?
Is the code you write for the Oculus Rift similar or the same as the code you write for Oculus Mobile?
So, yes.
I think the broad strokes of the SDK are the same, although this is where I'm on really shaky ground because I haven't looked at that SDK in a long time.
You're still mostly concerned with taking a left eye buffer and a right eye buffer and then presenting that to the user in the fastest possible time. And I think everyone,
most people that are writing games for Gear VR definitely do that in the native API, right?
They're writing C++ code right to the metal. They're not using the Java layer of Android. Although I think that may be supported, but probably not recommended
unless you're doing something really simple. If you're, you know, rendering a couple of rectangles in the world to just show people video or whatever, maybe that's sufficient.
But if you want to do an actual game, you're definitely going to be in C++. You need access
to all of the hardware and you don't want anything in your way.
I just am curious a little bit
more about the Oculus hardware.
What kind of resolution
is it able to display?
So the Oculus Rift, I mean, in rough terms, it's probably about 1,000 by 1,000 for each eye.
It's an interesting... So that's just in raw pixels because, you know, you can take these things apart and just look at the screens.
The more interesting metric for VR is sort of pixels per degree, right, which is not the normal sort of thing you think about when you're thinking about a monitor or a 4K TV or whatever. But in this case, you have to take those thousand pixels horizontally, let's just sort of arbitrarily say, and you have to spread them over 90 degrees or 100 degrees in order to create a large field of view. And if you do the division there, and sort of ignore optical aberrations or whatever, you're talking,
you know, roughly 10 pixels per degree or five cycles per degree, which if you go to an
optometrist and say, you know, at normal distances, I can see, you know, five transitions from white
to black per degree, they will tell you that you need vision correction or you're not allowed to get in a car, right? So all of the headsets that are out there right now are,
they're amazing.
They provide people with really cool experiences.
I mean, they were magical enough to make me change careers,
but they're really, really far from sort of the limits
of the human perceptual system.
And so part of that is we're just tied to the screen technology that was available. I mean, no one needs a 10,000 by 10,000 pixel phone screen, because you hold it at arm's length; that would be a complete waste, so no one's made those. And so VR is just getting to the point where we can start to drive screen technology, to drive the panels and display options that are available, towards things that are more appropriate for fitting up against the human perceptual system. So the Gear VR, just to give the other number, is roughly in the same class. I think it has a little bit more horizontal resolution, just because the Samsung phone is really, really high res on its long axis, but it's in the same general class. And that's the same thing for the Vive or the PlayStation VR. They're sort of in that same general resolution class.
I was just thinking from your conversation about augmented reality at the beginning, like, I could be wrong, but it feels like high resolution seems more important for augmented reality.
It is.
Yeah, so, yeah, if you have low-res pixelated augments that are sort of registered against the very high resolution, I mean, the world is always at perfect resolution, right?
Right.
It has zero latency. It has zero drift. It doesn't ever move unless you push on it, which is really inconvenient for us, because we have to sort of get to that fidelity level before things get interesting.
You know, the interesting thing about augmented reality is, you know, the level of optical challenge there is orders of magnitude more complicated than virtual reality.
Virtual reality right now, all of the headsets, if you go to iFixit or whatever, and you look at the teardowns of the Vive or the Rift or the PlayStation VR, the optical architecture
for all of them, if you just wrote it on a Post-it note, would be magnifying lens in
front of cell phone screen.
That's all of them right now.
And so if you try to get to a system where you have to steer photons into the eye
in a way that makes you perceive objects in the real world,
but you have to also let the real world through at the same time,
we can't do magnifying lens in front of cell phone screens anymore.
So the optical challenge there is really, really hard.
And if you look at the Microsoft HoloLens, they're doing it with waveguides, which is an amazing technology.
But, you know, the HoloLens is actually really, really high res.
It's just a very low field of view.
So that's kind of how they cracked that nut.
Yeah, I've tried the HoloLens, and when you put it on, you kind of instantly get a little bit disappointed with the field of view. It's just not what you would expect from, like, watching someone else use it.
Yep. They put out those amazing videos, right? You put it on and you're like, okay, I understand why this is this way, but you know, you expected the huge globe where you could see the whole thing.
And we're just not there yet.
The problems of optics haven't been solved in that domain.
Which is why we have a huge team of really, really smart optical scientists at Research working on those problems.
Okay.
Well, is there anything else you wanted to share before we let you go?
Is there like an Oculus Research dev blog or anything like that the listeners could look at?
There are. I would encourage everyone, if you're interested in any of this stuff, I would say a couple of things. First of all, if you search for Michael Abrash VR talk, whatever, on your favorite search engine,
you'll see the director of Oculus Research,
Michael Abrash,
obviously a very well-known games programmer,
goes into a lot more detail
than I did on a lot of these things
and sort of covers more broadly
what we're working on.
And then the other thing I would say is
if you would like to help,
we are always looking for strong C++ programmers at Research. So if I can plug that, I don't know if that's allowed on your show, but like if any of that sounds
interesting to you, drop us a line and you can just find us at oculus.com. Okay. Well,
thank you so much for your time today, Dave. No, absolutely. I had a blast. Yeah. Thanks for
joining us. And follow CppCast on Twitter. You can also follow me at @robwirving and Jason at @lefticus on Twitter. And of course, you can find all that info and the show notes on the podcast website.