CppCast - 5G Network Computing
Episode Date: February 11, 2022
Yacob Cohen-Arazi joins Rob and Jason. They first talk about an update to Microsoft's GSL library and the upcoming LLVM v14. Then they talk to Kobi about work he's done at Qualcomm with 5G networks and how 5G is about a lot more than just bandwidth improvements.
News: Nerd Talk - Doug McIlroy & Brian Kernighan; GSL 4.0.0 is Available Now; gsl-lite; I don't know which container to use (and at this point I'm too afraid to ask); LLVM/Clang 14 ends Feature Development with better C++20 support, Armv9
Links: San Diego C++; Qualcomm Careers
Sponsors: Use code JetBrainsForCppCast during checkout at JetBrains.com for a 25% discount
Transcript
Episode 336 of CppCast with guest Yacob Cohen-Arazi, recorded February 9th, 2022.
This episode of CppCast is sponsored by JetBrains.
JetBrains has a range of C++ IDEs to help you avoid the typical pitfalls and headaches
that are often associated with coding in C++.
Exclusively for CppCast, JetBrains is offering a 25% discount for purchasing or renewing
a yearly individual license on the C++ tool of your choice: CLion, ReSharper C++, or AppCode.
Use the coupon code JetBrainsForCppCast
during checkout at JetBrains.com. In this episode, we talk about updates to GSL and Clang.
And we talk to Yacob Cohen-Arazi from Qualcomm.
Yacob talks to us about 5G networks and edge computing.
Welcome to episode 336 of CppCast,
the first podcast for C++ developers by C++ developers.
I'm your host, Rob Irving, joined by my co-host, Jason Turner.
Jason, how are you doing today?
All right, Rob. How are you doing?
Doing okay.
Do you have anything you want to share before we get started today?
I think I'll do a random teaser for my YouTube channel.
I've had one problem that's really been annoying me for the last five years with constexpr.
And I found a solution, finally, after five years. It took five years, the input from three different people,
and a new C++ standard version for me to find a solution that I like.
I recorded an episode about that.
It'll be going live in three weeks from when this airs.
Okay, so we have to wait a bit longer for it then.
Yeah, I've got next week's already queued up.
The one after that is a precursor to the one that I just sneak peeked.
So you want to watch the precursor as well.
So that's going to be, what days do you usually drop episodes?
Always on Monday morning, my time.
So we're talking February 28th, last Monday.
Sounds believable.
Okay.
Yeah, that sounds about right.
Yeah, but five years in the making.
And I think it will be like on the order of like the second longest video I've ever put up.
If you don't include live streams.
Well, I look forward to seeing that and seeing what the solution is.
All right.
Well, at the top of every episode, I'd like to read a piece of feedback.
We got a couple of tweets about last week's episode.
This one's from Elliot Barla saying, delightful episode.
Brian Kernighan is a treasure.
And he certainly was.
It was really great talking to him last week.
Yeah, we got a ton of retweets and stuff on that one.
Yeah.
And I also wanted to just mention,
it was another Twitter user
who just DMed the CppCast account
suggesting we have Brian on.
And when he said that, he wrote,
it was probably asked
a million times, but Brian Kernighan still does very interesting talks and videos. And he actually
sent me a YouTube link to something Brian had done recently. And he has so many good experiences in
computer science and operating systems. And it was not the millionth time that we had gotten Brian as
a suggestion. It was the first time as far as I remember, but it was really
easy to get in touch with him and he responded right away and it was great having him on. So,
you know, don't assume that someone you would like to hear on the show has been suggested a
million times and we can't get in touch with them. You know, there's plenty of people out there who
just haven't been suggested yet and would be more than willing to come on the show.
It's also just, I think, worth noting that people that you think of like,
oh, I'm sure they're completely unapproachable because they're like the, I don't know,
most amazing famous person or whatever in the community.
So many of them are just like, oh, sure, why not? I have time for a podcast.
And some of them, like Brian, apparently do lots of other stuff. Like if you search for him on YouTube, he has a ton of other videos of interviews and things like that that he's done.
That's fun.
All right.
Well, we'd love to hear your thoughts about the show.
You can always reach out to us on Facebook, Twitter, or email us at feedback at cppcast.com.
And don't forget to leave us a review on iTunes or subscribe on YouTube.
Joining us today is Yacob Cohen-Arazi. Kobi is a principal software engineer at Qualcomm,
San Diego, where he works on the next generation of 5G cellular and radio access networks.
In the past, he's also worked on machine learning, automotive, Wi-Fi, and 4G domains.
Kobi is currently working with Dorothy Kirk on an upcoming new C++ object-oriented book and
helping Dorothy to modernize C++ code. He's been running the San Diego C++ meetup for the past three years where he presents various topics that hopefully help others write
better C++ code. He holds a bachelor in computer science from the Academic College of Tel Aviv,
Yafo, and Kobi likes cycling and swimming as well as taking Pilates classes at his wife's studio.
Kobi, welcome back to the show. Hey, Rob, Jason, thank you for having me again.
Yeah, so it's worth mentioning here, this is the first time we've ever done this, right?
Obviously, we've had repeat guests before, but never maybe this close together.
Never this close together.
We had Kobi on two weeks ago, and we never actually got to the topic that he most wanted to talk about.
So, we have him back.
But the astute listener here may have noticed that you changed your bio in the last two weeks.
Yeah.
So my wife said, you mentioned the kids.
You mentioned dogs.
How about me?
So here I mentioned, yeah, Ellen, I hope she enjoyed the show.
If I remember correctly, I think your dog had
a normal
name, like a woman's name.
Julia. I hope
listeners weren't confused.
No.
And she's named after the programming language.
Oh, interesting.
Should have named her R instead.
Okay.
Does your wife run her own Pilates studio then?
Yeah, yes, yes.
Oh, that's cool.
I assume that business was affected a lot
over the last two years.
Yeah, she's also a project manager
at a company, in the political domain.
So it's not her only
gig. She's doing other stuff.
So I have my own spot
on Sunday. Nice.
Oh my goodness. Last time I tried to do
Pilates was before I had started
working out at all and I did not have
the core strength for that. I think I
could probably do it now, but
12 years ago, no.
It's very challenging.
All right.
So, Kobi, we got a couple
news articles to discuss.
Feel free to comment on any of these.
Then we'll start talking more
about the work you do at Qualcomm, okay?
Okay.
All right.
So this first one is an article
on Microsoft's C++ blog.
And this is GSL 4.0 is available now.
And this is Microsoft's C++ Core Guideline Support Library.
And yeah, they released a new version.
There's a bunch of notes here about they've deprecated certain things,
changed the format of the files.
They don't all have GSL in the name anymore.
That did seem redundantly redundant.
Yeah, they used to be like GSL slash GSL span or something like that.
So now it'll be just GSL slash span.
I don't think I've ever gone ahead and tried using the library.
I do see that they're able to use it on GCC and Clang, though, which is really nice.
It's not just targeting Microsoft C++ compiler.
Yeah, I think it's been supporting all three major vendors for a while. Do you use it, Kobi?
I use gsl-lite, mainly because of two things. One is I wanted to have span. And again, gsl-lite
is kind of a lightweight single header. And finally, the other thing, so basically
two things: it is hosted on Conan, so it's
easy to pick up if you have Conan
but it's a great addition if you need
a few things that you can't have
maybe because you're not on
a specific standard library version
yet. Right, and it's worth mentioning
gsl-lite: if you're stuck
using an earlier version of C++,
like C++98, you can use
gsl-lite. Exactly, yeah. I have to admit,
I've never actually used the GSL myself, although when certain topics come up when I'm teaching a
class, I might tell my students, oh, well, if you want that feature, you can use a lightweight
wrapper, like gsl::not_null or whatever, and I'll point them at the GSL. So I've recommended it many times, but I've never actually used it.
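For anyone who hasn't tried either library, here is a minimal sketch, not from the episode, of the two pieces mentioned above as they look in Microsoft's GSL; gsl-lite offers equivalent types, though its header names and configuration differ.

    #include <array>
    #include <cstdio>
    #include <gsl/gsl>   // Microsoft GSL umbrella header

    // gsl::span is a non-owning view over contiguous data.
    int sum(gsl::span<const int> values) {
        int total = 0;
        for (int v : values) total += v;
        return total;
    }

    // gsl::not_null documents, and checks at runtime, that a pointer is never null.
    void print_text(gsl::not_null<const char*> text) {
        std::puts(text.get());
    }

    int main() {
        std::array<int, 3> data{1, 2, 3};
        std::printf("%d\n", sum(data));  // the span is constructed from the array
        print_text("hello");             // passing a null pointer here would terminate
    }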
Next thing we have is an article on Belay the C++.
And this is, I don't know which container to use.
And at this point, I'm too afraid to ask.
Nice meme reference.
And it's just a nice article going over the different standard containers,
saying kind of by default, if you want a sequence container,
you should go with vector. If you want associative, you should go with map. But if you do need more performance or
other specific criteria kind of going into when you should use some of the other container options.
I feel like there's something missing from these tables. It doesn't ask what the access pattern is going to be. Like, maybe you have
to do a bunch of inserting all over the place, whatever, to build your data set, and then when you're
done, you're always just iterating over the data. That happens in real code: you build up
something when you load an input file,
you get it all organized, and then you need to use it when you run your simulation or whatever.
So if that's the case where access time is more important than setup time, then vector
always wins because it's got the cache in its favor.
Yeah, that's a good point.
You have any thoughts on this one, Kobi?
It's a great blog. I really liked reading it.
I'm almost always reaching for std::vector
and maybe std::array if everything is known at compile time.
I've seen all kinds of tweets saying
you should never use std::map and similar.
I kind of disagree.
I think every container can be used in all kinds of scenarios.
It really depends on the use case.
It depends on what you want to do.
Obviously, in some environments,
std::map, for example, would not work well
and you need a cache-friendly one like
vector or array. It's a very good summary.
I think people just
need to stare at the table and
realize what needs to be
used and when.
I really like that summary.
I think I should clarify
too, I'm just talking about
access times when I'm making my argument.
There certainly exists a world, and to Kobi's point, you use the list because that's the thing that makes sense
while you're building the data. So you can get the best of both worlds, too: you can build the
data up with a list and then just do a quick copy into a vector, so that you've got fast access times
for the rest of the program's life or whatever. There's definitely "use the thing that makes sense to use,"
but I just wish that the article had mentioned
cache-friendliness for access a little bit.
I missed that part if it did say that.
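To make that concrete, here is a minimal sketch, not from the article or the episode, of the pattern Jason describes: build the data up in a std::list, then copy it once into a std::vector and iterate over that afterwards.

    #include <cstdio>
    #include <list>
    #include <numeric>
    #include <vector>

    int main() {
        // Build phase: insertions all over the place; a list keeps each insert cheap.
        std::list<int> building;
        for (int i = 0; i < 10; ++i) {
            building.insert(i % 2 ? building.begin() : building.end(), i);
        }

        // Switch phase: one copy into a contiguous, cache-friendly vector.
        std::vector<int> data(building.begin(), building.end());

        // Access phase: iterate over the vector for the rest of the program's life.
        std::printf("%d\n", std::accumulate(data.begin(), data.end(), 0));
    }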
Okay, and then the last thing we have is a blog post on phoronix.com,
and this is LLVM/Clang 14 ends feature development with better C++20 support
and Armv9 added. And I mean, that headline kind of gives you all the information from the article.
There's a couple other things going on here. The Clang 14 development is essentially done. There
should be a pre-release build out very soon, but right now they're just going to be focusing on bug fixes
for that release before it officially goes out.
But I think they have coroutine support working now.
Yeah, they say also the C++20 format header is also in there.
Oh, I missed that.
Yeah, because I was looking to see which C++20 features
actually got added in 14. bit_cast, format. Where do you see
bit_cast being mentioned? I went to cppreference to try to drill down into the details here. It
looks like spaceship operator usage in the standard library. That's a fascinating one.
I don't know if either one of you noticed that, but if you go and look at the standard, the spaceship operator
actually removed like thousands of lines of text from the standard, because string had like
20 different comparison operators explicitly mentioned, and now it's just replaced with two spaceship operators or whatever. So anyhow, I guess 14 also merged that in.
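As a quick illustration of the same mechanism in user code, a small sketch of my own rather than the standard library wording being discussed: one defaulted spaceship operator stands in for the pile of hand-written comparison operators you needed before C++20.

    #include <cassert>
    #include <compare>
    #include <string>

    struct Version {
        int major_part = 0;
        int minor_part = 0;
        std::string tag;

        // One defaulted operator<=> (which also brings a defaulted operator==) replaces
        // the six comparison operators pre-C++20 code had to write by hand for this.
        auto operator<=>(const Version&) const = default;
    };

    int main() {
        assert((Version{1, 2, "beta"} < Version{1, 3, "beta"}));
        assert((Version{1, 2, "beta"} == Version{1, 2, "beta"}));
    }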
So, Kobi, when we had you on last time,
we talked a lot about using Docker for development,
and we never actually got around to talking about the work you do at Qualcomm.
So could you maybe start off by just giving us a little bit of an overview
about the C++ work you do at Qualcomm?
Yes, sure. And a small disclaimer,
a lot of things that I'm going to
mention here, kind of my opinion about 5G, the industry, the trends. It's a very interesting
domain. The scope is huge. So I'll talk about some of it, not everything. That's my disclaimer.
It's not as big as the cpp.chat disclaimer, which is a huge one.
You know, the first time I was listening to cpp.chat,
I was listening to John.
I was like, what's happening here?
Like, what is he saying?
It was a long time ago.
I was young and I was kind of naive.
So thank you again for the opportunity. At Qualcomm, in the past few years, I was looking at two different domains.
One is XR in the context of 5G.
And another thing that I'm looking at right now is private networks, again, in the context of 5G.
Let's start with XR.
So XR is a fascinating domain that 5G enables, and I'll slowly kind of trickle in all kinds of advantages of 5G versus 4G.
Like what is new in the standard?
Why is it such a big deal?
Why is it enabling so many things?
It's not only mobile, it's beyond.
I'll talk about automotive and mission critical scenarios.
But also entertainment.
So XR and gaming.
Though XR is not only entertainment,
there's other applications
for remote control
and similar that XR can
help. Can we get a definition
on XR? Oh, yeah.
Sure.
We take VR, we take AR.
Instead of saying AR, VR, we say just
XR.
So augmented reality and virtual.
So virtual reality is basically the entire world is virtual.
Augmented reality, we take the real world.
We augment some things on top of that.
And there's a lot of use cases for augmented reality all day.
Glasses that we hear that are planned in the future,
you can walk with glasses and have information displayed on the glasses.
Again, this is where 5G would shine.
Obviously, 5G is the multiple releases that we keep innovating.
We keep adding features, new releases, and it's not stopped at 5G.
There are other releases that are in the works.
So XR is an interesting use case.
Traditionally, you would have your own VR or AR headset that has everything, all the compute,
is encapsulated in a single device.
Now, there are multiple issues with that.
For example, battery is an issue.
How much compute can you have?
You can't have like a huge GPU in a small device.
So what do you do?
This is where we need to have a better connectivity,
better responsiveness, better latency,
and you split the compute between two things.
One is the mobile and another one is kind of an edge compute.
So that's where you hear a lot about edge computing.
And 5G is basically, it's like a bus.
It's like, think about like a PCI bus that can connect a very strong compute that lives on the edge of the network and your mobile.
Now, it's very challenging, the whole system,
because our brain, it's really hard to trick.
It's really hard to trick how we see things.
When we move in the virtual world, everything needs to be rendered very, very fast.
So the headset, the device that you wear,
would be basically capturing your movements,
what are you doing,
and this is transmitted to the network.
The network needs to render a new frame,
and this frame needs to come back.
This round trip is very crucial
to be happening very, very fast. And
it requires a very low, you need to have a low latency and a very responsive network in order
to achieve such requirements. And 5G, one of the things that actually are different from 4G. So in 4G, we basically had bigger bandwidth,
like higher spectrum and so on and so forth.
So we monetize on that.
In 5G, we have responsiveness.
We have more capacity of users.
We have more bandwidth.
So from, let's say, 20 megahertz,
we have 100 megahertz,
and this can be even more if you aggregate spectrum
and you get more than 100 MHz. Basically, you have
both the responsiveness, which is a new thing that is
baked into the standard. The standard is huge, by the way. If you thought that
C++ standard is huge. I was going to say, compared to C++20.
I think, again, it's pretty amazing the amount of innovation that
is put into this protocol.
And there's so many scenarios.
There are so many things and knobs that you can change in the network in order to address a specific use case, a specific spectrum, a specific deployment. So in this XR use case, we are able to split the computing between network and the
device. And if you look at it, we achieve the high capabilities of a system that used to be
possible only if you had everything in a very expensive device
that you need to kind of wear.
And this is a huge thing,
and we demonstrated that in Mobile World Congress
at least last year.
And I'm not on the XR project anymore.
I'm in the private networks.
But innovation, improving the user experience
is ongoing in R&D and production, which is pretty amazing what is happening right now.
I just want to pause for a moment, make sure I understand kind of the difference between 5G and previous technologies like 4G and 3G.
Because when I hear that, you know, as just a normal person with a cell phone in his pocket, I thought that kind of going from 3G to 4G to 5G
is just about bandwidth.
But it sounds like you're saying there's a lot more to it
about there being edge computing
and being able to do things like this XR computing
at some point on the edge of the network.
And is that kind of what we're talking about?
Yes, yes.
Thank you for bringing this up because there is a huge difference between what 4G enables and what 5G enables.
So again, 4G was mostly about higher bandwidth, but 5G is addressing other markets that we can't achieve with 4G.
Let's take autonomous driving, self-driving cars.
This domain, this specific use case,
requires very fast and very responsive network.
It needs to be reliable, resilient.
If a car needs to communicate with a pedestrian,
needs to connect to another car, to
communicate with another car, you can't really have the traditional "oh, I'm going to wait for my time,
and when my time comes I can transmit." So maybe I'll pause and explain about wireless
networks. They can be cellular, or they can be Bluetooth or Wi-Fi.
You have the air interface.
It's a protocol.
It's a protocol that both ends agree on.
And in a cellular network,
there is a protocol that says,
like, I'm a network,
and I'm going to give the device
a chunk of time
and a chunk of spectrum in order to transmit.
And every device has its turn to transmit and receive data.
It's very, very organized.
Like traditional Wi-Fi, Wi-Fi 6 changed it a bit,
but traditional Wi-Fi is pretty much different.
Users can try to transmit.
If the air interface is free, they can transmit.
But it's not really as organized as cellular.
We're kind of simplifying things just to give you the sense of it.
5G was designed from the get-go, from the initial draft,
in order to address these use cases of automotive factories,
which is basically IoT,
instead of having just maybe a single quality of service
or maybe a couple of them that weren't much used back in 4G,
we have a variety of kind of a quality of service.
From "I just want to download something,
I want to tweet something" to ultra-reliable, low-latency communication, where a car can say,
"okay, I need to transmit something to another car, I can't wait for my turn." It is designed to have
subset of the devices with a very high priority in the network in order to transmit.
And it's designed in a way that even if other devices are talking at the same time, the ultra-reliable low-latency device would have priority and resiliency getting the message across from one device to another device.
So think about drones, think about automotives, think about factories. So today we are looking
at private networks to deploy in factories. So the industry is moving into this direction that instead of wiring things in Ethernet,
you have 5G, which is deterministic.
You have, let's say, a one millisecond, if you need a one millisecond latency guaranteed,
you would have that.
This is something that previous protocols, previous generation didn't have. We needed to develop a new protocol to address markets that
we couldn't get to in the previous standard. Now, think about first responders. Think about
military disaster area where you need to deploy a network quickly and have this quick communication between,
quick and reliable communication between the parties.
This is something that was designed in 5G in order to address that.
It's a huge difference between 5G and 4G.
It's not only, oh, I have this millimeter wave.
So basically, XR would be deployed, let's say, with millimeter wave.
It's something that we didn't have in 4G.
It's not only the bandwidth, it's also the latency,
the responsiveness of the network.
It's a huge thing that we have right now in 5G.
I'm curious about something you mentioned a moment ago.
You said they're even moving in the direction of across a factory
having 5G devices instead of wiring.
And that just sounds confusing to me.
Like if you had the ability to wire,
it sounds like that would be more reliable and consistent
than something that could possibly have outside interference.
Yes.
So a lot of factories,
they have a requirement to move things around very frequently and rewire things. And it
becomes a mess to do that. So if I can guarantee something that is as good as Ethernet, and there
is a requirement to move things around and you need mobility. Let's say you have robots moving around. How do you do mobility?
You need communication that is reliable and that proves itself. And 5G is what we are looking at
for such deployment. And it has higher reliability than you were saying, than like Wi-Fi 6 or
whatever today too. Yes. 5G has reliable low latency that can be used in various scenarios.
It is very reliable compared to other.
It is a licensed spectrum.
It's a big thing, so only licensed can actually use it.
Wi-Fi is an unlicensed spectrum.
Now, Wi-Fi has its own benefit and its own usage.
I'm not saying that Wi-Fi is not capable of doing similar things,
but there are some
things that are easier to be done with 5G with a specific spectrum and similar.
Yeah, I was just going to ask to clarify, I'm not going to set up a Wi-Fi, excuse me,
a 5G router from my second floor down to my basement to get higher speed down here.
That's not a thing, right? Okay.
Yeah, yeah. It's pretty much,
you'll see that in enterprise solutions,
there is already offering of 5G as a backhaul.
So instead of having your cable,
you'll have a 5G backhaul from your home.
Inside, you would have either Wi-Fi or Ethernet,
but the backhaul is 5G.
We also see all kinds of use cases where you have subscription of 5G in cars.
So that's another thing because of the bandwidth and the streaming capabilities.
It's now very useful and possible to do such a kind of subscription to television,
movies, and so on and so forth inside cars.
Can you put this in terms that maybe we would all get
how many gigabits per second or whatever are capable with 5G?
So here's the thing.
You can use your favorite search engine.
I heard it was a few days ago that I think it was millimeter wave,
but I can't remember exactly.
I think it's 8 gig that they're hitting right now.
But if you Google for that, you'll see people walking around and there's testing in the field,
like things that are deployed and there's all kinds of testing in demos that like the new
generation and maybe tweaks of the protocol for the new releases. But it's definitely gigs of bits per second. Now,
there are all kinds of deployment
of 5G in different spectrums.
For example, I read that
T-Mobile has, I think, 600
megahertz spectrum,
which means it might be maybe
a lower speed, like maybe
150 meg, but
longer distance.
So the higher the spectrum,
so let's say we're mostly looking at either sub-6 or millimeter wave.
So millimeter wave, you probably understand that it has a short range,
but a higher speed because slots are really small
and you can pack more bits into time and spectrum.
But if you want to deploy
5G in the lower side of the spectrum, you can still achieve higher capacity, higher bandwidth,
but kind of a longer range. There is no one single answer, and if you look at the carriers in the US,
especially in the US, because that's the side I'm looking at, the US, but there's also other countries.
So T-Mobile would have a specific spectrum and
AT&T would have a different spectrum and so
on and so forth. So I'm just
reading articles and every day
there would be some new thing that
I would learn. It's a huge
area of interest
there's a huge scope and there's a lot
of things happening and I know
some but not everything that happens in this domain.
So when you're talking about, if you don't mind,
get back a little bit to the factory devices or whatever,
like the private network thing that you've mentioned a couple of times.
If you want a private network,
are you talking about an organization that's large enough
to be able to license their chunk of spectrum and do that just for their own thing?
Or are you talking about like, well, I'm going to set up my own private network that runs over 5G, but I'm using T-Mobile as my carrier for this?
Yeah, there are multiple ways to do that.
There are spectrums that are, I would say, like local without going into specific details. There are spectrums
that are allowed. Again, you need to go to a specific body and say, in this area, I want to
radiate. And you get the authorization to do that. And here you go, you get the specific chunk of the
spectrum that is kind of a common consumer one
that you are allowed to do that.
Again, it's not like you can go and deploy your own
and start radiating.
You do need to request for a specific chunk of the spectrum.
There's all kinds of spectrum chunks that are military,
so you don't want to collide.
You don't want to conflict with...
So again, it's organized.
It's not like, oh, I have my Wi-Fi access point
that I just purchased and I bring it up.
And yeah, my neighbor would be on the same channel.
So, you know, 5 gigahertz and 2.4,
they have multiple channels.
So you might collide.
And a tip for the listeners,
if you want to improve your Wi-Fi,
make sure that you deploy, you radiate on a different channel than your neighbor, especially in a dense environment.
It makes a huge difference; then the packets are not erased and they're not colliding.
So yeah, you can't like go and just radiate 5G whenever and it's organized.
It's much more organized. There's some 80s movie, and I can't recall the name of it right now,
but it's all about this one kid who's got a pirate radio station
and he spends like the whole movie trying to avoid the feds,
being able to triangulate where he is.
He's like broadcasting from his van.
It just reminded me of that.
I'm sorry.
I'm sure there's some listener here that knows what movie I'm talking about.
We'll pause this wonderful discussion for just a moment to bring you words from our sponsor.
CLion is a smart cross-platform IDE for C and C++ by JetBrains. It understands all the tricky
parts of modern C++ and integrates with essential tools from the C++ ecosystem, like CMake, Clang
tools, unit testing frameworks, sanitizers, profilers, Doxygen, and many others. CLion runs its code
analysis to detect unused and unreachable
code, dangling pointers, missing typecasts,
no matching function overloads, and many other issues.
They're detected instantly as you type
and can be fixed with a touch of a button,
while the IDE correctly handles the changes
throughout the project.
No matter what you're involved in,
embedded development, CUDA, or Qt,
you'll find specialized support for it.
You can run and debug your apps locally, remotely, or on a microcontroller, as well as benefit from the
collaborative development service. Download the trial version and learn more at jb.gg slash cppcast
dash CLion. Use the coupon code JetBrainsForCppCast during checkout for a 25% discount off
the price of a yearly individual license.
I want to kind of try to bring it back a little bit more to C++.
And it sounds like kind of the new opportunity with 5G for C++ developers is working on these edge computing projects that you mentioned, like XR.
So can I understand a little bit more about what one of these projects would look like?
I mean, where exactly would the server running some XR code be running?
It's not just in some server farm.
Is it somewhere closer to the actual satellite tower
where the 5G network is being hosted, broadcasted from?
Yeah, so for XR, we have a deployment of an indoor,
just to demonstrate,
let's say you have an indoor deployment of an XR system.
So you would have the computers that run the protocol stack.
So basically, a wireless network would take the air interface
and create IP packets, for example.
And from IP packets, it's all the way up to a regular protocol that you are familiar with.
It's either UDP or TCP or anything else that you need in order to transmit packets from
one end to another end.
So you would have the wireless protocol system, let's say, in one box, and it connects via very
high-speed Ethernet cables
to another box that would run a GPU.
And so basically it's a game engine.
So it's a game engine that renders your game.
The game engine doesn't really know
that there is a split rendering.
And the phrase split rendering is because
most of the rendering is happening on the GPU,
but there is some small amount of rendering happening on the device itself.
A lot of it is done in C++, by the way.
And why do we need a small amount of rendering? To simplify things, I would say that while you transmit information to the server, to the gaming server, so it can render the scene, user might still move around.
And you need to adapt to the new movement
in order for your brain to kind of not have this nausea
because you moved and you need to see the new scene.
So there are some tricks to have an additional rendering
in addition to the
major one that happened on the edge. So if you try to imagine you have the antenna, you're
connected to a box that runs the protocol stack of the wireless system. We have IP packets
coming back and forth from the wireless system, which connects to a gaming server that is sitting very close.
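As a rough mental model of the split-rendering loop being described, here is a heavily simplified sketch; the types and functions (capture_pose, render_on_edge, reproject_locally, display) are hypothetical stand-ins for illustration, not Qualcomm's actual APIs.

    #include <cstdio>

    // Hypothetical placeholder types and functions, purely for illustration.
    struct Pose  { float position[3]{}; float orientation[4]{}; };
    struct Frame { int id{}; };

    // Stubs standing in for the real headset, 5G link, and edge GPU pieces.
    Pose  capture_pose()                                        { return {}; }  // read the headset's current pose
    Frame render_on_edge(const Pose&)                           { return {}; }  // pose goes up over 5G; the edge GPU renders a frame and sends it back
    Frame reproject_locally(Frame f, const Pose&, const Pose&)  { return f; }   // small on-device correction
    void  display(const Frame& f)                               { std::printf("frame %d\n", f.id); }

    int main() {
        for (int i = 0; i < 3; ++i) {  // a few iterations standing in for a real render loop
            const Pose sent_pose = capture_pose();
            // The round trip (uplink + edge render + downlink) has to fit a tight latency budget;
            // that responsiveness, not just bandwidth, is what 5G adds here.
            Frame edge_frame = render_on_edge(sent_pose);
            // By the time the frame arrives the user has moved a little, so the device does a
            // small amount of local rendering to adjust the frame to the latest pose.
            const Pose latest_pose = capture_pose();
            display(reproject_locally(edge_frame, sent_pose, latest_pose));
        }
    }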
Now, when I worked on this XR system,
my team and I, we were working on improvement
of the whole user experience.
I just want to take this opportunity.
We needed to create a new C++-based system.
When we picked C++ 17, for that matter,
it was a couple of years ago.
I just want to mention our great experience having...
It was a greenfield project, not always possible.
Usually you have an already...
kind of something that you already need to work with,
a system, a C++ code that you already need to work with.
But this was kind of a green field,
kind of got lucky.
And we decided as a goal,
and I worked with amazing engineers with me,
and it was a kind of a combined effort
with another team.
And the other team lead was,
is amazing, a great person to work with.
So we had similar goals: we are actually going to have a C++17 project, that's the standard we picked,
a project that from the ground up will have the best practices.
So Jason, we took your starter project and it's very helpful.
I've got it. Just for the sake of it, I wanted to give a quick
shout out that I've gotten a ton of help from a bunch of contributors, and I need to do another
major announcement about it, because it almost looks completely different from how it did. I mean,
just as useful, but now more reusable. Anyhow. Yeah, so we decided that we are going to have everything literally by the book. And it was an amazing experience. I know there's a lot of questions and doubts about whether we can have C++ in such a highly demanding environment like 5G and XR. And the answer is yes. We trained people that came from
kind of the old non-modern C++.
This was one of the challenges.
And everything, after a few months,
everything was done amazingly great.
And things like using algorithms
accumulate as well.
I'm sorry, I'm only laughing
because I know that we've been involved
in the same Twitter thread over the last few days.
And I'll just leave it at that.
Yeah, and things that C++17 gave us,
which are optional and variant for...
std::variant is a great helper for state machines.
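As a small illustration of that pattern, a sketch of my own rather than code from the project: each state is its own type, the machine's current state is a std::variant over them, and std::visit dispatches the transition.

    #include <cstdio>
    #include <type_traits>
    #include <variant>

    // Each state is a plain type; the variant is the machine's entire storage.
    struct Disconnected {};
    struct Connecting  { int attempts = 0; };
    struct Connected   { int session_id = 0; };

    using State = std::variant<Disconnected, Connecting, Connected>;

    // One "tick" event; std::visit picks the branch for whichever state is active.
    State on_tick(const State& state) {
        return std::visit([](const auto& s) -> State {
            using T = std::decay_t<decltype(s)>;
            if constexpr (std::is_same_v<T, Disconnected>) {
                return Connecting{1};            // start connecting
            } else if constexpr (std::is_same_v<T, Connecting>) {
                return Connected{42};            // pretend the connection succeeded
            } else {
                return s;                        // already connected; stay there
            }
        }, state);
    }

    int main() {
        State s = Disconnected{};
        s = on_tick(s);
        s = on_tick(s);
        std::printf("connected: %s\n", std::holds_alternative<Connected>(s) ? "yes" : "no");
    }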
Yeah, a lot of people don't like variant.
They think it gives too much compile time
and in some cases too much
runtime overhead. But I mean, I've watched it just go away in constexpr code, of course, but
you're saying you had a successful deployment live? Yeah, we measured everything. So our CI
and test, we were measuring it every time. So everything that we add and every abstraction that we use,
we made sure that we know
what we pay for.
And we had a very big success
working with C++17.
Again, it's a Greenfield project.
A lot of training happened initially,
but using sanitizers,
using all the tools
from Clang Tidy
to everything else that you can imagine,
just to make sure that we are...
And it was a great experience.
We reached the goal of delivering the system, which is in the context of 5G and XR.
But we also created a very resilient C++ code.
Virtually almost none, like, there was no such thing as no bugs,
but think about systems that have memory bugs
and things like that.
I don't remember the last time we had memory,
like I had memory bugs
while using modern C++.
So I had this small exercise
because I was learning Rust on the side
and I said, okay, I'll take the system
and I'll try to rewrite it in Rust.
And we had everything done so nicely
in terms of memory ownership
that it was very easy for me
to kind of convey the same idea
and the same system in Rust.
So just to kind of conclude,
there is room for C++,
modern C++, in a highly demanding kind of system
that needs low latency.
And obviously, you need to measure everything
and consciously add features,
but it is definitely doable and readable
and very easy to maintain.
Obviously, we had unit tests all over the place
and again, fuzzers and static analyzers
and everything that you can think of.
The only thing that I'm kind of regretting
not using is Conan.
Back then, two years ago,
I wasn't really into understanding Conan all the way.
So we used submodules,
which is not really my favorite mechanism.
We couldn't use a monorepo back then, so we needed to separate
a few third parties into submodules.
I would definitely pick Conan if I need to. That's the only thing that I would
change if I needed to redo the whole thing.
Everything else was done very nicely and pretty much
by the book and great results.
Are you currently using Conan in any of your commercial work projects?
So I'm starting to add Conan, and this kind of brings me back to the private networks,
which I'm working right now.
So I am migrating all kinds of dependencies into Conan.
I said it last time, I pretty much believe in a package manager versus
getting things
built as part of the tree. It improves a lot
of compile time and
the tree is less cluttered
by using package manager. I didn't
try vcpkg, I know there's another
one, but Conan is what I'm
currently looking at and using.
And it's working well so far?
Yes, it's working well.
There's a lot of knobs, there's a lot of things that you need to be aware of.
But by the way, the Conan folks, they are very responsive.
They help a lot if you have any questions.
They are very responsive and they help.
And I'm really happy working with them if I have any.
If I have any issue, they basically resolve it very, very fast.
It's been a really long time since we've had anyone from Conan on to give us like a
State of the Union address here.
Yeah, there's this 2.0 that is coming up very soon, right? So that's something that I'm
definitely looking forward to.
Sounds like that'd be a good time to have them on again.
If you want to have a quick discussion on private networks, I can...
Yeah, go for it. There's a lot of interesting things happening today in the industry
related to private networks.
And not only private networks,
the way the industry is going in terms of how the cellular network is built
is changing, and 5G kind of enabled that.
And I'm sorry, I would need to mention Docker again at some point.
It's your fault this time.
Okay, here's the thing.
Traditionally, what we had,
let's say you would drive on the highway
and you see all these cell towers.
Usually, what you would have in this cell tower
is everything is kind of a self-contained network.
So you have, if you look at the base of
the cell towers, they have kind of small rooms where the equipment is there. And beside the
antenna, if you look at the antenna on the top of the tower, there's a lot of equipment that needs
to run the stack. And then you have Ethernet and other fibers and similar to the backhaul. Now, there are all kinds of use cases
where you deploy something
and you don't really need the entire compute power
sitting there at the base station
or the base of the cell tower.
Let's take highway,
that people commute back and forth from work. So if you look
at the usage peak, it might be, we need a lot of compute, a lot of data happening when people
commute to work. And then there's some kind of a downtime, not a lot of compute is needed, and then people are coming back for more.
So again, there is a peak.
Now, imagine a way where you can say,
okay, I don't really need all this compute power.
I don't need to deploy all this compute power
very close to the base station or to the antenna.
I can actually have some of the compute power
in some pooling away from it.
And then I can share the resources among multiple cells.
So the industry today is going towards
kind of splitting the stack into multiple components
that each of them can be hosted on a different location
along kind of the route of where the packets are traveling
from the antenna to the antenna.
And this is kind of related also to private network
because private network is kind of a smaller scale of a macro cell,
but it has the same idea of having this whole network stack
split into smaller components.
And believe it or not,
the way you actually start deploying all of that
is with virtualization, with Docker.
So there is a lot of work done
and there is a body named Open RAN
that is kind of in charge of specifying
all kinds of interfaces
between the various parts of the stack.
So before it was one monolithic, kind of a monolithic stack, monolithic maybe process,
maybe a couple of processes.
But what we are looking at right now, the trend is that we can split these processing
apps into multiple of them.
You can now have something more elastic, scalable.
You can host it in all kinds of places in the cloud
and have pooling.
So what does it mean?
A few things.
First of all, you need less equipment
at the base station area.
And you can use something in the cloud
in order to use the compute power over there.
Since you start having this split of the stack
into multiple smaller components,
you are letting other vendors
that weren't part of the whole stack deployment
be able to participate in such deployment. And we are seeing all kinds of
vendors that can come up with some part of the stack. Since now the interfaces are being
defined and it's not a monolithic one, you let other businesses kind of participate in this whole
stack kind of deployment. Since we're splitting the stack,
it allows us to have different slices of 5G networks.
One would be dealing with maybe
low-latency-required applications.
One would be kind of just a regular mobile broadband.
If you think about it,
we start looking at something more vertical
where you have different components
kind of chained together to create different slices of the network. That's something that
before we couldn't do. The new initiatives and new standards and new protocols that are
being developed allow us to achieve a more flexible network, as opposed to kind of a monolithic network,
kind of a traditional one.
So all of this sounds very interesting
to one of our listeners.
Are you all currently hiring?
That was kind of towards the end, yes.
Yeah, yeah, wait a minute.
Yes, Qualcomm is hiring.
I'm working with a lot of system engineers.
They develop from the math to the system design,
and it's pretty amazing the amount of innovation.
The system is really, really complex,
and it needs to conform to very strict latency requirements,
and all of that needs to happen.
And yeah, we do heavily use C++.
And there are also, I'm seeing from all kinds of vendors that are participating in this
domain, I see already C++20 showing up in open source libraries that are kind of implementing
all kinds of protocols.
There's all kinds of configuration protocols
that are being deployed,
and C++ 20 is starting to show up,
and I'm really excited to see all of that.
And I guess one more question for listeners
who might be interested in working for Qualcomm.
What's the stack like?
Is this all Linux?
Is it C++ 17 or 20?
Any other details you could share?
It is Linux, especially for the MAC layer and above.
Some of the firmware and RF would be C.
Traditionally, what I'm seeing in the industry
is that CentOS would be the platform that's being used.
A lot of people just said,
never mind, I'm not going to apply.
No, no.
We're actually looking at a very cutting-edge C++
when we build the application.
Now, don't be afraid of,
oh, CentOS has maybe an older tool chain.
It's easy since we are looking at containers.
CentOS just provides me kind of the Linux kernel
with all kinds of real-time patches that I need.
But I can easily take any Clang or GCC toolchain, deploy it on a container, and
I'm completely isolated from the file system that is running.
Again, if you have the necessary patch set for the kernel, for real-time kernel, you're
pretty much good with any operating system.
And again, the fact that we are actually looking at virtualization,
the entire industry is looking at virtualization.
The host is not really important.
The kernel is important, but the OS is not really important.
So I'm sorry, are you saying you actually are using the Linux RT patch set or whatever?
Yes, there are all kinds of, yeah, I'm not part of the Linux kernel group,
but there are patches that are needed in order to get all kinds of scheduler-related
real-time capabilities that, for example, CentOS provides.
This is why the industry,
players in the industry are looking at CentOS.
But again, don't mind about it.
The toolchain is something that we can have control over.
Interesting.
So, I mean, not to spend a bunch more time on Docker
or anything like that,
but if your host CentOS has got the Linux real-time kernel
and that's what you care about,
and then you're running a Docker container,
which isn't running its own kernel because it's using the host one, right?
Are you then running, I don't know, Ubuntu or Arch Linux or whatever?
Are you using a completely different distribution in your Docker container
just so that you get better tools and whatever?
Yes, yes, I can run anything.
And I would actually end up with hybrid types of containers.
Just it depends on the use case of what I want to run.
Each, I would say, process would run in its own container.
There are daemons that are running,
and they would run in a specific container
that I decide would be slim enough
and good enough for this specific use case.
So yeah, you'll end up with hybrid containers.
Interesting.
And I'm wondering for some of our listeners here
who are in the scientific computing world
and they feel like, oh, I'm just stuck,
I have to use GCC 5.2 because it's the last thing
that the sys administrators put on this CentOS cluster that I've got.
Maybe that gives them a little bit of a workaround
that they had never thought of before.
Yeah, people don't understand.
You're not locked into the toolchain on the host.
Every day I'm looking at, oh, I need something to try out.
I either build my own Dockerfile, but usually I don't even need to.
I reach out to Docker Hub and get something like Ubuntu 20.04 or something like that.
And that's it.
I have the latest tool chain and no one cares.
The host would not care.
It doesn't know about that.
So you're not locked to a specific tool chain just because you're running on a specific host.
Actually, don't do that.
Don't run.
If you can and you measure
and you don't have any issues working with Docker,
then do that.
Well, Kobi, it was great having you on the show again today.
Thank you so much for telling us about the 5G tech.
It was certainly very interesting.
Thanks.
Thanks for having me again.
Good day.
Thanks so much for listening in as we chat about C++.
We'd love to hear what you think of the podcast.
Please let us know if we're discussing the stuff you're interested in,
or if you have a suggestion for a topic, we'd love to hear about that too.
You can email all your thoughts to feedback at cppcast.com.
We'd also appreciate if you can like CppCast on Facebook and follow
CppCast on Twitter. You can also follow me at Rob W. Irving and Jason at Lefticus on Twitter.
We'd also like to thank all our patrons who help support the show through Patreon.
If you'd like to support us on Patreon, you can do so at patreon.com slash cppcast.
And of course, you can find all that info and the show notes on the podcast website at cppcast.com. Theme music for this episode is provided by podcastthemes.com.