Embedded - 296: Train Me Later
Episode Date: July 25, 2019

Shruthi Jagannathan spoke with us about recycling, machine learning, and the Jetson Nano (@NVIDIAEmbedded). More about the Green Machine, the computer vision, machine learning, augmented reality way to sort your lunch leavings. The code is available. The system was on a Jetson TX2 developer kit and Shruthi has been moving it to the physically smaller and only $99 Jetson Nano developer kit. Shruthi has been getting into AI with the Jetson Two Days to a Demo as well as NVIDIA's free Getting Started with AI on the Jetson Nano online course. For more information about FIRST Robotics Competition (FRC), we talked about it with Derek Fronek on Embedded 257: Small Parts Flew Everywhere.
Transcript
Welcome to Embedded. I am Elecia White. I am here with Christopher White.
Today we're going to talk about machine learning and composting and the Jetson Nano.
This will be an odd combo. And our guest is Shruthi Jagannathan.
Hi, Shruthi. Welcome.
Hi, thank you for having me. Could you introduce yourself as though we met at a robotics competition?
Okay, so my name is Shruthi. I'm going to be a freshman at the University of Illinois Urbana-Champaign in the fall. So I've been interning at NVIDIA the past two summers,
and I got involved because I did robotics at my high school.
I worked on a lot of the software for a team,
so that's how I found out about NVIDIA and worked here.
Okay, so I just want to make sure that everybody got that you just graduated from high school.
Yes, I just graduated in June.
And you did what kind of robotics?
So I did a program called FIRST Robotics.
I've actually done multiple levels of it.
So freshman and sophomore year, I did FIRST Tech Challenge,
which is a small robot.
It's about like a foot and a half by a foot and a half.
And then in junior and senior year, I worked in FRC,
which is FIRST Robotics Competition.
Those are much bigger robots.
So those are really cool to work with.
But those are fixed platforms.
They don't have anything to do with NVIDIA, right?
No, actually.
So our team and a lot of other FRC teams actually use the NVIDIA Jetson. So you can use it for anything from OpenCV.
So that's like computer vision to also just running neural networks on there,
which is what we did one year. And how did you fare in the competition?
Our team was kind of in the middle of the pack,
but it was really cool to get to talk to a lot of the teams,
which is what I think I enjoyed most.
You learn a lot from the other teams and what they do
and their approaches to solving the same competition problem.
I think we've talked to a couple other first competitors in the past,
and I don't remember neural networks and machine learning
coming up much. Is that becoming more common in the competition or was that something kind of new
that your team did? I think it's becoming more common now that a lot of teams have access to
the Jetson, which I think really helped with like the internship here because there was 12 of us.
And so we were all from different teams. So we took back what we learned about the Jetson to our teams and got to use it.
I know my first year in FRC, that was right after the first summer we interned here.
So we used everything we learned, and we created this neural network that would detect some of the game elements.
And so it could have an auto drive for the robot to pick up one of the game elements in case our robot driver couldn't see it. Which came first, your involvement with NVIDIA or FRC? It was NVIDIA and FRC kind of
at the same time, actually. I would do NVIDIA during the day and I would go back to school
in the evening, work with our FRC team for summer training. Okay, so you were an intern at NVIDIA working all day at NVIDIA and then doing robotics
at night until you went back to school and then robotics was part of school. And then, yeah,
robotics is just kind of a usual thing at school. I spend most of my time in that classroom.
And you are headed off to the University of Illinois. What made you go there?
They have a combined math and computer science major, so I thought it was really cool that I could kind of combine it. I can also minor or major in something else without sticking to just math and CS, because I really wanted to be able to do both.
That's what I did. Good choice. Okay, now for the most irritating question: what do you want to be when you grow up?
This is so hard every time someone asks me. It's a terrible question.
I know I want to do something like software related and that's kind of it. Like I don't
have like a dream company that some people have. I'm just kind of open to it all. So I guess that's
nice because then I can end up where I end up going. Yeah. I mean, you're going to college. Now you're going to start
figuring out a lot more detail about what's out there. It's good to have an open mind at this
point. Yeah. Okay. So the green machine, this is not part of FIRST Robotics. This is something else.
Yeah, this is separate. This is a project I worked on last summer at NVIDIA.
And what does it do and how does it work?
So it's, I think it's really cool personally, but I'm just saying that because I worked on it.
So what we did is we used the Jetson TX2 on a cart, like, you know, those carts that people use to just transport things. We added a pole to it and put a Jetson TX2, a camera, and a projector on top of it, and turned it into the Green Machine. What it does is it runs a neural network that we trained and tested ourselves as interns. It detects different kinds of waste and classifies it as either trash, reusable, recyclable, or compostable.
And then it color codes these things on the cart for people to see. So it makes it really easy for
people to visualize what goes where. And it saves a lot of the problem of people just throwing
everything into the trash because it's easier than trying to figure out what you could recycle
or compost. So this is, it sits at the end or on the way out of a cafeteria and you put your tray down
and it tells you, oh, this is compostable.
This don't recycle your flatware, put it in the compost bin.
And this is a bottle.
So this is definitely recyclable.
That's what I think.
That's the same kind of thing.
Yeah.
And it has a screen. So it takes pictures and then puts on the screen,
color-coded, what goes where.
So there's a version of it that you could run on a screen if you don't have a projector. So
that's what we originally tested on because we didn't have our full setup yet. So you can run
it on a computer and you just show the camera whatever's on your tray and it'll pop up on a computer screen.
But the final version of it is like it's like a life size demo almost.
So you can look directly at your tray.
You'll see the boxes and the colors projected onto your items.
So if I had like a bottle on there that was recyclable, I would literally see a blue box around that bottle.
This is like augmented reality. Yeah, almost. That was kind of what we wanted to go for.
Okay. And it identifies things with machine learning? I mean, did you take like a billion
pictures of trash and hand code them? Not a billion, but I mean, it was like over 5,000. It felt like that at times,
honestly. We would sit there and draw boxes around each thing and label each thing every day for
hours. And we would get so tired, but honestly, it was worth it because that's one of the most
precise neural networks I think I've ever made. Why couldn't you just use pictures from the internet from one of the data stores?
So we kind of looked into that, but we didn't really find something that we thought would work for us. And we also thought, since we were working in this cafeteria, we could find a lot of things that you might not be able to find elsewhere. Some places have utensils that might just be reusable, but here there's like compostable ones and there's reusable ones. So there's some things where you have to be able to tell the difference. Yeah, they're both utensils, but they're different kinds of utensils. So we thought that problem would best be fixed if we could take the pictures ourselves and label them ourselves.
And does it work?
I mean, does it really work or does it just kind of work?
It actually works. Like we actually had it in the cafeteria for people to use by the end of our internship.
And that was last year?
Last summer, yeah.
Are you working on it this summer?
No, but I did work on it a little bit trying to transfer it over to the
Jetson Nano. So it's not like a full-on internship like last year, but I am trying to get it ported
to the Jetson Nano so that it's like a more compact version than trying to stick a TX2 on this cart.
And which parts did you particularly work on? I mean, maybe actually before I ask that, I should ask for kind of a block diagram of the system. Okay. Input comes from a camera.
Yeah. So there's a camera that looks at what you have on your tray and then it sends that to our
TX2, where we run it through our neural network. It classifies each thing. So what we did is we had multiple classes: we had utensils, we had plates and bowls, we had chip bags, wrappers, stuff like that. So the neural network classifies it into those, then we send it into our script that we wrote, which classifies it from these, I think we had nine or ten classes,
and then it separates them into recycle, compost, trash, or reusable.
So that's, like, the final step,
and then it would draw those color-coded boxes directly onto the tray.
Through the projector?
Through the projector, yeah.
Is the projector HDMI? Is it basically just a screen out?
So it was connected to the Jetson as well.
And like, there's obviously a power source for it. So it has to be plugged into a wall somewhere.
And we just use our like calibration script to align the projector to exactly what the camera
sees. That way, when it draws the boxes, it's actually aligned. And you mentioned OpenCV earlier, so that makes a lot of sense.
What else are you using as far as libraries go?
So we use TensorFlow to train our networks.
The camera we used was a CSI camera.
That was so we could reduce latency because we originally tried
like a regular webcam, but it was just too slow. And we really wanted something that would work
on the spot. So people are not just like standing there waiting for it to draw the boxes. Those were
the two biggest things. Obviously we had the Jetson TX2 as well. And you said you determined that it's utensils and then you determine whether it's
recyclable or one of your categories. Yes. Why do you do the intermediate step?
So that kind of helps if the things that we're trying to find ever change. It also helps for
more accuracy of the network. So for example, there's lots of things that you could consider compostable.
Like there were compostable bowls, there were compostable forks and spoons.
And when you try to classify them all as one class, it becomes really hard to get an accurate network because there is no certain thing that it can kind of latch on.
There's no shape or color or anything that it can use to classify these as separate things.
So then it becomes like 5,000, 6,000 images won't be enough at all, which is why we thought it might
be better to classify them as their own kind of thing first, like separate utensils versus bowls.
But then we would use our own script to combine them later to call them all compostable.
And TensorFlow, did you use Keras as well or just TensorFlow?
We just used TensorFlow.
They had, we used SSD MobileNet.
We trained off of their base network because they had a lot of data in there already,
which kind of helped speed our process along as well instead of training from scratch.
Yeah, that makes sense.
But you're identifying multiple things in the picture.
That's a harder problem than just identifying what the whole picture is.
How do you do that?
So what TensorFlow kind of made it easy to do is we had,
it let us do multi-class detection.
So you had an option to put in as many classes as you want.
And so we would train for all
these nine or 10 classes that we had. So TensorFlow really gave us a good starting point for already
having multi-class detection. But the harder part became when we started adding images from all of
these classes. And then it would start to get kind of confused. Sometimes it would see white
and be confused,
like, is this a bowl or a fork? And then that's when we started taking notes about which classes it was confused on. And so we would take a lot more pictures of those things and add
it into our network. But it's simultaneous multi-class detection, or do you only show
it parts of the frame at a time? It's simultaneous. It sees the entire picture and detects whatever's in there.
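For readers who haven't used the TensorFlow Object Detection API: one forward pass returns boxes, scores, and class IDs for everything in the frame at once. Here is a minimal sketch of that kind of inference with a frozen SSD MobileNet graph; the tensor names follow the API's usual export conventions and the file names are placeholders, not details from the Green Machine repo.

```python
import numpy as np
import tensorflow as tf   # TensorFlow 1.x style, matching the 2018-era tooling
import cv2

# Hypothetical path to an exported frozen detection graph.
GRAPH_PATH = "frozen_inference_graph.pb"

graph = tf.Graph()
with graph.as_default():
    graph_def = tf.GraphDef()
    with tf.gfile.GFile(GRAPH_PATH, "rb") as f:
        graph_def.ParseFromString(f.read())
    tf.import_graph_def(graph_def, name="")

with tf.Session(graph=graph) as sess:
    frame = cv2.imread("tray.jpg")                      # any BGR photo of a tray
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    # One forward pass over the whole image; every detected object comes back at once.
    boxes, scores, classes = sess.run(
        ["detection_boxes:0", "detection_scores:0", "detection_classes:0"],
        feed_dict={"image_tensor:0": np.expand_dims(rgb, axis=0)},
    )
    for box, score, cls in zip(boxes[0], scores[0], classes[0]):
        if score > 0.5:
            # box is [ymin, xmin, ymax, xmax], normalized to 0..1
            print(int(cls), float(score), box)
```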
And TensorFlow has tools for that?
So we used TensorRT as well, and I think that kind of helped it.
Yeah, that's NVIDIA's underlayer that helps.
Yeah, that really helped with the multi-class detection afterwards.
So TensorFlow helped with the training, and TensorRT helped with the actual testing.
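The TensorRT piece is commonly used through TensorFlow's TF-TRT integration, which rewrites a frozen graph so the supported chunks run on TensorRT. A hedged sketch using the TensorFlow 1.x contrib API; the output names follow the Object Detection API's conventions and are assumptions, not the project's exact settings.

```python
import tensorflow as tf
import tensorflow.contrib.tensorrt as trt   # TF-TRT integration in TensorFlow 1.x

# Load the exported frozen graph (same placeholder file name as above).
frozen_graph = tf.GraphDef()
with tf.gfile.GFile("frozen_inference_graph.pb", "rb") as f:
    frozen_graph.ParseFromString(f.read())

# Ask TF-TRT to replace supported subgraphs with TensorRT engines.
trt_graph = trt.create_inference_graph(
    input_graph_def=frozen_graph,
    outputs=["detection_boxes", "detection_scores",
             "detection_classes", "num_detections"],
    max_batch_size=1,
    max_workspace_size_bytes=1 << 26,   # scratch memory TensorRT may use
    precision_mode="FP16",              # half precision suits the Jetson GPU
)
# The optimized GraphDef is then imported and run exactly like the original one.
```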
So what part of the project did you think would be challenging but turned out to be straightforward?
I really thought the multi-class detection would be the hardest because the previous year I was trying to work on some multi-class detection.
It was only two classes, but it took forever to get it working.
So I thought that would be the worst.
But I think TensorFlow really helped me in that because I worked a lot on the network in my team.
And so TensorFlow made it easier to get it detecting those multiple classes.
Of course, it was kind of frustrating when it would detect two random
things as the same thing, but it was a process and adding a lot more images really helped make
it better. Did you use a convolutional neural network? Those are usually what you use with
computer vision. Yeah, that's what we used. And then other than getting the right data, was there anything that took longer than you
expected? I think that was the biggest thing. Another thing that took a little bit longer was
our calibration. So I think I mentioned it earlier: we had to have the projector looking at the same thing the camera was looking at. So we had to create our own calibration script, which one of our teammates did, which I thought was pretty cool. He added QR codes to the four corners of the screen that the camera sees. The projector looks and scans for those QR codes so it can set up a border, and it also helped us so we could draw out a border for people to place their tray in directly. So it not only helped the projector, it also kind of guides people where to place their things, so we can actually tell them what we're looking at.
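One way to implement that kind of camera-to-projector alignment is a perspective transform computed from the four markers. The sketch below assumes the markers sit at the corners of the projector's output, and find_marker_centers() is a hypothetical stand-in for the QR detection step (OpenCV's cv2.QRCodeDetector could fill that role). The Green Machine's actual calibration script may differ.

```python
import numpy as np
import cv2

def camera_to_projector_homography(camera_frame, proj_w=1280, proj_h=720):
    """Estimate the camera-to-projector mapping from four projected markers.

    find_marker_centers() is a hypothetical helper that locates the four
    markers in the camera image and returns their centers in the order
    top-left, top-right, bottom-right, bottom-left.
    """
    cam_pts = np.float32(find_marker_centers(camera_frame))
    proj_pts = np.float32([[0, 0], [proj_w, 0],
                           [proj_w, proj_h], [0, proj_h]])
    # 3x3 perspective transform that warps camera pixels into projector pixels.
    return cv2.getPerspectiveTransform(cam_pts, proj_pts)

# Later, each detection box found in the camera frame is warped before drawing:
#   corners = np.float32(box_corners).reshape(-1, 1, 2)
#   projected = cv2.perspectiveTransform(corners, H)
```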
A lot of people find the whole machine learning thing a little difficult to approach. I know I have a few times. How did you learn all this?
I think it was mostly just jumping into the project and doing a lot of research every time I would find a problem or we'd run into an error. So just doing the research along the way, I think that was for me the biggest thing. And I feel like that could really help a lot of people as well. It's kind of hard to get started, but once you do, you just have to keep trying.
So do you worry about gradient descent and the integrability of the terms you're using, or do you just throw things at it and if it works, it works?
So you're talking about our green machine network?
Yeah.
So that's kind of covered by TensorFlow when we're using it, in a way, so we didn't really have to worry about it when we were training our network. Yeah, there's a lot of difficulty with really understanding how it
works as if you were going to build it versus just using the tools out there. And using the
tools seems to have worked pretty well for you. Yeah, I think for us that really helped because
I think a lot of it that goes into it is like, there's a lot of math
problems that go into it. And that's kind of hard to understand at a high school level.
So it's nice that we have so many resources that we're able to use.
So have you taken calculus?
Yeah, I took calculus in high school.
Oh, I mean, I would assume so. But I, you know, my calculus went to derivatives. So
I was thoroughly confused by
the giant S symbol. Oh yeah. Yeah. It took a while to get. There was a lot of
Khan Academy videos and stuff I watched on the side. So a hands-on approach, um, is you did this
through the Jetson TX2 and working with NVIDIA. Yeah. That was your method to a hands-on
approach. Yeah. That was my like, just getting into that project and trying to figure it out,
no matter where I got stuck. That really worked for me. And did you do the two days to a demo
or was there something else that got you started? I did Two Days to a Demo. That was like the first thing all of us interns did in the first week. We didn't do anything until after Two Days to a Demo. So I think that was like a good head start on how to
start working with the TX2 as well as the Nano, I guess, because you can use it for both.
So for people who don't know, will you describe the two days to a demo and what you do?
Yeah. So it's kind of like
a guide to deep learning, I would say. It gets you started all the way from first just kind of
getting your Jetson set up, then it teaches you how to inference using like the previously made
scripts. And it leads you to even trying to create your own network and writing that on the Jetson.
So it takes you through, like, a step-by-step process.
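For a sense of where Two Days to a Demo (now part of NVIDIA's "Hello AI World" material) ends up, the finished Python program is only a few lines. This follows the jetson-inference Python bindings roughly as they existed in 2019, so treat the exact calls and the image file name as approximations rather than a definitive recipe.

```python
import jetson.inference
import jetson.utils

# Load one of the pretrained ImageNet classifiers bundled with jetson-inference.
net = jetson.inference.imageNet("googlenet")

# Load a test image into GPU-accessible memory and classify it.
img, width, height = jetson.utils.loadImageRGBA("frog_puppet.jpg")
class_idx, confidence = net.Classify(img, width, height)

print("recognized as '{}' ({:.1f}% confidence)".format(
    net.GetClassDesc(class_idx), confidence * 100))
```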
One of the things I liked when I first got my Jetson was running their MobileNet and having it recognize things. There was just something magic about putting things in front of the camera and watching it do very well or very badly.
Yeah, I think that was really fun too. I just kind of
would hold up whatever I find at my desk and like see what it says. I thought that was really cool.
I found it hilarious that one of my puppets, which is a frog, a tree frog,
it actually got tree frog and it was really great. So going back to EnviroNet, is it in the cafeteria now or is it a prototype that's been used in the lab and does it reside in the cafeteria?
So we're still like working on getting it into the cafeteria, but we have like tested it in there.
And who's going to work on it next?
Are there new interns?
So it's still kind of us. So the project was in a completed stage.
But what we're working on now is kind of getting it ported to the Jetson Nano to make it easier to use almost.
Because, you know, the Jetson Nano is much smaller than the TX2.
It makes it more portable and also kind of safer than the setup we have right now.
Were you using the big dev kit for the TX2?
Yeah, we were using the TX2 dev kit on there.
Okay, so tell me about the Nano.
I don't have one of those.
So the Nano is, I like to think of it,
it's like a really small, powerful computer.
And the cool thing about it,
it lets you run multiple neural networks in parallel.
So it's really useful for things like image classification, object detection, stuff like that.
And for me, the biggest thing, it was really easy to set up.
So I was trying it out this summer as well.
It's like it comes in a really easy to use package.
It's like one thing and the whole thing starts running.
So it kind of makes it also easier to use for
people who've never done AI before. It's really approachable, I think. And also the fact that it
is small kind of helps because you can use it on a lot more projects. Okay, so there's a dev kit
and there's the module. How big is the dev kit? I mean, it's not as big as the whole
TX2.
Yeah, it's not as big as the whole TX2. The exact size, it's 79 millimeters by 100 millimeters. In inches, it's about 30 inches by 4 inches. 3 inches, sorry, not 30. My bad.
It's a credit card size.
I don't know why I said 30.
Yeah, that's pretty small.
Is that the module or is that the whole dev kit?
That's the whole dev kit.
Wow.
Yeah, it's really small.
I mean, it's called the Nano.
It can't be big.
I know, but is it comparable, with respect to computational power, to the Jetson TX2?
I think it's more of an entry-level product.
So it's for people who've never done AI before, it's nice to get started with. But computational power-wise, the TX2 would probably be after it, and then the Xavier obviously has the most computational power.
I don't know if this is
right, and I don't know if you're the right person to ask,
but I think of the TX2 as
like four Raspberry Pis
squashed together, Raspberry Pi
3s, with a better
GPU. And then I think
Way better GPU!
And I think of the Xavier as kind of two TX2s squashed together.
And so it sounds like the Nano is maybe a half a TX2,
a three quarters of a TX2.
Yeah, that sounds about right.
Okay.
And so as you're trying to port it, are you running out of space?
Not at all, actually. So we have like an SD card in there that also increases its storage. It's
also what I use to get Jetpack on the Nano in the first place. So storage wasn't really an issue at
all when I was trying to port it over. What about computational power?
No, honestly, it worked. To me, it felt like it was working the same.
It was really easy to use.
Okay, why aren't you done yet then?
Wow.
That's a hard question.
Actually, I think like, so we're just trying to make it even better, I think.
So that's kind of it.
So there was one change, though, porting it from the TX2 to the Nano. It's just the way
that we use the camera because there's a new thing that NVIDIA has released called JetCam,
which is more compatible with the Nano, I think, than what we were using before. So that's kind of
the only thing that we really have to change, which we're working on to get that fully up and
running so we can actually put this in the cafeteria.
And is that hardware or software to work with the camera?
It's software to work with the camera.
What does it do?
So what it helps us do is kind of interact with the camera. So that's what we use to get the image that the camera is looking at.
And we take the image from there and port it into our script.
So that's where we run the actual inference on the image.
So it kind of replaces OpenCV?
In a way, yes.
I think you could say that, yeah, it replaces OpenCV.
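JetCam is a small open-source Python package from NVIDIA (NVIDIA-AI-IOT/jetcam on GitHub) that wraps the Jetson camera plumbing. A sketch of the kind of usage she describes, with the constructor arguments treated as assumptions rather than the Green Machine's actual settings:

```python
from jetcam.csi_camera import CSICamera

# Open the Nano's CSI camera; width and height are the frames handed back to Python.
camera = CSICamera(width=1280, height=720, capture_fps=30)

frame = camera.read()   # a numpy BGR image, ready to feed the inference script
print(frame.shape)
```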
I work with that on the TX2, and I use GStreamer because OpenCV is too slow
and GStreamer is this massive,
you can do anything with it
as long as you know the right magical incantation
and it really is a magical incantation
that is hard to figure out.
Have you ever seen that?
Have you ever seen GStreamer
or has it mostly been OpenCV for you?
I've seen GStreamer.
I mean, I personally haven't like used it because
it's come kind of in the package for me, but yeah, I've definitely used it for inference and JetCam
uses that as well. Okay. So I shouldn't run out and get JetCam right away if my GStreamer's
already working. But I think JetCam makes it easy to like, you know, interact with the camera.
That's the, that's my favorite thing about it.
You don't have to go into the camera and try to figure it out.
JetCam has methods already set up for you.
That's good.
Yeah.
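For comparison, the "magical incantation" Christopher mentions usually looks something like this for a CSI camera on a Jetson. The width, height, and frame rate are placeholders, not the Green Machine's settings, and OpenCV has to be built with GStreamer support (the JetPack builds are).

```python
import cv2

# A typical nvarguscamerasrc pipeline for a Jetson CSI camera: capture through the
# ISP, convert out of NVMM memory, and hand BGR frames to OpenCV's appsink.
pipeline = (
    "nvarguscamerasrc ! "
    "video/x-raw(memory:NVMM), width=1280, height=720, framerate=30/1 ! "
    "nvvidconv ! video/x-raw, format=BGRx ! "
    "videoconvert ! video/x-raw, format=BGR ! "
    "appsink"
)

cap = cv2.VideoCapture(pipeline, cv2.CAP_GSTREAMER)
ok, frame = cap.read()   # frame is a regular BGR numpy array when ok is True
```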
Did you have any particular challenges moving the green machine from TX2 to Nano?
It sounds like no, but...
It was really like changing the camera.
There was no other challenge with it at
all. I'd say it was very easy to switch it over.
The Green Machine, did you come up with that idea as a group, or was it something NVIDIA had in mind when you got there?
So the original idea that our group started with, we just wanted a way for people to figure out how to
throw away their trash properly. And then it
transformed into this idea where we started with the screen and then we were like, oh,
why not make it like AR? So we talked to people at NVIDIA who helped us make that a reality.
And of course, changing it to AR kind of came with its own challenges, but it was kind of cool
to start with that small idea and then we made it
our own. And did you learn anything about recycling and waste systems as you went through this?
I mean, I definitely learned what's compostable and recyclable. I'm like, before I was kind of
like reading the labels and I was like iffy about where things go. But now I know like, oh, this is a compostable like box and this is a recyclable bottle.
So, you know, it helps me throw away my waste properly too.
Do you have any intuition what features it's, maybe not, but what features the network is looking for?
For like, I mean, did you have to distinguish between this is a compostable fork and this is not a compostable fork?
Yeah, she said she did.
Yeah, I thought you said that.
So do you have any sense of what it was detecting to make that different distinction?
I think part of it was like the shape because the shape of the compostable fork versus the reusable fork was actually kind of different.
At least in our training data. And I'm sure color kind of helped a bit, because the reusable ones were silver and the compostable ones were more white. And we trained those two classes separately, so we had a class for compostable utensils and we had a class for reusable utensils. Those two were definitely some of the hardest ones to train, because, you know, it's looking for geometry and forks are always going to be forks. So probably we had the most images of trying to help the network distinguish between the two.
Did you have any debugging tools to figure out
if it was choosing some feature that you needed to talk it out of?
I think a lot of it was just, we kept on testing it in different environments. And as soon as we
would see that something was getting confused, like we would make a note of it and go back to
our images, see if we had mislabeled something sometimes. Or if it wasn't that, we would just add a bunch more images of each of those classes so that it would help the confusion a bit.
So I think you mentioned you used 5,000 or so training images.
Yeah.
That seems really low. I haven't done this as much as Elecia, even at all. I just learned by osmosis.
But I'm used to hearing about hundreds of thousands or millions.
That makes me happy to hear this works with so few.
Is there a reason that that was doable with so few images?
Is it just because you're looking for very specific things?
If you ask me, 5,000 is a lot, because I labeled them. For us, we also used augmented images. In TensorFlow we had an option to add that, and we did a bunch of augmentation ourselves. So of course we took 5,000 pictures, but the network would train with not only those 5,000 but the augmented images as well. Especially when we added the projector and the colored boxes of the projector, it would get super confused, because now all of a sudden you have this bright light added to your items and it's like, oh, am I really looking at a bowl or is this something else? So augmented images
really, really helped our training. And so augmented images are things like when you have a picture, you flip it left, right and up, down.
And so now you have four images instead of one.
Yeah, exactly.
What other augmentations did you do?
So we wrote a script to add different backgrounds and different colors near the objects. So that kind of helped with the whole colored-boxes-from-the-projector issue. We definitely did the rotating and the flipping as well.
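A home-grown augmentation pass like the one the interns describe can be just a few lines of OpenCV and NumPy. This sketch is an illustration of the idea (a flip with box bookkeeping, plus a crude colored overlay to mimic the projector light), not the team's actual script.

```python
import cv2
import numpy as np

def augment(image, boxes):
    """Yield augmented copies of an image and its [x1, y1, x2, y2] boxes."""
    h, w = image.shape[:2]

    # Horizontal flip: mirror the image and swap the x-coordinates of each box.
    flipped = cv2.flip(image, 1)
    flipped_boxes = [[w - x2, y1, w - x1, y2] for x1, y1, x2, y2 in boxes]
    yield flipped, flipped_boxes

    # Fake "projector glare": blend a translucent colored layer over the scene
    # so the network stops treating the projected colored boxes as a feature.
    overlay = np.zeros_like(image)
    overlay[:] = (0, 255, 0)                       # green, as the projector draws
    tinted = cv2.addWeighted(image, 0.8, overlay, 0.2, 0)
    yield tinted, boxes                            # geometry is unchanged
```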
And you mentioned, so you have the projector and it's pretty bright. So initially I would walk up to the machine, I would put my tray down, and it would be white light, right?
So there's no light coming out of the projector at all when you put the tray down.
Okay. So then it takes some pictures and it runs those through its neural net and it comes up with
an answer. And then I have a blue box here and a green box there. Does it then try to take those images and say whatever is in a green box is now in this class, which reinforces the green-box-ness?
Oh, so, yeah, originally that was part of the problem, like the colored boxes. But also it wasn't running at exactly the same time. Like, it was a live-action thing, but it was running at 15 frames per second, so even with that little bit of change, the colored boxes didn't affect it as much.
Okay, so you tried to make it so that the color didn't lead it. It's kind of like leading the witness, in legal terms.
Yeah, we tried to make sure the color wouldn't affect it.
See, I would have done something crazy and probably ill-advised, like try to interstrobe the camera.
I was just thinking.
And, like, the projector. So it was never seeing it when it was lit, but it looked like it was always lit to the human.
That wouldn't have worked.
Either that or augment the images so that you get one that looks like it's got green on top of it all the time
and one that looks blue all the time so that it wouldn't learn the colors.
But then you can't learn any colors.
You'd have to use grayscale all the time.
Did you use color or grayscale?
We had colored images in our data set. They were all colored.
Did you ever consider using grayscale?
We thought about it, but really for us, I think color was important
because it was kind of distinguishing.
I brought up earlier with the utensils.
It did play a factor in that.
So for us, color was pretty important, so that's why we left it.
And recycling rules change.
They've changed a lot in the last few years with respect to plastics.
Would you have to start over with training images, or can you do something with the output or your intermediate steps?
So the nice thing about the network is, because the network doesn't actually classify whether it's recycle or waste or anything, all we'd have to do is go back and change our final script that categorizes each of the classes. So for example, if that fork became recyclable, all we'd have to do is go into our script and change that so that it would draw a blue box instead of whatever other color.
And the script, is it Python?
Yeah, it's a Python script.
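That final mapping step is essentially a lookup table from detector classes to bins, which is why a recycling rule change only touches one file. A minimal sketch of what such a script might contain; the class names and colors here are illustrative, not taken from the NVIDIA Green Machine repo.

```python
# Map the detector's fine-grained classes onto bins, and each bin onto the
# color the projector draws. Changing a recycling rule means editing one line.
CLASS_TO_BIN = {
    "compostable_utensil": "compost",
    "reusable_utensil":    "reusable",
    "plate":               "compost",
    "bowl":                "compost",
    "chip_bag":            "trash",
    "wrapper":             "trash",
    "bottle":              "recycle",
}

BIN_TO_COLOR = {            # BGR colors for OpenCV drawing
    "compost":  (0, 255, 0),
    "recycle":  (255, 0, 0),
    "trash":    (0, 0, 255),
    "reusable": (0, 255, 255),
}

def color_for(class_name):
    """Return the projector color for a detected object, or None if unknown
    (the Green Machine simply skips drawing anything it doesn't recognize)."""
    bin_name = CLASS_TO_BIN.get(class_name)
    return BIN_TO_COLOR.get(bin_name)
```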
Did you ever consider monitoring what receptacle the person put the trash in so that you could set off an alarm if they did the wrong thing?
Sometimes when we would, like,
we would stand like in the cafeteria and ask people to take pictures.
We would see like people dumping everything in trash and we'd be like,
Oh no.
Like, but, but you know,
we never thought about setting off alarms or anything.
Maybe blind them with a projector.
Oh my God.
Sorry.
Christopher's figuring out how to punish the rule breakers.
You could mark the rule breakers with the projector.
You could, you know.
Give them a little red hat.
Yeah, yeah.
And then everyone could point and laugh.
That sounds so mean.
I'm sorry.
Back to your neural nets and the 5,000 images. Did you, was this built upon the mobile net network already?
I mean, did you do the transfer learning thing or did you learn just from those 5,000 images?
So it was built off of that MobileNet.
It's called SSD MobileNet, and it was trained off of the COCO dataset. So it has a ton of images of a lot of real-world things. So of course, it had images
of things that we were also training off of. So that was nice because it helped our network have
somewhere to start. And so if I put my frog puppet under it, would it call it a tree frog?
And then would it tell me to compost it?
Probably not, because that's not one of the things it's trained off of.
What does it do when it gets to an object it's never seen before?
It just doesn't draw anything around it.
It won't recognize it, so it won't classify it.
It'll just be like, there's no color on it at all.
And do you, what does it do internally?
Does it have a log? Does it tell you, I didn't get this, you should take a picture and
train me later on what I should do about it? No, we never added anything like that to our
project. So it was mostly like us looking at it and seeing if it ever classified something wrong
or if it was mixing something up.
So it was just kind of our team working on it.
Okay, so back to the Nano.
You are trying to move the green machine to the Nano,
and it sounds like you're having a bit of feature creep trying to make it better.
When do you throw in the towel and say,
bye-bye, I'm off to college?
I leave in August, so I still have another month to get this working. But I mean,
I've been working on it all summer, so I've had the chance to get it running,
and I think it's doing pretty well so far. Are you going to have another demo?
I don't know, but that would be pretty cool.
And do you put the code online?
Do you make it open source or is it not?
Yeah, it's all on the GitHub.
So if you just like search up NVIDIA Green Machine, you'll find our entire GitHub project.
Do you put your models up as well?
Yeah, so we have just our final model, which is like the best one that we picked.
That's also on GitHub.
So if anyone wants to recreate our project, they have all the resources to do that.
Were you nervous putting it online?
I mean, everybody can see your code and criticize it.
I mean, for us, I feel like not really, because we were proud of the version of the project that we put up.
We had time to get it working.
And the cool thing was we had so many working networks.
We also got to pick which one we wanted to put up online for everyone to see.
So having that choice was nice.
We have working versions of it.
So it wasn't as scary as like it sounds.
Okay. I mean, that's really good. Has anyone forked the
repository so that maybe somebody else is building this or is it still just you?
I think a couple people actually have, like we've gotten a few emails about people, like, doing the project, asking questions.
So that's kind of cool, actually, to see, like, people are actually trying out this project.
And it's nice because it's, like, something we started and people are using it.
It's always a great feeling.
Do you think you will continue with things like this?
I mean, if NVIDIA says next summer, come back and redo it or come back and make it better.
Is that interesting to you or are you done with this and ready to try something else?
I mean, it would be fun because I really like working with AI.
It's just kind of cool for me, especially working with the networks. That's what I do a lot
of my work on because it's like you're teaching a computer to learn like from nothing to all of a
sudden just recognizing all these things. So if they called me back, I think I'd be pretty happy
to come back and work on that. As you were applying to colleges, did you talk about the green machine much?
I talked about, yeah, I actually did.
I brought it up in some of my essays.
So that was pretty cool.
I had a project I was proud of to talk about.
It's really nice when we talk to people graduating from college and wanting jobs.
Having a portfolio is always something that's pretty cool
because it means during an interview, you're talking about something you know
instead of being asked questions you don't know.
And so you've already got part of your portfolio.
Are you going to blow off school entirely and just go get a job?
I mean, you're in Silicon Valley.
Yeah, she sounds like the sort of person who would do that.
Probably not.
I'm not that much of a risk taker.
So, you know, still going to college.
Embedded, sued by Shruthi's parents.
All right.
I've already asked you what you want to be when you grow up.
And that's a terrible question. It's in my outline, but it's in my outline for everyone, not just you, although usually we don't ask it.
Even people who are theoretically grown up.
I am so not a grown up. Don't even. But how do you decide computers? Do you ever think, yeah, I would rather do art or I would rather write or...
I think that all the time.
Music for you.
Like if I'm being completely honest, like until high school, I wanted to be a psych major and
like become a doctor or something. And then I started doing robotics and I was like, wait,
this is really cool. I kind of want to do this now. And I think the NVIDIA internship was kind of cool because that was like I realized, oh, wow, this is actually a job that I could see myself doing a few years in the future.
So, like, yeah, I had other thoughts before, but like high school kind of solidified it for me.
Like, I really do want to do computer science.
If you think about it, you are a psych major.
I mean, you're just hand-tuning a little brain.
I guess.
If you think of it like that, like, you know, I'm already majoring in something.
So that's pretty cool.
I minored in cognitive psychology and theories of learning.
And it isn't directly transferable to neural nets.
But sometimes I'm like, yeah, okay, I kind of remember this
from psych. So you don't have to give up. I mean, you can do both.
Yeah. If I really want to, yeah.
Well, and you should always take humanities. It helps you connect to people.
Have you worked with other hardware? I mean, you mentioned first, but have you
worked with other microcontroller platforms? I've done a little bit of work with the Raspberry Pi,
but I think my biggest experiences come from FIRST and NVIDIA, working with the Jetson.
Do you think you'll continue to do that sort of thing in college? Do you know already if you're
going to be part of the robotics club or you're going to take an awesome class in computer vision when you get there?
I really want to.
I haven't decided yet.
Like, I also want to be able to try other things.
Like, I know I want to, like, try out for an a cappella group in college.
So stuff like that.
Like, maybe I will join their robotics team, but I'm just kind of leaving it open to me figuring it out when I get there.
I think that's awesome.
I think that's completely awesome because, yeah, so much changed going from high school to college that if I'd made any plans, they would have just been dust.
And there's like so many options too, which is really cool.
Well, and Urbana-Champaign, it will be nice for a few more months.
Yeah.
And then you'll study all the time because you can't go outside.
Exactly.
Are you really sure about going to Illinois?
I mean, this is not the time to talk.
What are you doing?
I'm trying to get her to go to Stanford.
It's too late.
I don't know, for me, weather was never a factor in my college thing. I was like, I will get used to the cold weather if that means I go to one of the programs I really like. So that's why, it was just about the program for me.
So I didn't do an internship until junior year. Junior of college? Junior of college, yeah. Summer, the summer between junior and senior year of college. I can't imagine what it would have been like doing it in high school. Do you have any advice or things you kind of took away from jumping straight into kind of working at a company?
I think the thing for me was, it's like a
really good experience because you learn a lot and it's like a chance to work in the industry
and there's not like a lot riding on it. Like if you mess up, no one's going to get really mad at
you because you're a high schooler. So like it gives you that room to figure out what you want and
figure out like how to solve this problem in the best way with like a little bit of room to mess up.
So it felt more relaxed, like it wasn't a high pressure thing. So I would say like if you're
in high school and you get into an internship, like just do it. And it's not the end of the
world if like your project isn't the best thing at the end. Like it's an opportunity to learn. How did you get into NVIDIA as an intern in high school? I mean,
there are people who would be very excited to be interns after college. How did you swing this?
So it was through the FIRST Robotics program. They work with NVIDIA pretty closely. So NVIDIA gave us an opportunity to interview, through the FIRST program, for the internship. And then through all those
interviews, they just selected a group of high schoolers to come in. That's pretty cool. Yeah.
All right. I know you have things to do and we have otters to see.
Do you have any thoughts you'd like to leave us with?
For anyone listening, I just kind of say, like, get involved,
like, do things you're excited about,
and don't be afraid to try something new.
That is always good advice.
Our guest has been Shruthi Jagannathan,
soon-to-be freshman at University of Illinois Urbana-Champaign and graduated senior of Cupertino High School.
Thanks, Shruthi.
Thank you for having me.
Thank you to Christopher for producing and co-hosting. Thank you to Tom and Jakey Poo. Really, you're going to make me say that? To Jakey Poo for questions.
And thank you to our Patreon supporters for Shruthi's mic.
And of course, thank you for listening.
We are having a party.
It is September 7th in Aptos, California.
Invitations or Eventbrite or whatever it's called goes out on August 1st.
You can always contact us at show at embedded.fm or hit the contact link on Embedded FM.
And now a quote to leave you with.
Memories are not recycled like atoms and particles in quantum physics.
They can be lost forever.
That actually comes from Lady Gaga.
I was kind of surprised.
Pretty cool, though. Embedded is an independently produced radio show that focuses on the many aspects of engineering.
It is a production of Logical Elegance, an embedded software consulting company in California.
If there are advertisements in the show, we did not put them there and do not receive money from them.
At this time, our sponsors are
Logical Elegance and listeners like you.