Embedded - 212: You Are in Seaworld
Episode Date: August 24, 2017

Kwabena Agyeman joined us to talk about making OpenMV (@OpenMVCam), an easy-to-use camera and control module with built-in machine vision functions, all interfaced via MicroPython. To learn more about computer vision, Kwabena suggested looking at PyImageSearch or reading the AprilTags code, as it is a good introduction to image manipulation and matrix operations.

Some other interesting links:
- Ferrari World, view from satellite
- Cloud Atlas (on Netflix)
- DIY Robotics from Chris Anderson: DIY Robocars
- Kwabena worked on the CMUcam (version 4)
- The Amp Hour had a good episode about MicroPython
- Elecia likes this introduction to linear algebra, matrix operations, and singular value decomposition (SVD)
- OpenMV on Hackaday.io and for sale at SparkFun
- The future of OpenMV might include Google's MobileNets
- Kwabena gave a talk about the OpenMV manufacturing difficulties at the Hackaday Supercon 2016 and he plans to be there for Supercon 2017 (Pasadena, November 11th and 12th)
Transcript
Welcome to Embedded.
I'm Elecia White.
My co-host is Christopher White.
As you know, I like robots and have some computer vision plans.
Also, as you probably know, the whole podcast is just a front for me to ask people impertinent
questions about their projects,
especially those projects that are similar to the ones I'm working on.
Please don't tell our guest, who is Kwabena Agyeman, because I am excited to have him on the show and wouldn't want to ruin the surprise. He's sitting right here, by the way.
Before we get started, in last week's show with Dennis Jackson on how to get into embedded software as a career, we mentioned my book, Making Embedded Systems.
One of the things I liked writing best was the interview questions at the end of each chapter, where I talk about what you look for in answers to embedded software interview questions.
And we didn't talk about that last week, but I wanted to mention it because it was so fun to do.
Hi, Kwabena.
Thanks for joining us on this side of the hill.
Thank you very much for having me.
Could you tell us a bit about yourself?
Well, I went to Carnegie Mellon University,
got a degree in electrical and computer engineering,
got into electronics back in ninth grade
with a Forrest Mims III kit from RadioShack.
I don't know, you can call me an engineer's engineer.
I love understanding things and getting into the technical details.
I started working on OpenMV, which is what I'm here today doing this radio interview for,
as a solution to being able to give a robot or any other embedded system vision
in an easy-to-use format.
And of course, we have many questions about that.
But before we ask them, we're going to do lightning round, where we want you to answer
short questions with short answers.
And if we are behaving ourselves, we won't ask you why and how and all of the questions that will come up
inevitably.
And it never works.
Mm-hmm.
All right.
Verilog or VHDL?
Verilog.
Favorite movie
or book
or any sort of fiction
that you encountered
for the first time
in the last 12 months?
Cloud Atlas.
Just watched that
last night on Netflix.
All right.
Amazingly, they had a good movie on there.
Favorite man-made space object?
Hmm. I would have to say Planet Labs
planets. Sorry, Planet Labs doves.
Alright, well that was kind of cheating. Little or big endian?
Little endian. Beach or forest?
Beach.
I grew up in Miami.
Excellent.
That was just a question to ask you which one we should direct you to when we're done here.
Okay.
Got it.
Preferred voltage?
Preferred voltage.
3.3 volts.
Favorite animal?
Favorite animal.
Barracuda.
That's not an animal, though.
Oh, sure it is.
Okay.
It's not a plant.
Yeah.
Well, it's a fish.
Technical tip everyone should know.
Hmm.
Don't reverse power and ground.
It's a surprising issue.
Every board I get, or have gotten historically, has had that switched.
Do you like to complete one project or start a dozen?
Complete one project.
You mentioned Planet Labs and the Doves, which image the Earth regularly. What's your favorite region of the planet to look at with that imagery?
You know, I haven't really had time to do that, but I saw the most crazy images over
the United Arab Emirates.
They've got a Ferrari world out there.
Ferrari world?
Yes, Ferrari world.
I like to believe it's in the shape of a Ferrari that you can see from space.
It's even better.
I'm going to have to look that up.
Yes, definitely look at it. It is some crazy stuff. Anyway.
Okay, so on to the topic of the show, OpenMV
and computer vision. You mentioned that OpenMV
is a controller
and makes computer vision simpler.
Could you give us more information?
Okay, so the basic idea behind the OpenMV cam
is to more or less give you an environment
where you can write the most minimal amount of code
to get the biggest result.
An example would be, would you like to build a fully-fledged line-following robot
with only 92 lines of code that you can copy and paste like an Arduino script?
The OpenMV Cam can kind of let you do stuff like that.
Pretty much, if we look a little bit back in time, you know, part of the reason why
the Arduino is so popular, well, starting to get kind of, you know, fighting with the Raspberry Pi
right now, but one really big thing that made it really nice to use was it had a very simple
editing environment where you focused on one file,
and you put all your code in that file,
and you could then get something working with just the code in that file,
and there wasn't you fighting with different tools or opening different files or so on and so forth.
And of course, this has its limitations as the project gets bigger,
but if your goal is just to get something really simple and really quick going, or if you're a student in a class, it's kind of the most optimal development solution
that's not dumbing it down to the point of like Scratch where you're dragging blocks around.
And so the OpenMV Cam tries to mimic that a little bit, but with Python instead of C,
and the ability to actually use computer vision algorithms.
In particular, we do stuff like
there's a library called Zbar
which will detect barcodes.
It can detect all these different
types of barcodes, 16 total in fact,
more barcodes than you thought there actually
existed in the world of barcodes.
And libraries like that are
a giant mess of code,
lots of things to understand,
and not super easy to interface to.
So we wrap all that up into a simple method called find_barcodes
that you can call on an image in Python.
You point the camera at a barcode and it then returns to you a string
that tells you what the barcode number is.
It's so easy you can't fail. We have similar things for like detecting QR
codes and detecting data matrices, which are a barcode
you probably haven't heard of, but it's on your mail.
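In practice that one call looks roughly like this; a minimal sketch using OpenMV's documented MicroPython API, where the sensor settings are illustrative rather than required:

```python
import sensor

sensor.reset()                          # initialize the camera
sensor.set_pixformat(sensor.GRAYSCALE)  # barcodes don't need color
sensor.set_framesize(sensor.VGA)        # more resolution helps 1D barcodes
sensor.skip_frames(time=2000)           # let auto-exposure settle

while True:
    img = sensor.snapshot()
    for code in img.find_barcodes():    # the whole ZBar pipeline hides in here
        print(code.payload())           # the decoded barcode string
```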
Anyway, but the OpenMV Cam was kind of built to wrap
up very, very complicated computer vision tasks into a
simple one line of code function call.
And then to be able to let you take that output
and do filtering of objects in Python
to basically get your robot to do something
or get some kind of mechanical system to do something.
But then really have a programming environment
where you're just working on one file
so you don't really have to go off and start a huge project
of lots of different compile options or so on and so forth.
You just have one nice Python file that contains your program
and using that you can build your application.
Sorry, that was a long-winded...
No, that was very good.
That's good.
So it's Python, but it is an embedded system, and it's more than a smart camera, it's a controller.
The heart of this is a Cortex-M4F?
No, we actually upgraded to an M7 now.
Oh, wow.
And so, it's really important to understand, the Cortex-M7, if you look at its performance numbers, it's really a Pentium II
processor. You know, a Pentium II processor, by the way, those are the same kind of CPUs that were built
into the original B-2 bombers, you know, those super ultra stealth-looking planes that carry nuclear
bombs and whatnot. You know, that's the same processor. So we've got, you know, equivalent
performance from back in the 90s, you know, whatever. But on this little platform,
it's actually about half as strong as the Raspberry Pi Zero
in terms of compute power.
But one thing to keep in mind is,
since its RAM is all on board the chip,
it never has to go off memory to access DRAM.
And the big thing to keep in mind here is that
it's got a 200 MHz plus bus that's 256
bits wide internally that accesses internal SRAM. And so that actually gives you about 50
gigabits a second or so of just raw performance to access memory, and this means you can actually
do vision algorithms at pretty high speeds.
Okay. Pentium II with super fast RAM
and you use that
to do vision
algorithms. And you mentioned
line following.
Yes. So
the one thing that's good to do
that I'm doing now with the system, I'm working with
Chris Anderson with the
DIY Robocars Challenge.
So basically we're building, right now at least, line following
robots, but they're based on go-karts, well not go-karts,
RC cars. And so these guys are moving at
about 1 to 2 miles an hour, so they're not going very fast right now.
And we want to get those things going up to around 5 miles
an hour or so.
Of course, these are all to scale.
This is a 1/16th scale car, so if you blew that up into how fast it was actually going,
it'd be driving faster than a person down the road.
But the idea is to race them really fast around tracks. And once everyone's mastered the ability to follow a line,
we want to start adding things like, okay, now there's intersections,
now there's other cars on the road. Can you avoid other people while following the line? Can you merge or other things
like that? Anyway, it's just in the beginning right now, but our system, the OpenMV Cam at least,
lets you easily build a line following robot. I've got an example I'm putting up online
that more or less is kind of, you know, an instructable. Copy and paste some code,
buy all these parts,
wire it up correctly, and bam,
you've got a line-following robot that can follow a line outdoors,
down the road, outside your house, whatever.
Anyway, you asked me about algorithms.
So the algorithm that we use to follow a line is something called linear regression.
It's not particularly challenging, but there are two different types of linear regression.
One is called regular least squares, and that's one a lot of people learn about.
If you took a statistics class, they probably had you do that by hand back in college.
And there's another one called robust linear regression. Well, it's really another class
of linear regression techniques. So least squares has a problem in that if there's any single pixel
as an outlier, it immediately draws the line through that outlier instead of all the other pixels that
look like a line. Whereas robust linear regression will do something that requires an n-squared type
of algorithm where it wants to find the slope between all pixels, and then it kind of finds the median of those slopes.
And this lets the system act kind of like how a human would think.
You know, if you give a human a picture of some line-like things in an image, the robot will basically draw a line through them just like a person would.
And so using that then, you can take the output of where that line is and then drive your motors to keep the robot centered on it.
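The median-of-slopes approach he describes is in the Theil-Sen family. As a toy illustration in plain Python (not OpenMV's actual implementation):

```python
from itertools import combinations
from statistics import median

def robust_slope(points):
    # Theil-Sen style estimate: slope between every pair of points (O(n^2)),
    # then take the median so a few outliers can't drag the line around.
    slopes = [(y2 - y1) / (x2 - x1)
              for (x1, y1), (x2, y2) in combinations(points, 2)
              if x2 != x1]
    return median(slopes)

# Mostly collinear pixels plus one wild outlier; the median shrugs it off.
pts = [(0, 0), (1, 1), (2, 2), (3, 3), (4, 40)]
print(robust_slope(pts))  # ~1.0, despite the (4, 40) outlier
```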
That's the same sort of thing self-driving cars do when they are looking for lane markers, right?
Right. So if you just have one line, this works. For a self-driving car, what they're going to want to do is try to break down
the line between the lanes. So, like, you can imagine there's an imaginary line that is formed by all the pixels
within two lines on the outside of a lane.
And then you would run the linear regression
on that lane that's not there.
You can't see normally with just colors.
So it's kind of a two-level step.
You have to figure out where that lane begins
and then find all the pixels in that
and then run linear regression on that.
But yes, very similar technique.
You just mentioned color.
Is the OpenMV camera in color?
Yes.
So our sensor, we're actually using a very cheap OmniVision sensor.
It's an OV7725.
It can do 640x480 at about 120 FPS,
and we can actually achieve, depending on the algorithm, up to 90 FPS.
Is the frame rate more important than the pixel density for certain applications?
Yes, very much so.
So really, you can get away with 80 by 60 pixels,
and 160 by 120 pixels for a lot of applications.
Having more resolution doesn't benefit you at all, really.
It just adds more processing time.
The big thing resolution helps with is distance. So if you need to resolve smaller detail at a longer distance,
that's where resolution helps. But if you're looking for large blobs or whatnot,
it's completely useless.
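On the OpenMV Cam, that tradeoff is just a matter of asking the sensor for a small frame. A minimal sketch, where sensor.QQVGA is OpenMV's name for 160x120:

```python
import sensor

sensor.reset()
sensor.set_pixformat(sensor.RGB565)  # 16-bit color
sensor.set_framesize(sensor.QQVGA)   # 160x120: low resolution, high frame rate
sensor.skip_frames(time=2000)

img = sensor.snapshot()              # small frames mean less per-frame processing
```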
It seemed like when we had Pentium 2s, anytime you talked about machine vision or computer vision,
which were the early days then, it was all about lighting.
It was about having a maximal amount of light.
That doesn't seem to be as big a problem now.
That really is a...
What's helped is the camera sensor technology.
So, you know, one thing to appreciate,
while I say the OV7725 is a cheap sensor,
it's got a whole processing engine in it
that does stuff like gamma correction,
that does color matrix transforms
on it, that automatically boosts
the contrast, that does auto
exposure control,
that's doing auto
white balancing. So the sensor on
board really fixes the pixels
up dramatically compared
to probably
a CCD back in the day,
which would just give you raw sensor readings
that you had to correct yourself.
And so it's, you know, these cameras are quite good.
So the camera's gotten smarter,
and then the way we interact with it has gotten smarter.
So less of your CPU is being spent just massaging the image
into something you can start doing machine vision with.
Yes.
Okay.
But yeah, no, lighting still is important.
So when you're trying to follow a line outdoors,
you've got to use...
The reason I like linear regression is because
it's somewhat of a lighting invariant algorithm.
What I mean by that is all you need to do
is detect some subset of pixels.
You don't really have to detect a perfectly good...
Like, there's blob tracking algorithms
you can use
where you try to detect
blobs of pixels
and follow those.
But linear regression
is a little bit more robust
because even if the image is...
Even if there's about 20% noise
of false detections,
it can still handle that.
It'll reject those outliers
versus blob detection
type techniques
fail if there's too much noise.
So back to color, because that's going to come up with blob detection in a minute.
When you do line following, do you bother with masking it so that you're only looking at a subset of colors?
So if you're following a yellow line or a white line, you do some masking where you choose that color.
Yeah, so we actually have a function on the OpenMV Cam.
It's called get_regression.
And so you give it a list of color thresholds.
And what it's going to do is it's going to threshold the image then
with that list of color thresholds.
So the color thresholds are like min and max of red,
min and max of green, min and max of blue.
We don't use red, green, and blue directly
though. We use something called the LAB color space, which is lighting invariant, but it's still
like a min and max of each channel. And then that basically gives you a binary image, which is just
white or black pixels. And then the linear regression is run on all those pixels that
are white in that image. But this function kind of does all those steps in one go. And that's one of the nice features of the OpenMV Cam: in an OpenCV world,
you would actually be required to first binarize the image and then run the linear regression afterwards, making that two steps.
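Put together, the one-step call looks something like this. It's a sketch: the LAB threshold values are placeholders you would tune with the IDE's sliders, and robust=True selects the robust fit discussed earlier:

```python
import sensor

sensor.reset()
sensor.set_pixformat(sensor.RGB565)
sensor.set_framesize(sensor.QQVGA)
sensor.skip_frames(time=2000)

# (L_min, L_max, A_min, A_max, B_min, B_max) -- placeholder values for a
# dark line on a light background; tune with the IDE's threshold sliders.
THRESHOLDS = [(0, 40, -20, 20, -20, 20)]

while True:
    img = sensor.snapshot()
    line = img.get_regression(THRESHOLDS, robust=True)  # threshold + fit, one call
    if line:
        print(line.theta(), line.magnitude())  # line angle and fit strength
```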
That doesn't sound that bad, but for other algorithms like line detection, you have to...
Let me think, you got to do Sobel image filtering first
to get like straight lines in the image.
And then you have to hit that with a Hough detection.
And then you get like this map of this 2D accumulator map
that tells you what the strongest lines are.
And then you got to find those strongest lines.
And then you got to take that and do this other step
and just lots of different steps. And I kind of package things like that into one function call
where it just kind of gives you the result you care about. It does mean you do
learn less about how these things work, but most of our customers
kind of just want to build something, not get the PhD.
It's still a good starting point. I mean, do you
talk about how those things work in documentation?
I wish I had time.
Yeah.
Right now, the product is new, so it's mainly just people are asking for features,
and all my time is spent implementing them.
But as the product gets more mature, we're going to have more time to come back and actually talk about that.
So you're trying to build up a library of, these are the kinds of things that people want to do with machine vision
and so here is the function that does that, whether it takes five or two or 18 steps.
Yeah, the idea is, really what I want to do is, for every
function call, they more or less take the image and then output some list of objects that are the results you want.
So for QR code detection, the QR code detection library has a ridiculous number of steps it does,
all kinds of crazy things to find QR codes.
But from your point of view, all it does is you call it,
and then it returns to you a list of QR code objects that are in the image.
You don't have to worry about all the machine vision
that went on in the background
to actually make that list of objects.
Also, the big thing is because we're on a microcontroller,
we only really have one frame buffer.
So if we had to edit the image to do that detection,
it would kind of ruin the experience.
So what we do is we create a second copy
and a temporary RAM space that we have,
and we can do all the machine vision processing there.
And then we just return the results to you, and then you can edit the main image and draw like boxes on the QR codes.
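In user code, that whole story compresses to a few lines. A sketch built on OpenMV's documented find_qrcodes call:

```python
import sensor

sensor.reset()
sensor.set_pixformat(sensor.GRAYSCALE)
sensor.set_framesize(sensor.QVGA)
sensor.skip_frames(time=2000)

while True:
    img = sensor.snapshot()
    for qr in img.find_qrcodes():      # detection happens in a scratch buffer
        img.draw_rectangle(qr.rect())  # so we're free to draw on the main image
        print(qr.payload())            # the decoded string
```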
You mentioned Arduino, and I can see how that is a good comparison because to run a servo motor in Arduino, you basically just set up the servo library.
You don't worry about whether or not you have to do PWM or I2C to another device.
No.
You just move the motor.
Yeah.
And that's what you want to do.
You just want to follow the line.
Show me where the line is.
Don't make me do all of these other things.
On the other hand, once you've done that
and it doesn't work the way you want,
people often want to dig in.
And some of that is learning about these things that you were saying.
Well, you can.
So all our firmware is open source.
Everything is in C.
And you can go online and download our compile tool chain.
We actually just use GCC, totally free compiler setup, no Keil tools.
I think that's how you pronounce it, Keil?
No one knows. We say Kyle, but Kiel seems very reasonable as well.
Kiel would also be fine.
Kiel.
I want some of that. Let's just call it Kiel.
Well, it's GCC
so it's free. It's not $10,000
to get access to the compiler.
And we actually, our
OpenMVCAM has a built-in USB DFU bootloader.
And our tool chain actually comes with scripts
to be able to reflash your OpenMV Cam using that.
So you can download our build system very easily.
We have a webpage, a wiki page,
describing exactly how to install it and everything,
very detailed.
And from that, you can download the C code
and edit it, add new algorithms.
If you want new features at the Python level, you can add them. If you want to change the firmware and how it works, you can do it. It's your platform. You can go ahead and edit anything
you feel like.
So this is a microcontroller that is running a MicroPython interpreter with additions for computer vision, but it is written in C.
Yes.
Okay.
So the OpenMV Cam, actually... I want to give some credit to Damien George.
It's really his MicroPython that's making this all possible, and really making my vision possible with this.
So MicroPython, if you haven't heard about it,
is an operating system.
Well, I wouldn't call it that.
Interpreter?
It's an environment that runs on a microcontroller.
Basically, it more or less lets you write Python code
that gets compiled, turned into bytecode.
So, the system isn't reparsing text constantly.
It's looking at bytecode.
And that bytecode pretty much becomes like pointers into a bunch of statically linked C code.
All the code is in C.
Okay, I'm going into too much detail.
No, no, no.
It's good.
It's good.
Okay.
They can take it. I know at least one person
who would be interested in this
okay
Okay, so, like, your bytecode is being executed by some kind of virtual machine, right? And that virtual machine, when it sees, like, a byte that specifies this function call, it will go off into the C code and basically convert all your Python objects, as the function call arguments, into C arguments, and then call a C function, and then bam, you're in
C world. The user doesn't see this though, but from what's being actually executed, you're in
C world, and bam, things just execute like they were in C. One really nice thing though about
MicroPython is because you have this decoupling between C land and user space, you don't have to follow standard C cleanup procedures for malloc failing.
For example, in OpenMV Cam code, we basically just setjmp.
We'll throw an exception every time we run out of memory or anything bad happens.
And this makes our C code rather easy to write
since we don't have to clean up ever.
This is because MicroPython has an exception system built in.
So if any function fails to allocate memory,
because like, let's say you want to call,
find QR codes in a 640 by 480 image.
Well, we don't have the RAM.
Our library code is written to, you know,
as it has more RAM available in the system,
it'll just work.
So you could actually use our code on a desktop computer, and it would be totally fine.
Anyway, so if we don't have enough RAM, we just throw an exception saying we're out of memory.
And this allows us to write really straightforward C code that doesn't have a lot of cleanup.
But in user space land, we don't actually leak memory because Python has a virtual
machine that's doing garbage collection for you. So if you, like in C land, if I do a malloc,
and then I have to throw an exception to get out of there, I just leave that memory lying around
and it gets cleaned up by the garbage collector whenever you're out of RAM.
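From the Python side, that design surfaces as an ordinary exception you can catch. A sketch of what that might look like, using his too-big-a-frame example:

```python
import sensor

sensor.reset()
sensor.set_pixformat(sensor.GRAYSCALE)
sensor.set_framesize(sensor.VGA)     # deliberately large: 640x480
sensor.skip_frames(time=2000)

img = sensor.snapshot()
try:
    codes = img.find_qrcodes()       # may raise on a frame this big
except MemoryError:
    # The C code bailed out mid-algorithm; anything it allocated on the
    # MicroPython heap gets reclaimed later by the garbage collector.
    codes = []
print(len(codes))
```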
That seems sort of magical. And yet this is a capable enough processor that, yes,
you can have that overhead.
And Python is so much easier to use at this level.
So, there's MicroPython, which is a thing.
And there's OpenCV, which I've used in Python.
But your device does not use OpenCV.
Yes.
It would have been lovely to be able to do that, but OpenCV does not respect memory too well.
An example would be, so a lot of times I'll download code and you'll find things like,
let's say there's an image that is only storing values that are maybe between 0 and 20.
Like, that's all it stores in terms of different values.
But they'll use doubles instead of a byte.
Right.
And so you find abuses like that all over OpenCV, or they'll just assume the stack is infinite and they'll just allocate three or four
megabytes on the stack
just randomly or you'll find
things like
so
I actually ported
the AprilTags algorithm to the
OpenMV Cam, and this was pretty...
it took me about a week of effort because the library
was written for C, or rather, was written for a desktop computer.
But I have to say, I met the guys who created that in person, because they were impressed
by me doing that porting, and in fact, getting it to run on such a low-level embedded system,
because that was kind of their goal on simplifying their algorithm.
But a few of the things I found in there were
they wanted O(1) time
to access a hash table.
Well, sorry, let me back up.
Basically, when you're doing
April tags, or these QR code looking
things, you might have seen them on Boston Dynamics
robots.
They look like giant, low-resolution
QR codes.
So one of the tasks, after you've detected one and you have the bit pattern,
what they do is they have this huge lookup table that's about, let's say, 512 patterns.
And what they'll do is basically you're supposed to linearly search through the lookup table and compare the bit pattern you have to the bit patterns in that table.
And what you're doing is a hamming distance.
So you XOR all the bits together from each detection.
And then count the number of ones.
And that's how far away you are from that code.
And you find the one at the lowest distance.
And that's most likely the code you're looking at.
And that works really well.
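That brute-force match is simple enough to sketch in a few lines of Python (illustrative only, not the AprilTags source):

```python
def hamming(a, b):
    # XOR, then count the 1 bits: number of positions where the codes differ
    return bin(a ^ b).count("1")

def best_match(detected, codebook):
    # linear scan: find the known tag code closest to what we saw
    return min(codebook, key=lambda code: hamming(detected, code))

codebook = [0b101101, 0b010010, 0b111000]   # toy 6-bit tag family
print(best_match(0b101111, codebook))        # -> 0b101101 (distance 1)
```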
But they decided, hmm, we've got to make this better.
We'll just make a jump table.
Yeah, no.
So what they do is they compute every possible bit pattern that's off by one bit, off by two bits.
And that uses 20 to 30 megabytes of RAM.
Now, it is O(1) time, though.
Oh, yeah.
That would be super fast.
But here's the thing.
It barely matters, though, because most of the work is spent detecting where the QR codes are.
Very little time is actually spent searching through that table.
Oh, wow.
See, this is a good example of improper optimization.
You're not optimizing the thing that takes a long time.
You're optimizing the thing that's fun to optimize.
Yeah.
Well, I don't know.
Maybe their application, they could,
so they're using a desktop class computer with lots of memory
and they can handle a lot more, a larger image.
And so with more resolution, they can see farther.
So it's possible they could have seen maybe 40 tags in the field of view.
But still then, they're allocating 40 megabytes or so, you know,
for just this lookup table so they can do O(1) access.
And that only counts patterns up to two Hamming bits off.
So you could go to three, or you could go to four.
Sometimes you could go to five.
How far do you want to go?
I mean, if you have enough memory, it doesn't matter.
I've done things like that for "embedded systems"
where I was on a PC platform.
It's like, oh, well, this would go a lot faster
if I just grabbed a gig of memory and made a gigantic lookup table.
And I did that because I could.
But those are the things people don't think about.
If they're
trying to make something that's generalizable to all
sorts of platforms, yeah, it's a very difficult
problem.
It's really surprising that
since I was able to get AprilTags running
on the system, I'm not really that
concerned about its capabilities anymore.
I mean, we've got the Zbar
library running on it.
We've got something called LibDMTX,
which is for data matrix detection,
April tags, QR code detection.
All of these are, you know,
people would consider very high-level machine vision things
that couldn't run on a microcontroller.
Totally impossible.
And yet, here we have them running on the system,
and it didn't require that much work.
Okay, so I'm picking up the system.
It is much smaller than palm size, so pretty small. And it's got the camera, which is pretty
obvious, and it's on a board which isn't very big, but it's got stackable shields.
Yeah, so that was Ibrahim, my co-founder's, great idea. This is part of why... So, by the way, the
OpenMV Cam is his baby. I kind of joined on to make it a commercial success, and I guess it's
become my baby since I've been working on it for two years now. But what really struck me
about the idea was, pretty much, he had so many good things in one package. The idea was, okay, we've got this MicroPython environment,
which really simplifies how to deal with machine vision.
And then we've got this stackable shield setup
where you can basically, we've got this form factor
where you can just add new modules to it.
And stuff like we used to have, for example,
a thermal sensor that could do a 16 by 4 pixel resolution to do thermal imaging, pretty much.
It wasn't really that sellable, though, because when you worked out the math for the price it needed to be, it needed to be like $150.
Yeah, thermal sensors are.
To keep the company in business if we were selling it. And that's kind of beyond the price anyone wants to pay. But, you know, the idea is that you can add new... like, for example, we could have a shield that
had an SPI camera on it, and you could have two cameras. We have an LCD shield you can put on
the back, and then you can see what the camera sees as it's moving around. Cool, I liked that.
And it updates fast, too. It can do about 30 FPS on the LCD shield. This middle one has some
sort of Wi-Fi.
Oh, I was going to say it has an antenna on it.
Okay, so that's Wi-Fi.
Yeah.
So our Wi-Fi support
is okay right now.
Our next update, though,
is going to enable
Wi-Fi programming.
What I mean by that
is literally you'll be
able to connect to
the system like if it
was attached via USB
over Wi-Fi.
We've got...
So what really makes
the OpenMV Cam go
besides all those things is the fact that we
provide you a good computer development
environment. We have an IDE
which kind of smooths over all the
uglies to interfacing with
this system and programming it pretty much.
And it runs on PC or Linux
or Mac. And it's going to be Raspberry
Pi soon also because folks have asked for that.
And I can type in my
algorithm, I can test it,
and then I can take it off of my computer and it can run on its own.
Yes.
So the IDE basically gives you a little frame buffer
so you can see what the camera sees as you go,
and it's pretty much real-time.
And you just write some Python code and then literally just hit the run button,
and that's the compile, load, execute, all in less than a second, and your algorithm is up and going. And when you want to kill it, you just hit
the stop button and it dies.
So that, again, is like Arduino, where you kind of play
with it and then you walk away. You don't have to have it connected to the computer. And this runs
on five volts, so it will run just on USB, right?
Yeah.
So the microcontroller is an STM32 microcontroller.
And what's really nice about those guys is they have 5-volt IOs, 5-volt tolerant IOs.
Yeah.
Well, they're 3-volt IOs, but they're 5-volt tolerant, meaning that you can totally just plug it into Arduino and nothing blows up.
Yay!
And, you know, it's surprising.
People don't normally comment on that, but you've
really got to understand that a lot of folks immediately plug this into an Arduino
and don't think about the fact that, huh, this would normally break terribly. But with this
STM microcontroller, it's totally cool.
The top shield on this one that I'm holding, is it a PWM shield? So it's a servo controller. Servo controller.
So the OpenMV Cam, since it's powered by an STM microcontroller,
it obviously has the ability to do PWM, SPI, I2C, interrupts on I/O pins.
It's got a DAC and ADC pin where it can do DAC and ADC functions.
It's got servo outputs.
It's got PWM and a few pins.
It can do all that stuff.
The problem is the processor usually
gets busy, so refreshing them all the time is a little bit tricky. A lot of that's handled
via interrupts to keep the servos being refreshed. But when you add USB on top of that,
when it's plugged into your computer, the USB kind of has the highest priority because it needs
to be serviced. So when it's plugged into your computer, the IOs kind of are a little bit wonky sometimes.
You're trying to push things real hard.
So it's better if you just kind of attach things that are bus-based to it where it doesn't have to manually refresh everything constantly.
Well, one of the advantages to having an external servo chip is the ability to run it at a higher voltage.
I mean, motors don't usually run at three volts.
And if you try to run a bunch of motors from your five volt USB, I hope you didn't really
like your computer that much.
Well, yeah, that's one of the benefits here.
The servo shield also has a nice little feature in that the VIN rail is actually shared on it.
So with using the servo shield, what we found is you can plug this directly into an RC car that has like an ESC.
And those ESC controllers, electronic speed controller, I think that's what they're called.
They actually output voltage instead of taking it.
So literally, you just plug one of those servo lines into the servo controller shield and bam, it powers the whole system.
And then you just plug in your steering servo, and you're good to go.
You've got your robot that can see.
That makes it so much simpler.
I mean, that's really powerful.
It's more than, I mean, it's almost a robot platform.
It seems very much like a robot platform.
I considered renaming it.
When I first saw it, I thought it was a smart camera.
And since I've been working on a robot, I have been, and since my robot has this massive controller, I was thinking, oh, it's a smart camera.
These are the things I can do with it.
But it wasn't until I read more and spoke with you a little bit that I understood it's the controller.
I mean, it can be a smart camera, but it makes more sense as the brains of the whole outfit.
Yes.
So a lot of folks actually ask us about that.
They want to use it as a smart camera, and I'm kind of flabbergasted a little bit by that.
I spend a lot of time in Python OpenCV on the Jetson, and it's sort of amusing, but it's kind of annoying.
I wish I could just say, find the laser, or find the blob, or
find the line.
Well, a big thing is, do you have examples to work with?
Sure, I have all of OpenCV's examples to work with. It's kind of...
But they're installed for you? A lot of them, yeah. Okay. Well,
because, I mean, the Jetson is not an embedded system. I mean, they say it is an embedded
system, but it's crazy.
So it's more like having a PC.
So what we do for the OpenMV Cam is we've got the Arduino-style example setup.
Literally, there's a folder,
you click examples,
and it just has like 80 examples.
And the examples just show off
the machine vision algorithms
in a kind of stupid script.
Like, here's the color detection one.
You just point it at colored objects
and it tells you where they are.
And it's literally five seconds of effort, and bam, you technically have your machine vision component part done.
Then all you've got to do is write up the Python code to control things and you're good to go.
That's so amazing.
When did you make a decision to say, we need to build an IDE?
Was that always part of the plan?
So it was Ibrahim's original thing.
So Ibrahim put this project up. Again, Ibrahim is my
co-founder. He's not available today, though, because he's in Egypt. That's where he lives.
Anyway, but it was his idea to put the IDE in the first place. I would have ignored the
project and just said, this is... So again, let me back up. I found his project on Hackaday. I was bored one day and was just looking through Hackaday articles.
And... Oh, that's dangerous. Well, Hackaday.io. Yes, yes. And everybody puts their projects up, and then
it goes from being a mysterious faceless widget to being a person creating things that you want to help. Yeah, yeah. And so you wanted
to help, and here you are. Yes, yes. Uh-huh. It's been an odyssey. So he started the project...
basically, we linked to his first blog post back in 2013. It's on our About
page on our website. He pretty much wanted to create a better camera module.
So, a little bit of history about me first before I go into this story.
I did something called the CMU Cam 4 back in college at Carnegie Mellon.
This was a, so a lot of people have heard of the CMU Cam.
Yeah, I think I heard of it when I was doing some stuff.
Yeah, it probably just didn't sync up at the right time
for me. Okay. So the CMUcam has been around since about 2002, and it was created by my
research professor, Anthony Rowe. Well, it wasn't created by him; it was created by a guy
named Illah Nourbakhsh. I can't pronounce his name, it's a little bit hard. But he had Anthony
Rowe build this little camera module. And it was cheap
in the sense that it cost only 150 bucks back in 2002. And it had a really bad camera sensor,
not very good at all, but it could see colored objects. And it was the first time anyone with
a microcontroller platform, and this is before Arduino, back in the BASIC Stamp era, had been able to actually have a microcontroller that could see
anything. And there was a CMU Cam 2 that followed on and was very popular in FIRST Robotics. And
then a CMU Cam 3 where they, the CMU Cam 3 was actually quite similar to the OpenMV Cam. It
could do Lua scripts. It had a little IDE. It had the ability to detect faces.
It had an SD card.
A lot of similar features.
However, it had one bad problem.
It cost $300.
And so that could have been a really popular system, but it was too expensive.
Because back in that time, you couldn't make it cheap enough.
In particular, the problem was it had to have a FIFO buffer memory to store the image.
And these FIFO buffers cost an arm and a leg.
You can still see them around.
They're like these weird FIFO memory chips that can store like one megabyte or something.
But they're super expensive.
Anyway, so when I was an undergraduate back in school, I started working on something called the CMUcam4.
And this was a successor to the CMUcam1, effectively.
It was about $100, $99.
And it had a propeller chip on it.
And it could look at a 160 by 120 image and track one colored object.
It wasn't very powerful, but it had no bugs.
No one ever put a bug on me in terms of the firmware being broken.
However, also people barely knew how to use it, though,
because by that time, education level for people actually being willing
to read the manual had gone down to epic low levels at that point.
I think it's always been that way.
I learned never, so here's a tip.
Don't ever write a PDF and put it online.
Just make a webpage.
People don't read PDFs.
I really wish they did, but they don't.
Especially, a lot of our customers, you know, don't speak English.
They can't, you know, do Google Translate on the PDF.
So put your documentation online.
The last time we sort of gave up on everything
was when we tried to name everything Read Me
as if to encourage people to read things.
When that didn't work, I think we all just gave up.
I read things.
You're a minority.
I read PDFs.
I know, I'm a minority. I know.
I'm a minority.
Okay.
So CMU cam 4.
Yeah, yeah.
So, yeah, let me continue.
I'll get to the, I'm a long-winded person.
Okay.
So that system could track one color, and it did it really well.
It did 30 FPS fixed.
You know, we only sold about 1,500 lifetime, but yay, it worked out great.
I learned a lot of stuff from doing that.
And so here comes Ibrahim in 2013, and he saw my project and he said, you know, I like it, but it's expensive.
I want to make it cheaper.
That's always the goal of a hobbyist.
How can you make this cost $20 and not have any money off of it to actually build a second one?
But he wanted to make a cheap system that can do computer vision,
but could be programmed pretty much.
Like the CMU cam is a fixed function device.
You bought it, it did its thing.
You talked to it via serial,
you didn't ever reprogram it really.
And so his system was using a much stronger processor,
could actually do more than just color detection,
could do face detection,
because now it could store the image in RAM,
since it had around 256 kilobytes of RAM.
And that's what I saw on Hackaday.
I saw that platform.
And I saw another big thing was he had put together a Python IDE.
It kind of had a lot of problems.
We never really fixed that.
We kind of moved to Qt, because I was an IDE developer in that
and I could do a good job.
Good choice.
Well, it was surprisingly,
Python is great to write an IDE in
if you don't feel like having any users.
If it's just you.
No, I got to be serious.
So one of the things you've got to think about
if you're writing software...
For other people.
Yes, is how to deploy it.
Because these things ruin you.
libusb, I want to say I like that library.
It was really good for getting the OpenMV Cam working initially.
But you can't deploy it.
Getting it to run on Mac or getting it to run on Windows is a nightmare.
Just absolutely a nightmare. And one of the big problems is that you need to have signed Windows drivers
and not like the easy
.inf Windows drivers.
You need to have actual code executed.
And so this requires
a big, scary amount
of Windows certification
on your part.
It's expensive.
Yes.
Yeah.
For example,
you have to buy these things
called digital verification.
I forget the name of it.
DigiCerts or something. But they cost, you have to buy these things called digital verification. I forget the name of it, DigiCerts or something.
But they cost, you have to actually get a hardware dongle key that plugs into your computer.
Like, they're really serious about this kind of stuff.
And you're going to end up paying around $1,000 to $4,000 to get it because, you know, you're programming software that's going to install a driver into the kernel.
So, you know, they want to make sure you're doing the correct thing.
And that's the kind of stuff you might run into if you're using libusb. So if you're a
hobbyist or someone who's trying to bootstrap a company with no money, that's kind of a little bit
unpalatable. But Qt, or "cutie" as people say it incorrectly... Yes. As I always say, it's cross-platform,
and it makes this so much easier.
Yeah, in particular it has a cross-platform
serial port driver for you
which we were able to switch to.
So what we did was,
our OpenMV Cam library was originally written for
libusb, and then we basically
stuck that onto a generic
virtual serial port driver.
And then because of that, it looks like a serial port when you connect it to one's computer.
And then we were able to use that with Qt to create an
IDE that talks to it like it was a serial port, pretty much.
Sure, that makes sense.
So that ended up building an IDE, which is what
I can program into, which is much easier to use.
As we were talking about, it's sort of like the Arduino one where it kind of simplifies it all.
I say that's important because if you look at MicroPython right now, so the guys who are maintaining it have done a great job.
They, though, believe that people want to use the REPL to program it with, and I'm not sure if that's
correct.
Relp? I mean, this sounds like something Scooby-Doo would say.
Well, maybe that's not... I'm probably pronouncing it right. It's the read-evaluate-print loop, so REPL.
REPL is the better... Okay. Yes. Yeah, I'm pronouncing it wrong. We're going to have to flip it around so it's the other way, though. I like it better.
Yeah, I just make up words when I see it, and that's just how I think about it. Okay. Yes.
Okay, they think you want to type into it all the time.
Yeah. Well, a lot of the examples they show are like, hey, you can type into it and program it that way, and that's cool. But that doesn't scale beyond the
example of showing people that it's cool.
No, that means I can do it, but I can't ship it to other people.
Yeah.
But no, I mean, like, literally, it's you program in a line, and then you have to program in another line, and you can't go back to the previous line.
Right, right.
You just program in one line at a time.
So that really doesn't scale to you writing a huge program that's,
you know, 200 lines or a thousand lines. If you need to... You need to have that nostalgic feeling
for 1980. Yeah, which really wasn't what I wanted to do with my life anyway.
Well, see, here's the alternative they have. You can save a file to the system and it'll
execute it. But the problem with that is, you have no way of knowing your file had a compile error until you execute it.
But having the editor, we're able to send the file to the camera.
It does the compile and then spits back out if an error occurred. Our IDE parses the text message to find the error and then literally opens the file with the error, highlights the line, and then says what the error is of a little
dialog box. Just like an Arduino, well, even better, it tells you exactly what line of code
you have a problem on, which makes your development experience a lot simpler.
So much better. Okay, so I want to go back to my idea of if I was using this.
Okay.
I did some laser chasing so that my robot would follow a laser around.
And how would I do that on OpenMV?
Okay, so our biggest feature, I think, which has helped sales the most is color tracking.
That's what a lot of folks... Which you've been doing for a long time.
Yes, yes.
I've been doing that for a long time. That's what most of these camera systems have been doing, so we've got that.
We have a function, a method, I've got to call it a method because, you know, it's bound to an
object, it's not a function. Anyway, you can use whatever words you want.
Whatever words I want? We've got a thingy. A subroutine. Subroutine! GOSUB. Yeah. No, so we've got a method called find_blobs,
and basically you give this a list of colors that you want to track, the color thresholds.
Those thresholds are, again, min and max of, basically, like YUV: min and max of lighting,
and then min and max of the color channels.
And in order to find those, by the way, we realize it's hard to find those values.
So we have a built-in slider.
Oh, nice.
So you can basically, in our image, you can click a button,
and it'll show you a picture.
It'll capture a frame from the camera and show you a picture of that image, one with the actual picture and then one of a black and white image
that's of the thresholded pixels.
And then there's six sliders, and you just drag the sliders around
until you have just the pixels
you want to actually track.
In OpenCV, they take
HSV, hue saturation value.
And with red, it's kind of
hard because it's on the edge
of a discontinuity.
So you have to do both sides. And if you're doing
min and max, then that totally breaks min and max
if you're around a discontinuity.
And it's super annoying.
And that sounds so much better
because I always knew what I wanted to find,
but I couldn't necessarily translate that
into all the things until I tried.
I mean, I must've tried like 25 different options.
Yeah.
That's our goal is to save you time.
That's like why anyone would buy our system
because it's not the most powerful, certainly.
But it's hopefully the most easy to use.
Obviously, you can't infinitely scale it
for every application,
but the idea is for most people,
it'll do the job you want.
Okay, so find_blobs on color.
Yeah, so find_blobs on color. And by the way,
if you have that discontinuity,
that's a mouthful.
Anyway, if you've got that word,
you can track two colors
or three or four thresholds at the same time.
So you just input
the color thresholds that you want for one part
and then the color thresholds you want for the second part.
And you can pass the function
a parameter called merge blobs
and it'll literally merge those
detections that it sees that are next to each other
into one color blob for you.
Anyway, and so
what that will return then is a list of blob
objects. So there's a Python list you can do
for b in list
or whatever and that's your blob.
So simple Python iteration. And each of those blobs
then encodes the centroid, which is the center position of the pixels in the blob; the min and
max bounding box around it, so this is like the box that surrounds it. We also compute the linear
regression on the blob, and this returns basically an angle. So if the blob is like a pencil, it'll actually tell you the rotation of the blob.
And we also compute the number of pixels, obviously.
We count the number of pixels in the blob so you know how many pixels it is.
And that's useful for density checks.
So if you do things like divide the number of pixels by the size of the blob,
that'll tell you how dense it is.
And a density then usually is a good detection.
Like basically, if you're looking for like a round object,
the density should be rather high all the time.
And if it's not, then it's probably not the thing you're looking for.
Well, and for me, for laser chasing,
if I had a round object with a good high density,
I could have something in the same color space
that was a reflection of my laser to another part of the wall.
And that would be much less dense and maybe not the right shape.
It would have fewer pixels.
It would be less dense.
It would be the wrong shape.
And so I could eliminate that.
Yeah.
That's one of the big things.
And that's what's really powerful about the OpenMB CAM,
is because we return to blob objects, and because you're in Python,
you can filter and sort things.
You know, actual computer programming, you can do that on the system,
because this is where things usually break down with doing, like,
computer vision on a microcontroller or Arduino,
is you can't manipulate things.
It's just too hard in C.
And see, I'm getting like really animated right now
because this is really the important thing to understand
is because you're in Python, you can sort objects,
you can filter objects, you can decide,
I want to look for the maximum object
that has the most pixels set inside of it.
And it's one line of code
versus being some weird function call
where you've got to set flags and other things and, you know, makes your life harder.
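Pulling the laser-chasing idea together, here's a rough sketch: the LAB threshold values are placeholders you'd tune with the IDE's sliders, and the density check is the pixels-over-bounding-box test described above:

```python
import sensor

sensor.reset()
sensor.set_pixformat(sensor.RGB565)
sensor.set_framesize(sensor.QQVGA)
sensor.skip_frames(time=2000)

# LAB thresholds for a bright laser dot -- placeholder values, tune in the IDE
LASER = [(70, 100, 30, 127, -30, 60)]

while True:
    img = sensor.snapshot()
    blobs = img.find_blobs(LASER, merge=True)
    # A real dot fills most of its bounding box; a smeared wall reflection
    # does not, so filter on the pixels / bounding-box-area ratio.
    dots = [b for b in blobs if b.pixels() / (b.w() * b.h()) > 0.5]
    if dots:
        best = max(dots, key=lambda b: b.pixels())  # the one-liner max he mentions
        print(best.cx(), best.cy())                 # centroid to steer toward
```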
And you can do things with it.
I mean, once you find the centroid, you can say PWM my motor to here or there's...
Yeah, so we have a servo.
Things like inverse kinematics and other things that are mathy?
Yeah, you can.
So the thing has a double precision floating point unit.
So you can use doubles.
So let me back up.
You can use doubles in C land.
You can't use them inside of MicroPython because everything's a float.
But in terms of that, we actually have hyperbolic tangent support.
There's basically every single complex and non-complex.
And by complex, I mean like, you know,
a plus ib or something.
You have all those math functions.
So you've got the full library,
full desktop library of actual math functions to work with.
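For instance, a trivial illustration (assuming, as is the case on OpenMV builds, that the standard MicroPython math module is compiled in):

```python
import math

print(math.tanh(0.5))        # hyperbolic functions, as mentioned
print(math.atan2(1.0, 1.0))  # handy for turning a centroid offset into an angle
```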
And you can have scripts that are thousands of lines of code.
Of course, there is a limit on your script size.
Your script just gets compiled into a bytecode object, and that bytecode object does get stored on the heap, so there is, you know, a
practical limit on how infinite...
Yes, it's not infinite. It's big, but not infinite. Yes. Okay.
but yes you could do inverse kinematics if you wanted and if you found that you know hey you
wanted to actually support the project you could then submit us a library that was written in C for inverse kinematics.
And then we could put it in the firmware.
And then that code cost wouldn't be something you'd be incurring in Python.
Well, it wouldn't cost you anything in Python land then.
And you said I could recompile.
I mean, I don't have to submit it to you right away.
I can do this myself if that's what I want.
Yeah, all you need is a Linux system.
The only problem, so here's one thing that's also important to appreciate between C and Python world. Our firmware image is about 1.5 megabytes right now. Okay. How big is your chip? It's a
two megabyte flash. I see. But this is ARM Thumb. Okay, so a lot of people will look at the
Espressif chip and see that it has like an eight megabyte flash in there. You've got to understand, though, it's using 32-bit instructions, and they
aren't as optimized as ARM instructions. So, like, the same flash goes a lot less far
over there. That's a good point. Yeah, ARM, you know, because we can build shifts into every
instruction with the ARM processor. But anyway, my goal in saying that was,
it takes a long time to program in C. So if you're going to go in C land, just be prepared
that this is actually, you'll be in the area where this is actually how long it takes to
program these systems in the real world. And you'll appreciate the difference then.
You're still loading firmware over the serial USB bootloader. You're not
JTAGing it in through a programmer.
So we have two methods.
We actually have a built-in USB bootloader that allows us to reprogram, and it goes quite quickly.
It'll erase the memory in about 10 seconds and then write 64-byte chunks repeatedly to fill the image up. The thing is, though, that still takes on the order of about a minute to program.
And, you know, that's what you incur every time you want to recompile and change anything
in C land, even if you're running a fast build system.
So, you know, me and Ibrahim, we suffer so that you don't have to.
You guys think that's long?
Oh, man.
I'd kill for a minute download times.
But your other programming method is through JTAG?
So we have the SWD pins exposed.
Yeah.
So you actually have full ARM debugger support.
Okay, good.
So, I mean, yeah, on our system, the OpenMV Cam, we have the SWD
debugger pins exposed.
If you are a real professional
and you know how to use
those tools,
you can totally be in C-land,
work with the system in C-land,
and hook up a USB debugger
to the system,
reflash it through that,
do breakpoints,
whatever.
It's fine.
It's good to know.
I mean, because sometimes,
I guess one of my fears with going with OpenMV for a professional situation would be what if it did 95% of what I wanted and the last 5% was somehow impossible given the current constraints.
And knowing that I could reprogram it, that I could do modifications,
I could tweak what I needed is important. But that brings me to my other question,
which is probably a whole, gosh, a whole nother show, really. How do I learn enough about computer vision topics to be able to go beyond the surface of OpenMV?
Yeah, that is something I don't know how to solve,
and I really wish I did.
Pretty much, I guess, what I'm trying to do
is just make don't fail type of, I mean, sorry,
you can't fail type of applications available for the system.
Like if you want to do barcode detection,
our feature on the system for that is kind
of a can't-fail level. It would be hard to build a system where the function did
not work, as long as you got the focus right on the camera for, like, looking at the barcode. Otherwise,
it's stupid simple. For actually doing really more complicated things, our goal is really to try
to hide complexity as much as possible, to give you
the highest level feature you want. So I recently found out, for example, that Google has something
called MobileNets, which is a Google machine learning thing. These are neural networks that Google
has trained that fit on your phone. And the smallest of them, which has the worst accuracy,
that said,
but the smallest of them
can actually fit in the OpenMV Cam's flash.
And so it is possible in the future
to give this system a neural network.
Now it would have incredibly terrible performance.
It would be basically only 60% right in the top-five categories. As in, if you pointed it at an object, 60% of the time the correct object would be among the top five things it thought it was.
But it's still possible.
I'm not sure if that performance increases
if there's only one object in the field of view
versus a bunch of different things.
But that'd be the kind of thing we want to integrate onto the system.
And so you wouldn't have to learn about how neural networks work. We would just have a function called find stuff. And you would point it at stuff, and then it would tell you what stuff is.
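Something like this hypothetical loop is the idea. To be clear, no such function shipped at the time: find_stuff(), .label, and .confidence are invented names here to illustrate hiding MobileNets behind one call:

```python
# Hypothetical sketch only: OpenMV had no neural-network classifier API
# at the time of this episode. find_stuff(), .label, and .confidence are
# invented names for illustration.
import sensor

sensor.reset()
sensor.set_pixformat(sensor.RGB565)
sensor.set_framesize(sensor.QVGA)
sensor.skip_frames(time=2000)

while True:
    img = sensor.snapshot()
    for obj in img.find_stuff():                 # runs the embedded network
        print(obj.label, "%.0f%%" % (100 * obj.confidence))
```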
I like that.
But I also, how did you learn
how to do the computer vision stuff?
Yeah, that's what the OpenMV Cam is teaching me.
Oh, I see, I see.
This is how I'm doing my, uh, graduate degree in computer vision.
All right, all right. I understand that. I do, I really do.
Yeah, if you really want to learn more about computer vision, I guess the thing to do would be to, actually, okay, so I've learned a lot of stuff from reading through other people's code. If you read through the C code, the C library that we actually use to do a lot of the stuff, in particular the AprilTags function. It is ghastly scary, though. It's 15K lines, just one file of code.
That was actually on purpose.
So we have a lot of different things
we're incorporating into our library.
I'm about to start code reviewing you.
Yeah, yeah, yeah.
So I actually manually took the AprilTags library, which, when it's distributed, comes in about 60 different files, and merged it all into one big file. And that was to simplify our end, because in our library we have a bunch of different files already, and we didn't want a huge library floating around in there. But if you look through that code, it spans everything from doing the basic image operations to very, very high-level math and matrix operations.
Like doing matrix stuff? That's way beyond my level.
That's really where a huge bulk of it is. You get into really advanced things like, not inverting matrices, but singular value decomposition.
SVD, yes.
I do not know what that's useful for. I'm not that smart, I really wish.
But I should study it, I hope. It's pretty cool. And I actually have a website I love sending people to, so I will put that in the show notes, about singular value decomposition and how useful it is.
They've got even more. There's actually, in the AprilTags code,
Because what they're doing is when they detect a tag,
the tag is kind of like rotated in 3D space.
And so they have to undo that rotation.
And that's all matrix math.
Yes, it's all matrix math.
And it's neat math.
And yet I want to say that's not computer vision.
But that is computer vision.
But it is computer vision.
It really, really is.
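For the curious, here's a small desktop-Python sketch (NumPy, not OpenMV code) of one classic vision use of SVD in exactly this tag-pose setting: a matrix estimated from noisy pixels should be a rotation but isn't quite, and SVD snaps it back to the nearest true rotation:

```python
# Desktop Python (NumPy), not OpenMV code. Orthogonal Procrustes:
# SVD projects a noisy, not-quite-rotation matrix onto the nearest
# true rotation.
import numpy as np

theta = np.radians(30)                      # a true 30-degree z rotation
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0,            0.0,           1.0]])

R_noisy = R_true + 0.05 * np.random.randn(3, 3)   # measurement noise

U, S, Vt = np.linalg.svd(R_noisy)
R_fit = U @ Vt                              # nearest orthonormal matrix
# (In general you'd also fix the determinant sign so det(R_fit) == +1.)

print(np.round(R_fit @ R_fit.T, 3))         # ~identity: a proper rotation again
```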
And I've been learning stuff by reading the OpenCV PDF manual,
which comes in a web page as well, but I don't like
webpages, I like PDFs.
And that has been helpful because I can try stuff and use it, but it also is huge and
I don't feel like I'm ever doing it right, I'm just following examples.
So at some point, somebody's going to tell me how to learn all of the computer vision,
preferably overnight, Matrix-style.
Well, I think, have you seen PyImageSearch?
No.
You haven't? Okay, PyImageSearch is this website that just goes through how to do OpenCV stuff. And they're really good with teaching you how algorithms work, doing it step by step. A difference between me and them would be, on PyImageSearch they're trying to actually help you understand what's going on, versus I'm just trying to give you the easy "I want to do something, I don't care."
Sometimes you need one of each. I mean, usually.
And I suppose it's PyImageSearch, and it's not that you're searching for yummy, yummy raspberry pies.
Oh, that's what I meant. I meant the Py, as in Python.
Oh, no, I got it eventually.
I just was, I guess I'm focused on pie and not the right kind.
Okay, one more question.
I'm sorry, this is going to be a little short.
We have a couple other things we have to do today, but have you tried using this in stereo?
In stereo?
Yeah, we're actually looking to build a product we were thinking of,
which would just be like a giant PCB that would have nothing on it,
where you just mount two cameras side by side.
And there you go.
That'd be a stereo camera.
We never got around to building it, though.
Ibrahim's done some prototypes on it.
It just hasn't been a priority.
But it is something we could do.
And you'd have one camera as, like, a master, which would request the second camera to give it an image, and then that master camera would actually be able to find which pixels were out of alignment with the slave camera.
Yeah, the secondary camera. You wouldn't have to have it be as fast or as accurate, since you're looking at probably close-up stuff. It doesn't have to be as granular.
I mean, as you were saying,
what you need resolution for is faraway things.
Yes.
And you don't do stereo on faraway things at that point.
It's not useful.
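The math behind that intuition is one line: depth is focal length times baseline over disparity, so faraway objects, with their tiny disparities, are exactly where the error explodes. A sketch, with made-up numbers (the focal length and baseline below are invented for illustration, not OpenMV Cam specs):

```python
# Back-of-the-envelope stereo depth; the numbers are assumptions.
focal_px = 600.0      # lens focal length, in pixels (assumed)
baseline_m = 0.06     # spacing between the two cameras, meters (assumed)

def depth_from_disparity(disparity_px):
    # Z = f * B / d: a one-pixel disparity error matters more as d shrinks.
    return focal_px * baseline_m / disparity_px

for d in (60, 10, 1):
    print("%2d px disparity -> %5.1f m" % (d, depth_from_disparity(d)))
```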
Well, we also looked into,
what we really wanted to do was build a time-of-flight camera.
Yes.
There are folks, like TI, for example, who were selling a time-of-flight image sensor, and it literally was SPI-bus based, you know, single-bit SPI. It would do about 80 by 60 pixels, and you would just literally take a picture and then use the SPI bus to read the image data off it.
And we actually built a shield for the OpenMV cam,
and we tried to get that to work.
The big problem was those sensors need
like four to five amps of current.
Not the sensor, but they need an LED flash array.
And you need like four to five amps of current
to get that to work.
And suddenly that's no longer,
I can plug this into my USB port safely.
Right.
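Reading such a sensor is conceptually simple. A hedged MicroPython sketch of the idea follows; the bus number and the 80x60, 16-bit frame format are assumptions for illustration, not a real driver for the TI part:

```python
# Sketch only: clocking a frame out of a SPI-attached time-of-flight
# sensor like the one described above. Bus number and frame format
# are assumptions.
from pyb import SPI

spi = SPI(2, SPI.MASTER, baudrate=10000000, polarity=0, phase=0)

WIDTH, HEIGHT = 80, 60
frame = bytearray(WIDTH * HEIGHT * 2)    # 16 bits per pixel (assumed)

spi.recv(frame)                          # read the whole frame over SPI
print("read", len(frame), "bytes of depth data")
```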
Okay, so I'm interested in stereo, although maybe I need to play with it myself. And I am getting one of these, right?
Okay. I know you're not leaving with that one, but...
Well, we'll see if you leave here with that.
I could leave it.
So, how much are they, and where can people get them, and all of that?
Okay, so they're $65 right now, the current version.
This is the M7 version.
You had some older ones for a little cheaper.
Yes, those are gone now.
Oh, okay, so M7, $65, cool.
Yes, we want to get the price down to about $45.
That's our target.
Our current production run capacity is about 2,500 units. And so our next production run we want to do on the order of 3,000 to 3,500. And I think once we break the 5,000-unit build, we'll be able to actually sell these guys at $45 each.
Exciting.
We also plan to offer a higher-end system at a higher price, to kind of have two market segments. But at least for now, our goal is to really just up the production quantity of the M7 and make it cheaper and cheaper, so everyone can get one. And they're available on our website. We just got a thousand back from the factory, and we're waiting for our 1,500 more.
That's openmv.io, and of course that link will be in the show notes. But they're also on SparkFun.
We have a whole bunch of distributors. SparkFun bought 500 cameras.
Oh, how exciting.
They really believe in the product. They actually moved 200, like, in 10 days when they first put them on their website, and so they got 500 more. We've got a bunch of other distributors, like Seeed.
And Tindie.
Yes, I think someone was selling them on Tindie.
We've also got Chinese counterfeiters. Those are on our website too.
We link to them.
That's one way to deal with it, I guess.
I mean, your boards have the Open Source Hardware
Association logo. It's open source hardware.
So are they really counterfeiters?
So, you are allowed to make it. The software is open source.
However, the counterfeiters, I want to say, don't necessarily contribute.
Well, one of them does.
We came to an agreement to give some money back to the company so we can continue software development.
We're fine if you're building the system yourself.
The big thing for us is you selling them and then us having to service people.
Yeah, supporting the people who aren't supporting you.
Yeah, so that often happens is we'll get boards come out and then people have problems and
they didn't actually buy them from us, but we're answering questions.
Yeah.
And that's a good way to kill a small business, because it just sucks all the time and you can't do that.
Yeah.
But for all that, this is a small business and getting bigger, and you're spending a lot of time on it. This isn't your day job. This is a side gig.
Yeah, yeah. I don't have a girlfriend, so I have free time, I guess.
That's sad.
So sad.
Anyway, no, I decided to pour all my effort into OpenMV.
It's brought a lot of blessings to me in being able to meet people and help improve my life and so on and so forth.
I found a lot of enjoyment through working on a project that has a particular goal in helping people.
Yeah, but it's a side job.
It does not pay the bills.
I have made probably negative money so far on OpenMV.
Did it help you get your current job?
Yes, it did, though.
Okay.
It helped me get my current job in the Valley.
I was able to meet people who helped put the connections in
to get past the resume filters.
And I had a thought there. A big thing with doing the OpenMV Cam is it's helped, you know, keep my skills current. Like, my day job is FPGA development, so it's Verilog.
And, you know, your C skills, your Python skills, your business management skills, your production skills.
Yeah, there's so many things that would kind of atrophy if I was just focusing on one language. That would be really bad. But for the OpenMV Cam, I've had to do stuff like
learn PHP to set up server scripts.
I'm sorry.
There were PHP scripts that were just copied from other examples.
But do that.
Learn how to set up a company, deal with LegalZoom, how to do bank accounts, what an ACH transfer is, learning how to do production runs, how to get distributors, how to actually sell.
How to take credit cards.
Yeah, yeah, Shopify.
That's the answer.
But there's so many different things that you're able to work on and keep track of.
And it's a nice learning experience running your own company.
The trick is to never take money out of the company until it's ready.
That's the big hard thing.
Like OpenMV technically has some profits this year, but we have to spend all our money on another production run.
So if you take any money out, you can't keep growing it. And that's one of the things you've got to keep in mind if you're trying to do this kind of thing.
And that's hard, because you do have a full-time job. This is essentially a hobby that sucks money, may someday make a little, but right now sucks. It sucks money. It sucks time away from girlfriend, life, fun,
probably even exercise and health to some extent.
You know, there's all these other things you have to balance in life.
And yet, when you talk about it, you're happy.
Yeah.
That's basically what I said.
Yeah, it makes you, it's your own little thing you can control.
And, you know, you're kind of the director of the ship.
And that's a nice experience.
And also, it's nice that people want to talk to you and want to get you on their show and so on and so forth.
Because you're doing something.
You're not sitting around at home.
You're doing something that makes a difference in other people's lives.
Yeah.
All right.
I think this is a pretty good point to start winding up.
Do you have any questions, Christopher?
I had deep questions about architecture, but I can ask him after.
Okay.
Fair enough.
Kwabena, do you have any thoughts you'd like to leave us with?
Okay.
It's been great being on Embedded FM.
They have great microphones here.
I was told to get a better microphone.
But yeah,
this is a great experience. I've enjoyed talking to you
all. And I'll actually be at, I'm going to be at the Open Source Hardware Conference later this year, and then at the Hackaday Superconference even more later this year. And I think I'm also going to be at the SparkFun AVC, the Autonomous Vehicle Competition.
Well, that's great.
We had Alan Yates on recently.
He's going to the Hackaday SuperCon in Pasadena in mid-November.
Open source hardware.
I know we have lots of friends.
We're actually sponsoring that and sending one person.
And there'll be stickers in your goodie bag from Embedded FM.
So, cool.
It sounds like all three will be fun places to meet you and say hello
and get a look at this camera, which is pretty cool.
Okay.
Our guest has been Kwabena Agyeman, FPGA engineer at Planet.
He is also the president and co-founder of OpenMV.
If you'd like to buy an OpenMV Cam, or one of its nifty shields, really, look on OpenMV.io.
A link will be in the show notes, of course.
Thank you so much for being with us.
Thank you very much.
Thank you also to Christopher for producing and co-hosting.
And of course, thank you for listening.
I have said before, I think I said last time, I'm looking for a new gig.
Ideally, something mathy or robotics-ish.
Maybe something that makes me learn all of this computer vision that I want to play with.
If you know of somebody needing a strange set of embedded systems
architecture, prototyping, code implementation, and personal communication skills, please keep me in
mind. And now a quote to leave you with. This one from Maya Angelou. What you're supposed to do when
you don't like a thing is change it. If you can't change it, change the way you think about it.
Don't complain.
Embedded is an independently produced radio show that focuses on the many aspects of engineering.
It is a production of Logical Elegance, an embedded software consulting company in California.
If there are advertisements in the show, we did not put them there and do not receive money from them.
At this time, our sponsors are Logical Elegance and listeners like you.