Embedded - 48: Widgets on the Hands of Ants
Episode Date: April 23, 2014
Dr. Kevin Shaw, CTO of Sensor Platforms, spoke with Elecia about his career progressing from designing MEMS to building a company that makes sensor fusion algorithms. Wandering from the Internet of Things to Singularity University to power management in Android development, Kevin and Elecia had a wide-ranging conversation. Due in July: check out Sensor Platforms' Open Sensor Platform project, an open source framework for developing sensor systems (sample timing is critical!).
Transcript
I'm Elecia White.
Welcome to Making Embedded Systems, the show for people who love gadgets.
This week, Dr. Kevin Shaw is here to talk about sensors and MEMS and motion.
Hi, Kevin. Thank you for joining me.
Oh, you're very welcome. Glad to be here.
I saw you speak at the ST MEMS conference last fall.
It was about the Internet of Things.
Yeah.
And then about a month later, I saw you at the MEMS executive shindig in Napa.
That was enabling a smarter world.
But, you know, those conferences are times when everybody's networking and being on their best behavior.
We all smile.
Right, exactly.
I'm happy to have you just to chat.
Sounds good. I'm glad to be here.
What's your background?
Well, you know, I think mostly it's about MEMS. I started in 1990 and went to Cornell to study this new world of microelectromechanical systems.
That was in the days when the cover of Scientific American
had a picture of an ant holding up a small gear,
and that was really cool.
So I got caught up in that,
and we started doing a next-generation MEMS process design,
finding new, really low-cost ways to make mechanical structures.
You know, it's sort of like machining that we do in the big world,
but now we can do it on micron-level resolution.
And interestingly, we can use this really amazing material called silicon.
I mean, we hear about silicon all the time for semiconductors.
It turns out it's a phenomenal mechanical material with a lattice very similar
to diamond. So it's very strong, very brittle. Well, it is brittle, but it's also very strong.
And so you can make great mechanical structures with it.
And these are like the canonical diving boards associated with accelerometers.
Yes, exactly.
And it turns out the mechanical structures you can make are outstanding for inertial sensors like accelerometers or even gyroscopes.
That's the tuning forks for the gyroscopes.
Yes, yes.
Do you see any new MEMS technologies coming along?
Not the process, but the sensors.
Right, right.
We know there's hundreds of different ideas and designs that go around with it.
From a sensor perspective, inertial sensors using MEMS, the accelerometer, the gyroscope, have been very powerful.
But we've also seen pressure sensors, barometers, microphones.
Those are also using sort of MEMS technologies. They're using sort of like drum heads, if you will, if you think of a
drum that you use in music. Same sort of a diaphragm technology. But then you also have
some of the things like bolometers, where you measure infrared absorption. So you can measure
infrared heat by making MEMS structures too.
Why infrared? And could you do everything? Is this how I'm going to get to my mass spectrometer?
Well, actually
I've seen some really cool new camera
spectrometer technologies just coming out in the past couple months too.
But a bolometer is really nice because it lets you
hold up a diving board with basically a thermal sensor on it with an air gap underneath it.
So it absorbs infrared light that turns it to heat energy.
And you can really just measure temperature as an indicator of infrared absorption.
Okay, and you didn't mean an actual diving board.
This is back to...
Yeah, tiny little 10 micron trampolines, if you will.
Okay, walk me through it again.
I got stuck on the diving board and I was trying to figure out...
So think of a trampoline that you put out in your backyard.
It's held up by springs, but there's a big air gap underneath it.
The nice thing about that is you have basically insulation from all the air sitting around it.
If you put a thermal sensor on top of that, it can absorb heat energy, infrared for example,
and use that as an indicator of amount of infrared absorbed.
But if you put that on top of something like asphalt concrete,
you measure the temperature underneath it,
and it tends to obscure the signals from different pixels, if you will. And so having
thermal isolation between hundreds of little pixels of diving boards or trampolines gives you
pixel-based resolution.
I know that probably didn't make sense at all.
No, no, I'm thinking about how, all of the time, the pixels end up being different sensitivities on CCD cameras and how that's a thermal problem as well as a noise problem.
Very similar, yes.
And so I'm sort of getting what you're saying.
Yeah, yeah.
And so, yeah.
But now you work at Sensor Platforms.
Yes, yes. When I finished doing my MEMS work, I went to a small company called Kionix,
and we did put MEMS into high-volume production, which is a real kick.
It's fun to bring up a MEMS fab.
I spent a decade doing that and then switched over completely
and went up a level on the ladder and started doing algorithms and software for MEMS.
In the meantime, a really interesting gentleman named Steve Jobs decided to put an accelerometer in a phone.
Crazy guy that he was.
And then suddenly everybody had to have one.
And here we are.
I think those Wiimotes get credit from me more than Mr. Jobs and the phone.
That's true.
The first time I could say, well, you know how your Wiimote does this.
And they're like, oh, inertial's not totally boring.
I've got kids at home that are jumping up and down on the sofa right now.
Actually, as we speak, it's a day off.
And so they're playing on the Wii U right now.
So you went to Sensor Platforms.
Yes.
And you're CTO there.
Yes, yes.
And what do they do?
Well, you know, it took me a while to figure out what a CTO does.
It's probably the coolest job on the planet, but I'm somewhat biased.
My job is I get to understand technology, talk to people who understand technology and explain to them some of what we've seen in the ways of technology.
But I also get to talk on the business level.
There's a lot of people that want to do something really cool. They have an idea what they want to do, but they're not really sure how to do it. And so I get to sort of have one foot
on the engineering side and say, I understand what can be done here, but also a foot on the
business side to say, well, you know, if this is what you want to do, if this fits your product,
these two can be done together. And so it's sort of bringing the two together, and I really enjoy that.
It's communications as well as technical and all of the other pieces.
Which is really odd, because I'm an engineer, and I'm not really supposed to be able to communicate.
Yeah, I keep hearing that theory, but I keep meeting some pretty articulate engineers.
I'm not sure that that's really going to stick forever.
And so you do go to a lot of conferences.
I actually really enjoy that because I get to talk with people about technology and what they're doing.
I get to tell them about what we're doing.
And then that brings people up who say, well, can you do this?
And those are the best questions. You know, somebody
has some harebrained idea that turns out to be really useful. And so that's how you get great
ideas. You know, somebody says, can you do this? And you're like, wow, I never thought of it. I
think we can. And those are the best conversations to have.
Those are tough. I think I'm more open to the, that's a great idea, let's think about it. But at first, there was a time in my career when all new ideas I just said no to.
And I think that that is an interesting engineering question, because when I was working to bring up the MEMS fab at Kionix, I started to get that attitude. I had a lot of tools that were brought
up, etchers and wire bonders and dicing tools and whatnot. And after you've tried a few things and
you know what doesn't work, when somebody else comes up and asks you again, it's very easy to
say, no, I've tried it. It won't. Done. Period. You can't do it. But that's sort of really close-minded.
On the one hand, it makes you feel good, like, well, I have an answer.
Yes, I have an answer.
I have an answer.
That does feel good.
And I'm the knowledgeable one.
Until somebody ignores you, goes and does it anyway, and you realize you did it wrong the first time,
that sort of humility is just really tough to bring on.
And so, I understand exactly what you're saying there.
It's very easy to get in the no world where I've done that, no, you can't do it until someone does it anyway.
Yeah.
Your role has changed.
Has it always been in management?
No, no.
I started off in engineering, and as I say, I crossed over to the dark side.
At sensor platforms?
I think my first crossing over to the dark side started at Calient.
Kionix was bought in 2000 by a MEMS optical mirror company.
Not sure if you remember those heady days when Telecom was going to take over the world.
Well, we got bought by one of those companies.
And so at some point there,
I switched over to business development, where somebody just said, what do you want to do?
I'm like, I've heard of business development. It sounds like fun. I don't know what it is yet.
And so I did that for two years. And I discovered that I really had no idea what I was doing,
that I better go to business school and figure out if I was doing it right.
I got lucky. And it turns out I was reasonably close,
and so I enjoyed it and kept doing it.
But that was about the time I think I sort of really crossed over.
So you did an MBA?
Yes, yes, I went to Stanford for their Sloan program.
Oh, neat.
So technically it's a master's in management, but, you know, it's pretty close.
Yeah, and, I mean, MBAs are a lot about connections,
and that Sloan program is a lot about connections.
I loved it.
It was so interesting because I went in there thinking, well, I want to meet another bunch of engineers and a bunch of finance types.
And they had so many interesting people from nonprofit and different worlds that I'd never spoken with before: accounting, and people that made steel rebar.
There's some people there from Cemex.
And there were fascinating different worlds, what they brought to the table of how they solve problems.
And it was just a really interesting conversation to spend a year with them.
Yeah.
I bet it was.
Wow.
That does sound fun.
Oh, it's funny.
We had one guy that was a CIA operative, and that was really cool.
He had some great stories.
You only had one guy you knew about who was a CIA operative.
Exactly.
He could admit it.
So getting back to the technology, so MEMS and process, which I kind of want to spend the whole show on.
Oh, it's fascinating stuff.
It's so cool.
You can make tiny little widgets and put them on the hands of ants.
And then you can... sensors are neat. When are we going to get to actuators?
Well, yeah, it's funny.
Just last Monday, I was giving a lecture at Singularity University on MEMS,
trying to explain in 30 minutes everything that people needed to know about MEMS.
It was somewhat compressed.
Somewhat.
But actuators don't work well with MEMS because they don't generate much force.
There's only two things you can really move with MEMS.
They can move themselves, since the masses are so small.
But they can also move photons. Again, you've got the small mass thing going for you.
So moving things beyond that is really tricky.
Ant legs don't generate much force, but they manage to move a lot of stuff
around. They do, they do. And, you know,
they move themselves around. And so we can do that with MEMS.
But the other problem is we still haven't figured out, you know, muscle systems.
In biology, muscles work really well over large translations.
Large translation actuators are actually really hard.
And MEMS tend to use capacitive forces, which are great over short range.
Early on, you know, I was working on an XY translation stage for a DARPA MEMS device.
They wanted to do data storage, have a large, well, what seemed large,
100 micron by 100 micron field of dots, you know,
with 50 nanometer dots that would be used for data storage.
But, you know, try designing a 100 micron translation stage in X and Y,
and it's actually really hard.
And so, you know, we came up with some techniques,
but there's a lot of limitations.
Transducers or, you know, actuators for MEMS have been a tough field.
So, back to Singularity University.
That's over there at Moffett in Mountain View in California.
That's been really interesting.
It does some great stuff there.
It's a strange-ish place.
There's futurists.
Isn't it great?
And the singularity is the Vernor Vinge singularity.
Yes.
The basic idea is, there's a great book called Abundance by Peter Diamandis, sort of the harbinger of the whole thing,
where the question was, humans are really good at linear.
If you ask me to extrapolate out 10 years, 20 years, I'm really good at that as a human.
But you ask me to understand things that exponentiate, and I'm really not good at that.
And so asking people to imagine how the world can exponentiate with, you know,
how do you imagine a technology that goes from 10 million smartphones to a billion in a couple of years?
Those numbers are just too big.
And so they wanted to design a university that was all about, well, what are the things
that are likely to exponentiate?
Let's imagine what that world looks like.
And oh, by the way, it'll be a lot easier when suddenly it happens.
And there's a lot of technologies that are doing that.
And MEMS is probably in that list.
Yes, yes.
I mean, I was making the argument on Monday.
It's, you know, we think, oh, okay, you know, there's probably a lot of smartphones out there.
Well, yeah, they sold a billion smartphones this year.
That's a billion with a B.
That's a lot of units.
Is that a lot?
Well, I only have one handset with me at any given time.
I only got, you know, one hand to hold it and the other hand to punch buttons.
Okay. So only a billion? Well, there's 7 billion people on the planet. But what about wearable devices? I've already got one on my wrist here. I got a Misfit here that's got an accelerometer.
I can have Google Glass on, a camera and MEMS on my face. I could have one in my shoe. I could have an insulin pump on my side. I could have a heart rate monitor.
So I could have easily two, three, five wearables on me.
Okay, that's a multiplier. But now what if I go to Internet of Things?
We were already at the Internet of Things there. Oh, we are, and that's what people don't
realize, is we're surrounded by them. It's just not as many of them are wireless
as they're going to be in a few years. Well, and I find a lot of them to be quite
annoying to use. Yes, and I think it's going to be
well, that's a whole other topic about how interfaces have to really vanish.
You know, if you're going to be surrounded by a hundred things, I can't go up and start tapping on them.
They have to really sort of invisibly disappear into my life.
And configuration.
And how configuration battles security.
But I believe that's an entire rant that I have.
Oh, indeed, indeed.
Yes, security is always a much longer conversation.
But, you know, so if Internet of Things is, you know, a hundred of those for each of us, which is not unreasonable, then take your approximately 10 billion people, you're
up to a trillion sensors.
And we don't have enough fabs to make that many sensors in the next 10 years.
So the number of sensors, as you suggest, is going to exponentiate really easily.
Exactly.
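The back-of-the-envelope arithmetic behind that trillion is simply:

$$ \underbrace{\sim 10^{10}}_{\text{people}} \times \underbrace{\sim 10^{2}}_{\text{connected things each}} \;=\; 10^{12} \ \text{sensors}. $$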
But how are we going to get there? I mean, interfaces need to vanish. That's
great to say. Do you have some advice there or just?
Well, you know, it's interesting. I've got the Misfit on my wrist here, which I tap it and
little lights come on and it glows and makes it easy to understand. But I don't have a screen on it, really. I've pulled
the interface onto my phone. My Nest, I have a Nest thermostat at home. And it's got a pretty
little screen, but it's also easy to just, you know, I don't have to really talk to it much.
I don't need a keyboard. We're getting to the point where a lot of these devices are really
only one click or two click. And if I need to do something fancy, well, I pull it over to my phone.
So the interfaces on a lot of these things are, to some extent, becoming either vanishing or somewhat non-existent.
And maybe that's the way it should be.
I can agree with that.
Some of it, you know, getting up on my pedestal for a moment, is you need to have really cool software that intuits things and identifies what you need.
And so you don't have to spend much time telling it.
Absolutely.
I mean, think of all of the Internet of Things you listed.
Your Misfit is a pedometer.
I mean, that's its core functionality.
Yeah.
And it used to be if you wanted to have a pedometer that you could track your exercise over months,
you needed to look at it and write it down on a piece of paper or into an Excel sheet or however you wanted to keep the data.
And now that part happens magically.
Yes, magic is one of my favorite words.
This is the part where magic happens.
Yes.
That old Far Side cartoon.
Magic happens here.
Yes, I mean, this thing magically, you know, a couple times a day has a Bluetooth chat with my phone, uploads the data, and just keeps counting along.
But, you know, it's cool as it gets, you know, sleep patterns and things like that, which I never thought were interesting until you start taking a look at it.
It's like, oh, you know, I sleep better, you know, if I do this or do that.
And so, you know, understanding when I go on my morning run, I was most humored.
I'm not sure if you made it to Mobile World Congress.
I was wondering why my feet hurt until I looked at my Misfit, which told me I had walked 13 miles that day back and forth between meetings. It's kind of gratifying to say, oh, well,
yeah, maybe I will make the walk to the Starbucks instead of just going to get coffee down the
hall.
Yeah.
Yeah.
There is some, oh, I'm not, my knee hurts because I did all of these things, not just
because I'm getting old.
Yes.
And I probably better pay the extra for the really comfy shoes since I do a lot of walking anyway.
Yes, yes.
I discovered that at CES after walking eight miles that it was a good idea to get the better shoes.
See, Christopher?
All you needed was better shoes.
He's still mad about the time I walked him into the ground in Washington, D.C.
and he clocked up way more steps than he ever had.
Are you saying she's tougher than you?
Good answer.
Good answer.
So, how does the Internet of Things relate to what you do with your job?
Well, I mean, Internet of Things is basically saying, you know, I can sense what I can
see, what I can touch, or what I can hear. Internet of Things says I'm going to take those
sensors and distribute them all over around me, things that are out of sight, out of touch,
out of hearing. And I can pull in information from all those distributed sensors in some useful way,
some useful bit of information
for me. So I can, you know, have a car that will tell me, well, you know, you've only got two miles
left on the tank, even while I'm sitting in the house or, you know, worrying about that. I need
to go drive somewhere and it snows. I've got a 15 mile drive to my meeting, but I've only got six
miles of gas. Why is it not communicating that information to me?
Why doesn't it add it to my calendar and say, leave 15 minutes early so you won't be late?
Yes, yes.
And that's a whole other rant about how the automotive industry drags its feet.
But the Internet of Things is saying, I'm going to distribute sensors around me in a way that's useful.
And an Internet of Things device is fairly uniform across all of them. I have a sensor,
I have some sort of computational core, usually have a battery, and I'll have some sort of
wireless communications, and then I have some software to wrap it all together. So they're
pretty much all the same. It's just what is the sensor and where is it located? And how many sensors?
Oh, yes, yes. There's going to be more sensors. And which wireless
protocol are you talking? That one changes which
processor you're talking to.
I think that's going to be a really interesting one because power
dominates. I think that brings us back to the embedded conversation that this all seems to wrap around.
Low power really makes a difference.
I wear this Misfit on my wrist for a reason. I could get one of the other brands that will last a week, and I'd have to recharge every week.
But this thing goes four to six months.
That's a huge difference.
That's got a coin cell.
It has a coin cell.
So it's not lipo.
It's not rechargeable.
That's correct.
But instead of 30 milliamp hours in a Jawbone, it's 125.
But I'm still getting way more than the 10x battery life that that change would allow.
So I may have more battery,
but I'm getting significantly longer life.
And I attribute that, you know,
I think most of the devices are using similar accelerometers.
You know, most of your accels are around 10 microamps
continuous rate these days.
She's smiling.
I'm quoting power numbers on your show.
I was smiling because I work extensively with Fitbit.
Oh, okay, cool.
Okay, oops.
But I haven't mentioned them yet.
Well, that's why I know that they do rechargeable.
Oh, yes, yes.
And there's a lot of merits to the rechargeable system.
But a great example is I got two of them from my parents for Christmas.
Yeah.
You know, after four rechargings,
I know they'll never take it off the charger.
And that only gets me to four weeks.
But here I can get, you know, almost half the year
on one battery.
Well, I just saw my mom over the weekend
and she's still wearing it.
She's happily using it.
And so it was still going from Christmas on the same charge. That makes a
huge difference in usability. And that speaks to the clever
design. They're all using the same sensors. But also
it came down to clever software. How can you do it as efficiently as possible?
That is a big part of
many of my tasks with Fitbit and with other people, that optimization step that I felt was so important 15 years ago because nothing fit on these little microprocessors.
And now, you know, everything fits on these little microprocessors, but optimization is important again.
What do you mean?
Because you don't actually want to be using the microprocessor.
You mean because I have 512K on this core, I can just fill it up?
Well, yeah.
But you should do it with a lookup table, so you only have to be awake for a second.
A second?
That seems like a long time.
A microsecond.
No, no, no.
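To make the lookup-table idea concrete, here is a minimal hedged sketch in C; the table, its size, and the sleep intrinsic are illustrative assumptions, not anyone's shipping code. The expensive function is precomputed at build time, so each wake-up is a single indexed read.

```c
#include <stdint.h>

/* Illustrative: 256-entry sine table in Q15 fixed point, precomputed at
 * build time so the core never runs the real math function at runtime. */
extern const int16_t sine_q15[256];

/* Phase 0..255 maps to 0..2*pi. One memory read; awake for microseconds. */
static inline int16_t fast_sine(uint8_t phase)
{
    return sine_q15[phase];
}

void on_wake(uint8_t phase)
{
    int16_t s = fast_sine(phase);   /* look up...                          */
    (void)s;                        /* ...use the value, then sleep again: */
    /* __WFI();                        wait-for-interrupt on a Cortex-M    */
}
```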
So, and I think you have hit on a very important point.
A couple years ago, people showed up and said, we have 8K of memory.
Do you fit?
You know, of course, the answer is, what is the right answer?
Well, of course we do.
Now we have 512K in a lot of designs that are coming down to us.
And they're saying, do you fit?
Well, of course, but we're also saying, but we don't need that.
If our code is efficient enough,
we don't need the space.
We tighten it up.
We're not using MATLAB-generated code.
Yeah, we write a lot of algorithms in MATLAB,
but that doesn't mean we just port to C from MATLAB
and assume that it'll be the most optimal.
You do have to go in and hand tune
and you have to convert to fixed point.
You have to go for the lowest power solution.
And those just govern everywhere.
Yes.
Fixed point.
No floating point.
Isn't fixed point evil?
No.
Fixed point is wonderful.
It's beautiful.
It's better than floating point.
And, you know,
that's a really interesting difference.
A lot of people come up and say,
well, I've got an M4 here with a floating point unit.
Do you need that?
And I'm like, I only need an M0+.
Any questions?
Yeah.
And they're like, but we paid for this?
I'm like, you didn't need to.
Well-designed code will fit in an M0+,
and consume significantly less memory.
And less power.
Yes, yes.
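As a hedged illustration of what fixed point means in practice (a sketch, not Sensor Platforms' library): store fractional values as scaled integers, Q15 here, so a core without an FPU, like the M0+, does all the math with ordinary integer instructions.

```c
#include <stdint.h>

typedef int16_t q15_t;   /* Q15: real value = raw / 32768.0 */

#define Q15(x) ((q15_t)((x) * 32768.0 + ((x) >= 0 ? 0.5 : -0.5)))

/* Multiply two Q15 values: widen to 32 bits, then shift the scale back out. */
static inline q15_t q15_mul(q15_t a, q15_t b)
{
    return (q15_t)(((int32_t)a * b) >> 15);
}

/* Example: scale a sensor sample by 0.75 with no floating point anywhere. */
int main(void)
{
    q15_t gain   = Q15(0.75);
    q15_t sample = Q15(0.5);                 /* pretend this came from a sensor */
    q15_t out    = q15_mul(gain, sample);    /* equals Q15(0.375)               */
    return out == Q15(0.375) ? 0 : 1;
}
```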
But you're saying we, and you're saying our code, and we have yet to actually talk about what this code is doing.
Oh, context awareness, sensor code.
There's so much cool stuff happening there.
Okay, so you have FreeMotion.
Yes, yes.
That's your library.
Yes.
So we have a couple libraries.
One is sensor fusion, and we can go on at length about what sensor fusion is.
There's also something called context awareness.
Oh, not at length, but sensor fusion is when you mash multiple sensors together.
It is.
Like accelerometers and gyros to get a better solution.
Yes.
There's really two usages of it in the world.
There's a fairly precise usage that says I'm going to take usually nine axes of information.
And nine axes means?
Magnetometers, accelerometers, and gyroscopes.
Yep.
And you get X, Y, Z measurements on each for a total of nine.
Sometimes you'll bring in a tenth axis with a barometer, which gives you elevation information.
It helps constrain the solution. Or weather, depending on how excited
you are. Well, this is true. But people are usually using it for
floor level detection. Did I walk up steps or did I change
floors? So sensor fusion, as you said, brings in
redundant information to provide a more accurate
solution. It's usually coupled with something called a Kalman filter. And Kalman filters
are really an optimal set of mathematics for removing redundancy.
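For a flavor of that mathematics, here is a deliberately tiny one-state Kalman filter sketch, written in floating point for readability (a shipping version would be fixed point, as comes up later in the conversation); it is nothing like the full nine-axis filter being discussed.

```c
#include <stdio.h>

/* One-state Kalman filter estimating a scalar x from noisy measurements z.
 * q = process noise variance, r = measurement noise variance. */
typedef struct { float x, p, q, r; } kf1_t;

static float kf1_update(kf1_t *kf, float z)
{
    kf->p += kf->q;                      /* predict: uncertainty grows       */
    float k = kf->p / (kf->p + kf->r);   /* gain: how much to trust z        */
    kf->x += k * (z - kf->x);            /* correct toward the measurement   */
    kf->p *= (1.0f - k);                 /* uncertainty shrinks after update */
    return kf->x;
}

int main(void)
{
    kf1_t kf = { .x = 0.0f, .p = 1.0f, .q = 0.001f, .r = 0.1f };
    const float z[] = { 0.9f, 1.1f, 1.0f, 0.95f };
    for (int i = 0; i < 4; i++)
        printf("%f\n", kf1_update(&kf, z[i]));   /* converges toward ~1.0 */
    return 0;
}
```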
And some of you listeners may remember that we had Tony Rios from MEMSIC on just a few weeks ago
to talk about tuning your Kalman filters.
So they're all experts now.
They'll have it down.
They should all have it down.
That was definitely a show that probably could have used a whiteboard.
We were talking about hand motions.
I do not believe I could explain Kalman filters without a whiteboard.
Which actually brings, I had a question from that show, and MEMSIC doesn't license their Kalman filters, but you're talking about software and sensor fusion.
Do you license your Kalman filter?
Yes, we do.
So we can provide our FreeMotion algorithms, and that includes our Kalman filter, which is an efficient fixed-point Kalman filter.
And that can be licensed for a broad range of cores
or even just to run on Android.
But Kalmans do need to be tuned for their sensor
and for their environment.
Do you do that or do you do a general
and then suggest some...
Well, see, what we found is that there's a great deal
of uniformity within a given sensor type.
So if you take a Bosch accelerometer, an ST accelerometer, or even an InvenSense accelerometer, each of them is fairly uniform within its capability.
And we can code that into our drivers and use that to identify its necessary signature.
And that's sufficient for us for our Kalman filters.
That's cool. But what about where they go into, whether they're going into a car
or a pedometer or into a boat? Do you have to...
You know, we're finding that that's relatively robust with regard
to our implementation. So, provided someone has designed a clean
circuit board, which is not always a good assumption. We once
saw a mobile phone that ran the main VDD lines under
the magnetometer. I don't know what moving electrons have to do with magnetic
fields, but it certainly made for an interesting reading on the
magnetometer. But provided we have a clean circuit board, we're really
finding that the sensors from one solution to the next are fairly
uniform.
The subtlety actually has to do with timing.
Oh, yes. Oh, yes.
Raised eyebrows on that one.
Because when you sample each of these things, if you sample the accelerometer and then you
wait a millisecond and you sample the gyro, they aren't doing the same thing anymore.
They're uncorrelated.
They're uncorrelated. And that really plays havoc with the Kalman filter and all sorts of algorithms. Yes, yes.
It gets very interesting, what would I say, uncorrelated measurements of a correlated
signal. So timing is an
odd thing. For whatever reason, sensors long ago
decided not to report time. We usually run
them in autonomous mode, where we'll say to the accelerometer, make a measurement every 10
milliseconds and give me an interrupt when you have it. Well, we never bothered to require them to
also report their internal measured time. Which means that your latency has to be
fast enough that you get that interrupt and you can time when that data was taken.
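A hedged sketch of that mitigation, with hypothetical function names (not a real vendor API): grab the timestamp inside the data-ready interrupt itself, before any scheduling latency can creep in, and queue it alongside the sample for the fusion code to consume later.

```c
#include <stdint.h>

/* Hypothetical platform hooks; the names are illustrative assumptions. */
extern uint32_t hw_timer_now_us(void);          /* free-running microsecond timer */
extern void     accel_read_xyz(int16_t xyz[3]); /* drain the data-ready sample    */

typedef struct {
    uint32_t t_us;      /* when the data was actually ready    */
    int16_t  xyz[3];    /* the sample that goes with that time */
} accel_sample_t;

#define QLEN 32u
static accel_sample_t samples[QLEN];
static volatile uint32_t head;   /* single producer: this ISR */

/* Data-ready interrupt: timestamp FIRST, before anything can add latency. */
void accel_drdy_isr(void)
{
    accel_sample_t s;
    s.t_us = hw_timer_now_us();
    accel_read_xyz(s.xyz);
    samples[head % QLEN] = s;    /* fusion consumes (sample, time) pairs later */
    head++;
}
```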
Right. And so we should use a non-real-time operating system like Android to go down there.
Yeah, you did.
Which is exactly why we found ourselves in so much trouble with the Android world in the past several years.
That you took an embedded Linux system, you threw Android on top of it, and you asked Android to go
in there and receive the interrupt, 15 milliseconds later, apply timestamps to it, and then call that
a real-time measurement. And that has just caused us no end of difficulty and pain when it comes to these sensor fusion problems.
And I would really say that that dominates a lot of the sensor fusion issues.
Oh, is it?
I mean, is that even possible?
Oh, sure.
Or do you end up with an external CPU that does all the good stuff?
Well, which is where we are.
Apple, with much fanfare, announced their M7, which is what we...
Much fanfare. Where is the software that uses it?
Well, that's a really interesting one because, you know, obviously they built their own core,
they built their M7, and they wrote their own software. But that's really just a sensor hub.
And there's two reasons for it. One, they can turn off their application processor as much as possible.
But the other point is they can now have accurate timing.
They now have a dedicated real-time OS down in the M7, which can apply timestamps almost immediately.
Or at least find a way to apply the timestamp, then hold on to it and do something with it later.
Or fixed latency.
I mean, if it isn't completely fixed,
at least you know how much error you are injecting into the system,
your possibility for error.
And hopefully that's in the nanoseconds.
Yes, we like microseconds.
And certainly we have trouble with the multiple milliseconds we get on Android. Multiple milliseconds.
But, you know, Samsung and others have moved into the same camp.
They have their own sensor hub. The Galaxy S4 had an Atmel core there that provided sensor
hub functionality. And so that was providing a similar capability. So we are seeing the
move over to that world with a sensor hub in many, many platforms.
Okay, so you have the sensor fusion algorithm, and it runs, you've said, on multiple...
Well, we just wrote it all in C, and so it's really portable to any platform we want to.
And if somebody comes to you with an idea, I want to apply inertial to my toothbrush to make sure that it's brushing properly.
Do you suggest that they have an operating system or do you suggest bare bones because that's easiest?
That's a really interesting one.
There's certainly many perspectives on the need for an RTOS.
Speaking from our code, our code is designed to work with or without an RTOS.
You can be interrupt-driven, in which case just let the interrupts drive things.
But if you're going to have other tasks going on simultaneously, an RTOS really allows a lot of freedom.
It gives you preemptive, and you can do multiple things at the same time.
Now, if you had somebody that wanted to implement a wearable or something,
a toothbrush. Do I call a toothbrush a wearable? Well,
I'm not sure what to call it. Or that's Internet of Things. That's right.
That would be Internet of Things. Because yes, you absolutely would have to Bluetooth that to something
to make sure your kids were brushing every night.
See, I call Internet of Things wearables for machines.
Yeah.
So, you know, in that case, we would probably want to recommend the Open Sensor Platform, which is something that we have over the years had a lot of people asking and saying, I have my new product.
Can we port your algorithms to it?
And, you know, it would take a month or two to do that port. And eventually that turned out to limit our company's ability
to implement new software. So what we did is we said, look, we don't make money
porting our framework. We don't make money writing drivers. We get money writing algorithms.
So we took all the extra part and we just open sourced it. We did a press release last week
with ARM and early next month we're going to put out
on GitHub and we're going to take the whole sensor interface,
basic drivers, and a little bit of algorithm code and we're
going to open source that and just put it on GitHub and anybody who wants to use it can use it.
What kind of algorithm are you looking at?
Excuse me?
Which algorithms are you going to put out there?
That one's still not entirely determined.
I think we're going to have some basic sensor fusion capability.
I'm not entirely clear on what all those will be.
But part of the benefit of the open source community is if people want to write some stuff, they can put it there or whatnot.
But a lot of the infrastructure has really been the painful part. Sensor drivers, if you want to look at a Bosch accelerometer, building up a really good, reliable, fast driver is actually
really tricky. And so that code is going to be up there and we'll have the sensor interface itself.
And so it'll make it easier for a lot of people to bring up systems.
And then you're going to have to tell them how important it is that they not mess up your timing.
Because we just talked about how important timing was.
So putting the drivers out there isn't quite enough.
Well, if you hook it up to an I2C bus, for example, and want to pull off a name your favorite accelerometer, that timing will be driven.
That would be the MMA8245.
Something like that, yeah.
Sorry, I do have a favorite.
I just wanted to make sure everybody knew that I had a favorite accelerometer.
Well, yes, yes, yes.
I'm sorry, I interrupted you.
You were going on.
Well, we could talk about a BMA or a Kionix or the LIS.
I have a pile here.
Right now it's the MMA is my favorite.
Well understood.
So, you know, bringing up something like that,
if you tie the interrupt service routine directly to the drivers in that ability to function, I think that's going to handle a lot of your timing quite nicely because it's based on a direct interrupt service response.
And if you want to, there are certainly plenty of features in ARM cores that let you say, timestamp now.
And so depending on how fast you can handle your data,
you can get it timestamped and then deal with it later.
Very true.
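One concrete example of those "timestamp now" features, as an assumption-labeled sketch: Cortex-M3/M4 parts expose a debug cycle counter, DWT CYCCNT, readable in a single instruction. An M0+ has no DWT, so there you would dedicate a hardware timer instead.

```c
#include <stdint.h>

/* Cortex-M3/M4 DWT cycle counter registers (memory-mapped). */
#define DWT_CTRL   (*(volatile uint32_t *)0xE0001000)
#define DWT_CYCCNT (*(volatile uint32_t *)0xE0001004)
#define DEMCR      (*(volatile uint32_t *)0xE000EDFC)

static void cyccnt_init(void)
{
    DEMCR      |= (1u << 24);   /* TRCENA: enable the DWT/ITM blocks  */
    DWT_CYCCNT  = 0;
    DWT_CTRL   |= 1u;           /* CYCCNTENA: start the cycle counter */
}

static inline uint32_t timestamp_cycles(void)
{
    return DWT_CYCCNT;          /* "timestamp now": one 32-bit read */
}
```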
And I should mention, we have a close relationship with ARM,
but the open source libraries are being provided as C source.
So they can pretty much be ported to any core that you want to.
That's pretty neat.
So you decided to do this because you are hoping
you can spend more time doing algorithms
and still letting people work inside your framework.
Yes.
I mean, we don't have enough embedded engineers.
We have an awesome team of embedded engineers,
but we don't have enough hours in the week
to be able to do all the porting we want to do.
And this allows us to leverage embedded teams
in every different project everywhere.
But it also means that when they come to us,
they can say, okay, we've done this port.
Can we license these algorithms?
We're like, great, here it is. Drop it in. It works.
Boy, that's nice.
The model is much like OpenGL. OpenGL gives you a standard interface for talking to
a graphics library. Anybody can write the algorithms for
that library. It's open.
In our case, we have library implementation code that will
be guaranteed to work with that interface. If others want to write to it, we wish them the best of
luck, more power to them. But it just lowers the bar so that more people can start using sensors
for more embedded projects and more IoT and more wearable. We want to get more people doing this
stuff. And more people ending up with products that are really good
instead of realizing at the end that at the very beginning
they needed to have truly real-time capability.
And wishing they hadn't shot themselves in the foot,
but having to ship something or their company will go under, which...
You know, and wasting the six months to bring up all that infrastructure to reinvent the wheel,
it wasn't, you know, that shouldn't be the differentiating capability in a product.
You know, make some really cool stuff, a new feature, service the customer well,
and have some good algorithms.
And it's okay with you if people use this platform, the OpenSensor platform,
to write their own Kalman filters, whatever. It's done on an Apache 2
license, so anybody who wants to pull it down... That's just a thank us in your
notes license? I'm going to have to...
I'll let people read that. Definitely, you should consult your lawyers.
Actual mileage may vary. But the Apache 2 license
basically says you can pull it down, you can use it.
If you're going to make changes, you can put them back.
You have to give back to the community.
And so that's a good way to do it.
But it's not sticky like GPL where it will make all of your other code open source.
No, I certainly don't want to be offering or rendering opinions on this,
but I do not believe it has the same stickiness.
Yeah.
Check Wikipedia, at least.
Apache.
Apache licenses are pretty nice.
Yeah, yeah, yeah.
That is, I mean, I'm thrilled with that because there have been times when I've had to put together the framework.
And part of it is just I'm looking forward to seeing it so I can read how you do it,
just to see how it's different when I do it.
Yeah.
It's not like it was a lesser algorithm.
We've spent years battle testing these interfaces, using them in so many products.
So it's the actual interfaces we use.
So it's the best we had to offer people.
And you say they're coming out?
The due date is May 12th.
Okay.
And so we should have a number of, we've got a lot of hardware players that are going to offer endorsements in the meantime.
I've been explaining this to everybody, and everybody's like, wow, this is so cool.
When are we going to get it?
So we've just had a groundswell of support on this.
So we want to make sure it's, you know, the codes,
we're reviewing the documents over and over again.
We've got a team that's combing through the code,
looking for, you know, any possible confusion.
And so, you know.
Slash, slash, fix me, this may be broken.
Yes, do this better next time.
You know, those ones.
Delete those.
Yes, yes, yes.
But, you know, like I said, this is the actual code we're using on projects right now.
And so we're expecting it'll work first time.
Well, I'm excited to see it.
And I hope that it works out for you.
I know that when you open source things, sometimes it ends up biting you a bit because now everybody wants support.
And since it's open source, they don't want to pay for support.
Well, you know, that's interesting.
Because we were wondering, it's like, okay, well, now that we're going to outsource or just open source this,
we're like, is our embedded team going to be sitting here?
We're going to have crickets in the background as nobody needs that anymore.
I was talking to a customer just the other day.
I was like, oh, we're open sourcing all this.
You can just download it.
You set it up.
We'll send you the binary.
There was a silence on the line.
They're like, yeah, we want you to do it anyway.
Exactly.
Like, okay, all right.
Yeah.
Okay, so we have, we've talked about a couple of your products, although one of them you're not selling. I don't know if we're going to call it a product there.
Yeah, yeah, yeah.
What else? So FreeMotion was the other one?
So FreeMotion is actually broken up into four levels, or four products. One is sensor fusion.
Okay, so we've talked about that one. The next is context awareness.
And I'll just list the other two.
So there's power management that allows you to gate certain functionalities in a mobile product based on your activities.
And the fourth one is pedestrian dead reckoning.
And indoor navigation is a whole topic in and of itself.
And therein is also context awareness.
So pedestrian dead reckoning.
I remember we talked to Movea a bit.
And they do some of that where they're trying to take a position basically from GPS.
And then you go inside, you lose GPS and you integrate your accelerometers twice and your gyros once and you hope your magnetometers aren't too bothered
and now you can tell where you are in the mall. Sounds really easy
doesn't it?
Sure, let's just code that up right now. I mean, I took
physics at one time. It just sounds pretty trivial.
Except that the noise and the integration, it all goes really bad there.
Oh, that noise. Yes, yes. Yes, noise.
It's not to trivialize the algorithm. I shouldn't, because it is really
non-trivial. But is that about where you're headed?
So the core part of that is actually the sensor fusion.
The Kalman filter tells you, the problem with dead reckoning
is that, you're right, if I just knew dynamic acceleration,
I would just integrate twice and I'd be done. Poof.
Poof. I love that. That's great. Poof. The problem is
my accelerometer measures dynamic acceleration in the body frame
and gravity in the world frame.
They're separated by a rotation that I don't know.
So sensor fusion tells me that rotation.
The only reason I need that is to knock off gravity.
Gravity, as it turns out, is a huge acceleration.
You know, the falling through the floor type thing.
So I have to remove that from my integration or it will think that I'm
sinking.
All the time.
Actually, the negative of acceleration is reported as gravity.
So it actually says you're flying off.
So I have to knock that off, and that's really the difficult part.
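A compressed sketch of that step, assuming the fusion stage has already produced a body-to-world rotation as a quaternion; the names and the sign convention here are assumptions, not a particular library's API. Rotate the accelerometer sample into the world frame, then subtract gravity before any integration.

```c
typedef struct { float w, x, y, z; } quat_t;   /* body-to-world rotation */
typedef struct { float x, y, z; }    vec3_t;

/* Rotate body-frame vector v into the world frame: v' = q * v * q^-1,
 * using the expanded form v' = v + w*t + (qv x t), with t = 2*(qv x v). */
static vec3_t rotate(quat_t q, vec3_t v)
{
    vec3_t t = { 2*(q.y*v.z - q.z*v.y),
                 2*(q.z*v.x - q.x*v.z),
                 2*(q.x*v.y - q.y*v.x) };
    vec3_t r = { v.x + q.w*t.x + (q.y*t.z - q.z*t.y),
                 v.y + q.w*t.y + (q.z*t.x - q.x*t.z),
                 v.z + q.w*t.z + (q.x*t.y - q.y*t.x) };
    return r;
}

/* Dynamic (linear) acceleration: go to the world frame, knock off gravity. */
static vec3_t linear_accel(quat_t q, vec3_t accel_body)
{
    vec3_t a = rotate(q, accel_body);
    a.z -= 9.81f;   /* assumes the sensor reports +1 g on the up axis at rest */
    return a;
}
```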
Now for indoor navigation,
exactly as you described it:
pedestrian dead reckoning has to have a starting point. It only knows a
relative change. So I will need GPS to get into the building or up to the front of the building.
Once I walk inside, the GPS errors will accrue. And so I'll have to lean on the pedestrian dead
reckoning. Once I do that, I'm depending
on three things. One, I've got to be able to detect footfalls. Once I can detect those,
I can then measure my stride length. Once I know my stride length, and I also thirdly know what
direction I'm going, I can sort of vector some, all of those steps together, one after another,
and I can walk my way through
the shopping mall.
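In code form, those three ingredients (footfalls, stride length, heading) reduce to summing one vector per detected step. A minimal sketch, assuming footfall detection and stride estimation already exist upstream:

```c
#include <math.h>

typedef struct { float x, y; } pos_t;   /* east/north, metres */

/* Pedestrian dead reckoning: on each detected footfall, advance the
 * position estimate by one stride along the current heading. */
static void pdr_on_step(pos_t *p, float stride_m, float heading_rad)
{
    p->x += stride_m * cosf(heading_rad);   /* east component  */
    p->y += stride_m * sinf(heading_rad);   /* north component */
}

/* Usage: seed *p from GPS at the front door, then call pdr_on_step()
 * once per footfall as you walk through the mall. */
```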
Oh, that's not anything like what I was, I mean, okay, steps, counting steps to figure
out pedestrian dead reckoning instead of using only the sensors.
Well, you know, you are using only the sensors.
What is different is I'm not using dead reckoning.
And if I took dead reckoning, you have already mentioned,
I have to integrate the acceleration twice to get position.
Then I have the gyro, which gives me angular rate.
I have to integrate that once.
So I have three integrals.
Every time I integrate white noise, my variance goes up one more degree.
So I really have an error that's going up as time to the third power. Third power dependence on time
means within seconds, my error diverges beyond usefulness. And this is why dead reckoning is
really, really hard.
Especially with crummy gyros that drift so fast.
Yes. You know,
though I will argue that some of the noise densities in some of them are phenomenal.
You know, I can get five milli-dps per root hertz off the InvenSense one, which is really
pretty phenomenal.
But it's not military grade.
It's not tactical grade.
It's not navigation grade.
Right.
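The divergence being described can be written down directly. For accelerometer white noise of spectral density $\sigma_a^2$, one integration gives variance growing linearly in time, and the double integration to position gives

$$ \operatorname{Var}[p(t)] \;=\; \sigma_a^2 \int_0^t (t-s)^2\, ds \;=\; \frac{\sigma_a^2\, t^3}{3}, $$

the "time to the third power" growth mentioned above, while the single-integrated gyro heading has variance growing only as $\sigma_g^2 t$.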
So what we're doing is applying constraints to the solution that allow it to be usable. Pedestrian dead reckoning
requires pedestrian-ing. Pedestrians.
I love it. Let's turn it into a verb. Pedestrian-ing.
So it doesn't work well in a car.
It has trouble if it's on your wrist and you're pushing a shopping
cart. There are cases where the constraints don't work well.
But if I'm walking my way through a shopping mall trying to get to the shoe store, it's very good, provided you have the right algorithms.
Wow, that's neat.
I haven't really thought about that sort of dead reckoning.
I've always been with vehicles before.
And a related part of that is we can also add another constraint, and that's the magnetometer.
And that's where a lot of people have a lot of trouble.
Magnetometers measure where the Earth's magnetic north field is.
Well, they also measure where the doors are.
Indeed.
And that's what confuses a lot of people.
A lot of people say, well, if I walk by the mailbox, you know, my magnetic heading skews around.
It must be useless.
Well, no.
If you're really clever with your code, it turns out there's information there.
If I'm walking through a shopping mall, say it's got
giant steel girders holding up the roof, there's actually still a surprising amount of information.
See, my attitude for my gyro will drift. I have a single integration going on there,
so my heading error will go up linearly with time. So it will undoubtedly go bad. So if I have something else to hold on to,
some anchor, then I can hold my error and keep it down. Well, it turns out we've been really good
at building magnetometer code that holds onto that, which lets us rely on the gyro even less.
Yeah. Those magnetometers are really useful for keeping the gyros honest.
You really have to depend on them.
You can't hold on to your attitude over an hour in a shopping mall without it.
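One common way to build that anchor, sketched here as a simple complementary filter rather than the full Kalman machinery (an illustration, not Sensor Platforms' code): let the gyro carry the heading between samples, and continuously leak a fraction of the magnetometer disagreement back in, which bounds the otherwise linearly growing drift.

```c
#include <math.h>

#define TWO_PI_F 6.28318531f

/* Blend a drifting gyro heading with a noisy but unbiased magnetometer
 * heading. alpha near 1.0 trusts the gyro short-term; the remaining
 * (1 - alpha) continuously pulls the estimate back toward the anchor. */
static float fuse_heading(float heading, float gyro_rate_rad_s,
                          float mag_heading, float dt, float alpha)
{
    float pred = heading + gyro_rate_rad_s * dt;           /* smooth, but drifts */
    float err  = remainderf(mag_heading - pred, TWO_PI_F); /* wrap to +/- pi     */
    return pred + (1.0f - alpha) * err;                    /* anchored long-term */
}
```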
Okay, and then you mentioned barometers,
but I'm not seeing barometers as all that useful in the shopping mall,
except maybe going up the escalator to get to the second story.
Well, I mean, you're handing me a softball pitch there, aren't you?
Well, I was actually hoping you were going to, and then you use this other nifty, but
no, I don't think you're going to come up with another sensor here.
Alas, no.
But, you know, well, actually I can come up, we can talk about beacons if you want to,
but barometers measure air pressure.
And it turns out the new ones can measure the air pressure difference of about a foot of height.
So if I raise my hand one foot up, there is actually an air pressure difference between those.
Modern barometers can measure that.
So I actually can count the steps as I go up the escalator or up a stairwell.
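The conversion behind that floor counting is the standard-atmosphere barometric formula; a hedged sketch, with the constants coming from the international standard atmosphere and p0 being the local sea-level reference pressure:

```c
#include <math.h>

/* Altitude in metres from pressure in pascals, international standard
 * atmosphere model. p0 is the sea-level reference, typically ~101325 Pa. */
static float pressure_to_altitude_m(float p_pa, float p0_pa)
{
    return 44330.0f * (1.0f - powf(p_pa / p0_pa, 0.1903f));
}

/* Near sea level the gradient is roughly 12 Pa per metre, so a barometer
 * resolving a pascal or two can indeed see foot-scale height changes. */
```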
And if you want to mess with your Misfit, I bet you can go up an elevator and take steps at the same time,
just march in place, and you'll get credit for all of those floors.
Indeed.
Not that I recommend it.
Just go in there and hack it.
Except you want to cheat.
Yes, exactly.
So are you adding auxiliary information with your platform, Sensor Platforms?
Are you integrating all of this information with the mall map in an Android app, or do you not go that far?
Well, you know, that's a fused location provider, and that's really not going to be our expertise.
You know, we're not going to pull in the map data.
Now, I do think there's a couple of pieces you need to do a full indoor navigation solution.
I count about five pieces.
You're going to have to have some sort of GPS to get to the front door.
You're going to have to have some sort of mapping system for inside the mall. Because if I
don't know that I'm standing in a Starbucks, then it's pretty useless to have my XY coordinates.
I need to know where I'm going or where I am. Thirdly, you'll probably have some sort of
Wi-Fi based positioning. It still provides a reasonable level of 10 to 15 meter accuracy.
And then I think the last two parts are going to be pedestrian dead reckoning and beaconing.
Beaconing seems like it should be able to eliminate all of the others.
You would think so.
I think there's two reasons why not.
One, they're not that accurate.
And two, they're going to be applied sort of ad hoc. So for example,
if I'm walking down the hallway of a shopping mall, I'm not going to have beacons enough
throughout the hallway of the main mall. Oh, you don't think each store will have a beacon?
They will, but there won't be enough to give me the resolution. So pedestrian dead reckoning can actually follow my steps
as I walk down the main hallway,
turn right around the fountain,
and then turn into my Starbucks.
But I will have accumulated some level of drift.
Pedestrian dead reckoning is really good,
but the error will accumulate.
At the moment that I walk up to the cash register of the Starbucks,
the beacon at the cash register, and there'll probably be one in most stores, you'll probably have more in others,
but just take the example of one beacon at the cash register. As I go up to that, I can now
effectively correct my error by being close to that beacon. So I think the whole system is going
to have to work in concert to sort of hold everything together.
See, I'm not sure you need GPS if you can beacon the doors individually as you come in.
Certainly, that's a very good way to do it.
I think I'll still end up having GPS when I'm outdoors, though.
It's gotten so easy.
I know, it sounds so easy.
Just these satellites flying around 22,000 kilometers over my head.
Once again, I'm going to say, those signals come from space.
Isn't that cool?
That's why I could wear an aluminum hat on my head, just in case.
So, which of these run on a small processor and which of these run in Android?
Well, you know, you can do both.
It's the only difference is power.
Android's a power hog.
Well, the application processor is a power hog and Android's heavy enough that it needs that level of power.
So you can run all the algorithms we've spoken of. I mean, some GPS
algs do run in the application processor. They only use a tracker in the GPS chip, and then the
nav algs run in the application processor. So everything can run there, but we're seeing a move
towards autonomous background processing, the so-called sensor hub, where the AP, the
application processor, gets turned off all the time. And I have other cores that run these functions
in the background at low power. So the sensor hub is really sort of finding itself in that role
where it becomes the always-on capability. Again, the same algs can run one place or the other.
It's just a question of which is more efficient
if I'm going to run it all the time.
And if you are only running your navigation,
your proper, I mean, you're collecting data
on a sensor hub, a small processor,
and then you're only going to run your navigation
when they turn on the screen,
well, then your navigation algorithms can run
on the same processor that controls the screen
because it has to be on anyway.
With GPS, which pretty much can be stateless,
that is, I can just turn on GPS,
and yeah, there's a time to first fix,
but assuming it's a warm boot, a warm fix,
I can pull that up pretty much instantly.
I pull up my phone, open the maps, and poof, I have my GPS.
But PDR is stateful.
I have to integrate or sum up my position over time.
So I can't just run them when I turn the phone on.
I really have to be able to track that over time.
Okay. So your algorithms probably, given my screen, non-screen version, you'd probably run them in the smaller processor.
There's a lot of merits to that.
So you say sensor fusion, and you say it as though it is a separate processor.
But Android has a module subsystem that is about sensor fusion.
Indeed.
It comes with sort of a basic capability.
They've had that out since Gingerbread, I think.
Yeah, I remember Jen Costillo, who's a regular guest,
has given several talks about that.
So that's where all of my knowledge about Android sensor fusion comes from.
And so it provided that capability.
We found it very easy to break it.
And so we found we ended up, we had to write our own.
I often talk about sort of two generations of sensor fusion.
There's the sensor fusion we had in Gingerbread, where we only needed to play games.
If I'm playing Grand Theft Auto on my iPad or on my tablet,
that's good enough.
But now that I'm doing navigation,
navigation-grade sensor fusion is a whole different beast.
It looks similar, but it has requirements on it that are so extreme,
it really requires almost a rewrite of that capability. So I call that sort of next-generation
sensor fusion or navigation-grade sensor fusion. Do you think Android is working on it?
I'm not sure they see the need for it. They have a lot of vendors that are providing great capabilities. InvenSense, for example, provides a lot of these capabilities.
And ST provides them as well.
Bosch provides their own capabilities.
And of course, Sensor Platforms provides that as well.
So I think they're seeing enough diversity of solutions that I think they don't see the need to do that.
So there was one more.
I had fingers and I was counting them.
There was power management.
We kind of touched on power management.
Certainly.
But you mentioned context awareness as being above sensor fusion.
Or maybe you just listed them and that was where it went on my finger.
Yeah, I did.
They're parallel.
Sensor fusion requires a lot more computation and also requires more sensors.
Context awareness could use all of those sensors.
There's really no reason not to.
There's more information from them.
The problem is power.
You know, I have the wearable on my wrist which, you know, tracks my sleep, counts my steps, you know,
knows my activity through the day. I don't need some of those sensors for that. This uses an
accelerometer only. And it turns out, using just the right sensor, an accelerometer, I can divine,
if you will, I can determine a lot of your activities, your context, throughout the day,
and that whole class of context awareness is turning out to be really fascinating.
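A toy flavor of accelerometer-only context detection, hedged heavily since real classifiers are far richer than this: threshold the variance of the acceleration magnitude over a short window to separate still, walking, and running. The thresholds are illustrative assumptions, not trained values.

```c
typedef enum { CTX_STILL, CTX_WALKING, CTX_RUNNING } context_t;

/* Classify a window of n acceleration magnitudes (in g).
 * Real systems train these thresholds per device and add many features. */
static context_t classify(const float *mag, int n)
{
    float mean = 0.0f, var = 0.0f;
    for (int i = 0; i < n; i++) mean += mag[i];
    mean /= (float)n;
    for (int i = 0; i < n; i++) var += (mag[i] - mean) * (mag[i] - mean);
    var /= (float)n;

    if (var < 0.01f) return CTX_STILL;     /* barely moving           */
    if (var < 0.50f) return CTX_WALKING;   /* periodic ~1-2 Hz swings */
    return CTX_RUNNING;                    /* large, fast swings      */
}
```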
You know, what's the old model for my phone was,
I would pull it out of my pocket and play a game.
I turned it on to play the game.
And when I was done, I went back to the home screen,
the game went away and the sensors went off.
But now we're going to this always on world where I want to have the sensors on all the time identifying,
well, when I pull it out, why doesn't it know what I'm doing?
It should know I'm standing in line at the Starbucks.
Why doesn't it?
Why doesn't it know when I sit down in my car?
I guess if my phone knew when I was in Starbucks, it would
pull up the Twitter app because that's what I do. But I walked into my Starbucks this morning
and it pulled up a little green banner and said, you know, welcome to your Starbucks.
And I just swiped it and there was my pay code, my barcode. I already find that really convenient.
So when I sit down in my car, why doesn't it know?
Oh, it's afternoon.
Actually, it's 2 o'clock in the morning.
You're probably going home now.
I work at a startup company.
Sorry, my hours are a little extreme.
So why doesn't my phone understand my world?
And what we're finding is the OEMs are saying,
if I can make a device almost a part of your life, it becomes more personal, it becomes more a part of you.
There's more stickiness to the product.
This is like when Siri tells me, or maybe it's some iPhone thing, but it will tell me how far I am from home.
Exactly.
Google Now does the same thing, where it understands the state or the context.
You have a home and identifies it and says, well, if you're pulling this up, maybe you
want to go there.
Oh, okay, it's 24 minutes.
Yeah.
And so is that what you mean by context aware?
I feel like we didn't quite define.
So context awareness is allowing a device to understand what you're doing. Now, that may be whether I'm walking, it may be whether or not I'm sleeping, it may be whether I'm driving in a car, standing in a coffee shop, shopping for jeans, understanding what it is I'm doing now, and then providing me maybe something that I need.
I can understand how, if I told my device all of this, it could be used to improve my device's power management.
I totally get that.
But that does not help my life.
What you're talking about is the device figuring out its context.
Yes.
Because that would be the only way that this really happens.
Yeah, I think in a couple years we're going to find it odd that I have to tell my phone some of the things that I'm doing.
You know, walking into a store.
Say I go into my favorite bookstore.
It should understand that I'm in the bookstore.
Why do I have to tell it? Why go through five or six clicks to be able to pull up, say, the Barnes & Noble search app?
I'm looking for my list.
Maybe Neal Stephenson came out with a new book.
I'm always hopeful.
Why should I have to go and hunt down someone to do a search for me?
I should be able to pull up my phone and it knows that I'm browsing and to be able to say, oh, here's
the browse page. You can start looking up the book that you want.
Or I'm walking towards the parking lot. The chances
I want to text my husband and find my car are both very high.
Exactly. Exactly. Because when you do exit the mall
and you start walking through a known parking lot and you left your car there, all of that is pretty easily computable.
But this brings in some pretty, I want to say creepy.
Oh, yes. I mean, Robert Scoble loves the creepy line. You know, when do we cross over that?
You know, and there's certainly concern to be had about that.
But, you know, it should be said when the telephone first arrived in people's houses, could you imagine having a phone in your house in the days of Edison?
Or the intrusion.
Oh, the intrusion.
How could somebody intrude like that?
People were living.
Oh, yeah.
We've grown comfortable with it.
I mean, I carry this phone around me wherever I go. Isn't that odd?
Well, odd is relative, I guess.
So, how are you dealing with the
privacy? From our perspective,
I don't think context should be computed in the cloud.
I know cloud is really popular and really hot, and there's amazing things that can be done with it.
But I think that keeping it private to my device, my device does have all my phone numbers.
It has all my email.
It has all of my text messages.
I'm comfortable with that.
So as long as all of this is computed locally and it's kept local, then I think I'm
going to be comfortable with that. And then I can choose whether or not it leaves. So I think local
and choice allow it to keep it just the side of the creepy line. And that requires more processing
in the devices then? Well, you know, that gets back to our sensor hub conversation. If I've got a good sensor hub, I've got really good code
for it, then I think I can do it with almost
actually we've demonstrated it, you can do it with almost no
power hit with well-designed software.
Back to the segues.
They had a context-aware module where they tried to determine your intention.
Are you doing some of that as well?
Well, intention is a very interesting question.
If I understand a chain of context, I can often start to extrapolate into your intent.
It's still a Bayesian probability.
It's a conditional probability.
And there's no guarantees on it.
But I can often start to anticipate some of the things you need.
Certainly there's things that are okay, and there's certain things that aren't.
But I think, like you said, you walk out to the shopping mall parking lot.
When I sit down in my car at the end of the day, there's no reason for it not to have already identified I'm dropping down, sitting into my car, and identify, well, it's 34 minutes for you to drive home.
And, oh, by the way, you need to pick up milk at the store.
And, oh, by the way, I just sent a text message, let your wife know that you should be home at this time.
I'm going to do all those anyway.
Yeah.
And that is what I wanted to do.
And yet I'm still slightly creeped out.
Because my device shouldn't be able to make these decisions for me.
Of course, I say that.
Still part of me from that engineering that
says, no, you can't do
this. And the part of me that's
like, no, I will never carry a cell phone.
Christopher, you're just going to have to learn to
accept that I'm out of pocket sometimes.
And now I go nowhere without my cell phone.
Not because of him, but because of the last time
I didn't have my cell phone
and I needed to wait for five minutes.
I couldn't figure out what to do with myself.
I mean, we're definitely seeing society drift.
I mean, if you ask my parents about keeping a cell phone with them, they find it odd.
My generation says, of course.
Then we go to, you know, the younger generation of my kids and whatnot.
You know, they're still not even teenagers yet.
But the teenage generation, Facebook and whatnot, the concepts of privacy and whatnot have drifted.
And I assume they will continue to drift.
Yes.
And our robot overlords will be here soon.
Bow down and be thankful.
Yeah, it seems like they're going to make a pretty good life for us.
Hopefully they like us.
That's about it for the show.
Do you have any last thoughts you'd like to leave us with?
No, thank you. It's been a pleasure.
It has been wonderful having you. Thank you.
Thank you very much.
My guest has been Dr. Kevin Shaw, Chief Technology Officer at Sensor Platforms.
If you have questions for Kevin, there is a contact button on sensorplatforms.com,
and there'll be a link in the show notes.
If you have questions or comments for me or want me to pass something along to Kevin,
there's a contact link at embedded.fm or email us, show at embedded.fm. Thank you for listening,
and thanks to Christopher for his wonderful production, making sure we sound good and
that I occasionally stay on topic. Although I did buy
new gear, so tell us if you hear anything. Ah, and I was actually, oh look, there was a note here.
I was hoping to work grasshoppers into the show so this final thought would flow instead of being
completely random. But plans and reality not meshing once again. That is my life. So I heard
a joke last week. A grasshopper hops into a bar.
The bartender says,
Hey, we have a drink named after you.
The grasshopper replies,
You have a drink named Kevin?
I feel honored.