Embedded - 27: You Are Blowing My Mind
Episode Date: November 13, 2013
From the MEMS Industry Group Executive Congress: Ivo Stivoric, co-founder of BodyMedia, which was purchased by Jawbone; Sam Guilaume, CEO, and Dave Rothenberg of Movea; Stephen Walsh, ISKN – iSketchnote, one of the pitches in the MEMS Elevator Pitch Session.
From the 2013 IEEE Global Humanitarian Technology Conference: David Peter works with New Life International. His paper was "A Simple Algorithm for Chlorine Concentration Control."
Transcript
Hello, I'm Elecia White, and this is Making Embedded Systems, the show for people who
love gadgets.
Today, we're going to talk about fitness through wearables, indoor navigation, an iPad accessory
that lets you draw with a pen and paper, and the chemistry of chlorination, all in one
show.
Before we get started, let me say how happy I am about how
things are going on this podcast. We've been doing this for six months. Even though we started out
planning just a half dozen shows, we kept going, getting to talk to lots of people, learn new
things, and get excited about all the amazing things people are building. I've also been thrilled
to hear from so many of you. We'll be doing a show
soon where we go through our mailbag, answering questions and addressing comments, sharing the
new information you've shared with us. Hit the contact link on embedded.fm if you want to
contribute to that show. So now, I promised a lot in that intro. We're trying a segmented show.
I've been to two conferences in the last few weeks, and I got to talk to other attendees and presenters while I was there, on air.
Of course, recording in an uncontrolled environment is always, shall we say, exciting.
Please forgive the pops, the planes, the voices, the phones, and the general surrounding noise.
And Thomas Decker, please forgive that I totally deleted your segment.
Those Pico hydroelectric generators were neat. In the meantime, I hope you enjoy hearing from these folks as much as I did.
First up is John "Ivo" Stivoric. We were at the MEMS Industry Group Executive Congress in Napa Valley,
sitting at an outdoor fire pit, watching the planes go over the surrounding winery. I'm happy to join Ivo Stivoric,
one of the founders of BodyMedia, and now part of Jawbone. Welcome to the show.
Thanks. Thanks for having me.
I think my first question for you is, what are you wearing on your wrists?
I have a watch. I have a couple more jewelry bands, but one's more leather, one's just new materials.
And then this is actually a body monitor, which is tracking my lifestyle every day.
But whose body monitor?
I would recognize the Fitbit Flex because I have one of those.
Right.
This is Jawbone.
Okay.
So it's the Jawbone version.
Yeah.
I'm also wearing on my upper arm a BodyMedia armband that I've been wearing, in different versions, for the last 14 years, because I was a founder of BodyMedia. We just merged with Jawbone, so I'm getting into the mode of wearing many different products. I'm trying to see what the different experiences are like.
Yeah, with wearables applications, I end up with pockets full of things and all sorts of stuff on my wrist so that I can compare them all later.
Yeah, yeah.
So I'm doing the same thing, but more just trying to see different.
Some applications, certain kind of form factors or locations of the body work differently and better.
And sometimes it depends on the situation.
Sometimes it depends on what you're trying to
sense and what the meaningful information would be for that consumer in that event.
So I think that's some of what's going on. In the wearable space, it's really important to be wearable, we think, because that way it gives you a window into what the person's day and life is like.
And so you can use that information for health purposes, but you can also use that to inform the context of their life.
And then use that contextual information to kind of help the world and environment around them to be more proactive or more interesting or meaningful to them.
And how it interacts with other Internet of Things devices, too.
Like, you know, if you talk to something, it's pretty much dumb until it understands who you are.
And usually the way you have to tell it who you are
is you have to get on a screen and use your pointer or finger
and do all these setups.
But if it actually understood that, hey, I just fell asleep,
turn down the temperature, you know, or I'm in a meeting, don't ring me.
It's a lot more meaningful of an environment.
It's a lot more meaningful of an engagement and interaction and experience with these devices,
and they become more powerful that way in your lifestyle.
So we're doing a lot of explorations on the wearable side.
And it's way beyond pedometers.
Oh, yes. Oh, yes.
I mean, BodyMedia has always been a multi-sensor house from its origination and founding.
And we've learned a lot of what you can do with very low-level sensors,
but when you combine them and fuse that information together.
So as we've come into the Jawbone family, that multisensor idea is very strong.
It's deep-rooted in the Jawbone culture also. I mean, they started with earbuds that were multi-sensor in nature, to take out the sounds of the environment and make your voice sound clearer to the people around you and to yourself, something that a cell phone was not so good at. They took the friction out of that experience, and we were trying to take the friction out of not knowing anything about your human body. How are you going to know the best about it? The only way you could do that was, in our opinion, through a multi-sensor perspective. If you did it with a heart rate monitor alone, it could be elevated because I'm nervous in an interview with you. It could be because I'm weightlifting. It could be because I'm walking. It has no way to disambiguate that signal. And
from a multi-sensor perspective, you could actually start to understand both the
context, but also make measures of the body that are a lot more accurate because you had these
multiple inputs. And then you're using computer science and machine learning to make these
algorithms really be quite robust. So what are the sensors on your BodyMedia armband?
So on the BodyMedia armband, we've had three-axis motion, an accelerometer, skin temperature, galvanic skin response, and heat flux, the rate of heat coming off the body. And we're constantly looking at other ways. Like I mentioned, a heart rate monitor is very interesting, I think, when you add that in, especially when you have the other sensors, you have a lot of context of the individual. You can actually start to understand
things like while they're sleeping, while they're resting, while they're active, what are these
kind of engagements doing to their, you know, how healthy are they, but how stressed are they?
What are the depths of sleep now? You can have all this other detailed information that would
add to the accuracy of our system, but also because we've been pretty accurate to date,
but we just think it'll take it to the next level of precision and, um, and allow
us to add more features in, uh, that consumers are asking for. I mean, we were always doing things
like calories and steps and, you know, stuff like that for weight loss. We added sleep in a long
time ago because that was really important from a weight loss perspective. If I got three hours
of sleep, I don't feel like working
out in the morning. I'm going to supplement my energy by eating more. Um, so it was part of the
balance kind of lifestyle balance dashboard for you to understand more about your life.
What we learned was as we started doing that, people were really intrigued by the sleep thing
and they get really intrigued. Yeah. And more and more, as we've merged with Jawbone,
we've seen this broader consumer set really starting to be quite vocal,
both in the way they use the products, so we see it in the data,
but we also hear it from them that we need to do more there for them.
They're really excited to learn more and kind of,
it impacts so many things in your life.
I said weight loss, but it also impacts my mood, my stress, the energy I have, how active I get in my lifestyle, the relationships I have. So I think that's an interesting learning, and we're starting to learn a couple of things around stress in people's lives, which is obviously related to activity and sleep and food and so forth too. So having this data, and having this large database of people who are working with us and interested in engaging in these products that we've invented, teaches us new product opportunities, new areas and applications to focus in on, and also new sensors to bring to the marketplace that would augment the foundation and the algorithms we've set so far, so that we can deliver better on those applications.
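Stivoric's heart-rate example can be sketched as a toy fusion rule. This is only an illustration of why extra sensors disambiguate an elevated heart rate, not BodyMedia's actual (machine-learned) algorithm; the function name and all thresholds are hypothetical.

```python
# Toy illustration of multi-sensor disambiguation: the same heart rate
# maps to different contexts depending on motion and heat flux.
# Thresholds are made up for demonstration.

def classify_context(heart_rate_bpm, accel_rms_g, heat_flux_wm2):
    """Rule-based stand-in for the sensor-fusion classifier described above."""
    elevated_hr = heart_rate_bpm > 100
    moving = accel_rms_g > 0.3          # hypothetical motion threshold
    hot = heat_flux_wm2 > 120           # hypothetical heat-flux threshold
    if elevated_hr and moving and hot:
        return "exercise"               # HR, motion, and heat all agree
    if elevated_hr and not moving:
        return "stress"                 # nervous in an interview: only HR is up
    if moving:
        return "light activity"
    return "rest"

print(classify_context(130, 0.8, 150))  # exercise
print(classify_context(130, 0.05, 60))  # stress
```

A real system would replace these hand-set thresholds with models trained on labeled data, but the shape of the decision is the same: no single sensor can separate the cases on its own.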
I heard here at the conference someone wanting to put in a humidity sensor into a smartphone.
And I was boggled.
I mean, why?
Why would you do that?
Right.
Well, the other thing I heard was that humidity and allergies correlate.
Yeah.
And that's OK.
If I knew that going outside would mean 45 minutes of sneezing, I would get on the indoor exercise bike.
Right.
And it would be so much better.
Yeah.
Are you looking at that sort of sensor too?
Yeah. So I think when I say multi-sensor too, it's not just multi-sensor on body, right?
So we as Jawbone have been building applications on smartphones and even the internet for all of our products. So what's interesting is
that when we say multi-sensor, we don't think of the world as just the wearable side, albeit that's
a very important catalyst and also a doorway into some things, like I mentioned context earlier.
I think some of the other things going on there is, you know, we have this cell phone. A lot of times it might be, you know,
on the table or in my bag, but it does know GPS. It probably has a sense of my schedule,
and it probably has other sensors on it that with these APIs that are being developed by those
manufacturers, including us, we can merge these data streams together in very unique ways and
ways that weren't common or weren't possible before. And then there's other products, too, that have their own APIs, right?
Thermostats have APIs.
We have, you know, sound boxes that have APIs.
We have, you know, they're playing music for people.
We have car systems that are going to have APIs soon.
So as you have this coming together, you have these ways that there could be this sharing and the multi-sensor environment becomes much more rich
and the context of that individual can become much more rich. Granted, you need some serious muscle
on the sensor fusion side and try to make meaningfulness from all these data points
because it's just data until you can pull it together in a way that is meaningful. Then it becomes information, becomes actionable. And I think that's the muscle that Jawbone and BodyMedia bring together. We've been working in the multi-sensor space for a long time, it's not a new thing for us, and we're happy to bring a bunch of worlds together where we can ladder up, where one plus one equals three.
And since we have a history of doing this, we feel it's a competitive advantage.
Not to mention that we have a very broad user base, you know, with all our products,
where we have millions of people who use our products.
And we feel like, you know, design and experience is really important.
And if you get that right, you get a chance to get engagement. And if you get the engagement right, you get a chance to have just much more,
much richer data set than anyone's ever seen before.
I totally agree that software is where you can build amazing castles from the data you have.
Are you personally one of the quantified self geeks?
You know who I'm talking about.
Yeah, so I really love that movement, but I have to be honest. Why we did BodyMedia, and why we do Jawbone stuff, and why we do UP, in that line, is not exactly for the quantified selfer. It's for people who really want to do something proactive
in changing their life or stay active
or maintain a certain lifestyle.
And that interest, sometimes numbers are very compelling
and motivating to people.
Sometimes you have to visualize it differently,
but there is a group, yes, that is motivated by it. But I wouldn't call them quantified selfers. I think they have serious
interest in doing something different in their life and they don't have these dashboards. And
without this dashboard, you can't make effectual change. So what I've seen, so we started off as
a business in body media to do dashboards for the human body. It's a starting point for behavior
modification, but you have to start somewhere and we had to build the hardware, build the software, build the analytics,
put something in front of people.
But what we learned was what people were coming to us for,
for example, one of our first markets that we focused on was weight loss,
is that they were trying everything,
but they didn't understand why certain things they were trying
were or weren't working for them.
It's a feedback loop.
And if you don't have the information to feed back,
it's not a loop and it doesn't make any sense.
Right.
And I think, so we got these letters back from people saying,
oh my God, I lost 80 pounds, I lost 20 pounds for the first time in my life.
I understand what I was doing that was impacting positive outcomes versus negatively for me.
And that dashboard became so important, and it became actionable.
And then we started adding other feedback loops on top of it,
other content that was more proactive.
It wasn't just a data feedback.
But even the data by itself was motivational and was part of that feedback.
So we learned a lot about behavior modification in building these systems.
But a lot of times, when you say quantified self, I still believe a lot of what's going on is not about that. When people are trying to lose weight, there's a community and a social side to it and so forth.
It's not all about self.
And it's not just a single view of the world. And I think what's actually more
interesting is when you bring this data together from multiple people and you start seeing how
populations or some people call them tribes or you know subsets of these populations are doing
things I mean what's interesting is like I can compare to myself and what I've done successfully
in the past but it's also interesting that other people who have dealt with this challenge what
they did and how they found success in their life. If I'm thinking about
some health challenges or something like that. But I can also see it as in many different other
applications where you start, if you see what other people are up to, how you might pattern
towards that. If I want to train for a half marathon, people like me who started at my level,
like what was the best, what got them to most success? And just instead of reading it in a book, I see in a real life way what has really worked
out there in noisy, crazy, free living environment. But what really worked in people's lives who are
like human, just like me, and deal with ups and downs just like me and where they found successes,
how they broke through plateaus, how they increased their performance times, whatever
that might be.
And so it isn't about competition.
It's actually about learning through other people's experience.
Yeah, but I think... I'm sorry.
It's not about competition.
Yeah.
It's about learning through other people's experience.
That's correct.
That's correct.
But I mean, some people...
Competition's effective too.
Some people may take competitive angles at that.
Some people would...
I know in the weight loss community, a lot of people use
it, the word accountability, that sharing and communication becomes an accountability
factor that if I'm putting it out there with other people, that I'm making a promise to
not just myself, but to them.
And I think it is a medium of communication to self, these feedback loops, but it's also
a medium of communication that allows me to talk to my trainer
or my counselor or my...
Your doctor.
Right, about how my lifestyle,
how it's affecting me.
And right away, a lot of times,
those conversations are verbal and subjective.
When you put a little bit more
of this more objective information
or quantified self-information in front of people,
the conversation shifts.
And instead of trying to be an archaeologist and mine what the people have done for the last week, it's already there. Now I can talk about next week because I don't spend as much time uncovering the past. Now I'm talking about how we proactively change that pattern to a positive in the future.
And we've seen in intervention studies and clinical trials, people who use it every day in clinical practice, that that way of working just makes the whole intervention much more efficient and much more exciting.
Because now we're not – I don't feel guilty when you ask me the question, how many times a day did you work out last week?
You know, and you say, well, three times, of course.
But I know that that's a lie, right?
Or potentially it wasn't exactly accurate.
But now I have accurate information.
I don't have to worry about what, I might be concerned about those results, but now
we're talking about how we're going to make it better.
And we set goals and we can move forward.
The behavior tweaks that let you get a little better, one week at a time.
Yeah, yeah.
Those are, those are the ones that tend to stick, at least for me.
Yeah.
And it's true for a lot of people. We have these big goals, but you've got to step into those goals piece by piece, and it takes practice. Sometimes it takes some sharing and being more communicative with these things, rather than doing it all by yourself.
We are social creatures.
Yeah. But sometimes, you know, people want to do it themselves and they're afraid to go into the social group, right, to do it.
And so I think that's where the technology is adaptable this way.
These products adapt to your needs.
So they're a little agnostic in a sense.
If you want to do self-care, really by yourself,
if you want to socialize it over to kind of more of a social network type of way,
or if you want to be in group sessions,
or you want to do it at a professional level where you have an interventionist,
a doctor, a coach, a counselor working with you.
It adapts to all those kinds of situations. And I think what we learned was really powerful: to have something that could actually fit into these different needs as a platform, and then move and scale into sleep, get co-opted by people who have diabetes, and co-opted into situations where people are using it for game playing and so forth.
I mean, it's interesting how this, when you put the embedded system together, when you put the algorithms together,
when you put the software together, how it can be utilized as a platform in so many different ways than maybe like you just started with.
And I think that's where Quantified Self Community and the developer community is really interesting
because they're co-opting this information to brand new ways and actually teaching the world, teaching companies
like us, teaching other people out there what is possible. And so I wouldn't call myself to go back
to your question like necessarily one of those geeks, but I am a geek. I don't know. You have a lot of symptoms. I am a real believer in multi-sensor, sensor fusion,
and what this data can do.
I believe very strongly that,
but I don't believe in the one-off idea.
Like doing a clinical trial of 100 people
has probably more noise in it
than if you have a million people out in the environment.
It's very noisy out there. But just look at a million and look at what the real trends are out there across all those people where you've been monitoring them versus 100
people who are doing stuff in a lab and so forth. Very different results. And I imagine that the
former is a lot more statistically significant than what that trial supported with 100 people, even though it was done in what people might suggest is an evidence-based medicine,
strong cohort controlled scenario. The world is messy. The world is noisy. And I don't think,
I think healthcare keeps missing the boat. And a lot of these behaviors and behavior modification
programs and opportunities around health are missed by not looking at its scale.
And in the over 15 years since we started, we haven't always had the opportunity to look at it at that scale, although we've had a large scale.
Now, as Jawbone, our scale is unprecedented.
And to see that kind of data coming in and the stories that can be developed from that at an individual level but also at a population level and across the United States, across the globe, is just going to be brand new
understanding that we've never seen before. We'll learn new applications. We'll develop new products
based on it. But I think we'll learn a lot on what's really happening out there and how to help
people and what is working, what's not. And a level that I think will challenge the way
healthcare is done at some point too. But, you know, we have some time yet to figure that out.
So speaking of new products, BodyMedia was purchased by Jawbone last April.
Well, we announced... it's... so we're a fully owned subsidiary of Jawbone. We had a merger. We announced at the end of April, shortly after we closed. So that's definitely a 2013 event.
And so what are we looking for in products in 2014?
Yeah, so we're very careful on how we talk about...
You're not going to tell me, are you?
No, but I think you're starting to see already.
Look, I mentioned earlier, we have a great passion for multisensor.
We have a great passion for wearables.
We have a broad product line that goes
beyond wearables
and in all of those products we have sensors
and I think what you'll start to see is
just from us
things will get smaller, things will get cooler than they have been in the past, even though people credit us for being pretty good so far. I don't think you've seen anything yet.
On top of it, I think the interactions and the experiences we'll have between our products as their own products,
but also in between them will be quite interesting.
And we're working on what the right experiences are.
So too early to tell you exactly what they are,
but as we're testing things out with consumers and testing things out in the lab,
there's a lot of thumbs up and looking at each other with a little wink in their eye,
thinking like, yeah, this could be kind of fun.
So we're trying to bring some humor into things, I think.
I think we're trying to bring some new technologies on board.
We're trying to make sure that the Internet of Things is not just a meaningless idea.
And we think context with wearables, it becomes one of these keys to the doorway to make the
Internet of Things quite meaningful in the interactions that we can enable.
So that context, again, developed by multisensor.
But if we understand the people that are wearing our stuff, both as individuals and as communities
of people and so forth,
we think it can get pretty powerful in what we can do next.
Well, I'm looking forward to seeing what it is.
Yeah, so am I.
So am I.
We're very excited.
We're very excited.
Thank you for taking the time to talk to me.
Thanks for having us.
Thanks for your interest.
Moving from application to algorithm,
the next people we're going to hear from are Sam Guillaume and Dave Rothenberg from Movea.
Wait, before you head to your email to correct my pronunciation there, listen for Sam, the CEO of the company, to say it.
I'm excited to talk to Sam from Movea.
Sam, tell me about yourself.
Well, so I just relocated here, actually, in the Bay Area, so I'm very excited to be a CEO of a
startup company in the Bay Area. It looks to me like a wonderland for CEOs, I guess. It's just
amazing the amount of connections you can get and the exciting people you can meet on a daily basis.
So I'm very excited to be back here. I'm saying back because I used to live here actually many years ago. And it's extremely exciting again to run a company, a startup
company in the MEMS environment for mobile applications while being actually in the Bay
Area. So that's really exciting times for me.
So I know of Movea from when I was looking at gesture recognition. I know you have a
lot of algorithms there,
but you just announced an interesting new algorithm.
So we are a six-year-old company,
and we started out of Europe
with an ambition to build exactly what we do today, right?
And the ambition was, we called it MotionIC six years ago.
And MotionIC was, the concept was to have a device dedicated to processing sensory inputs.
Six years ago when we started, there was no such thing in smartphones. We were not talking about sensors in smartphones. However, we had opportunities to have consumer
electronics goods
using sensors, but specifically
mice. And we had
this product we call AirMouse,
which is essentially
a mouse with
gyroscopes and
accelerometers embedded
that can capture the gesture of a user.
So you have this Nintendo-like type of effect.
I'm referring to Nintendo because they use exactly the same technology, right?
And you have actually this effect of the ability to capture the gesture of a user
and moving a cursor, moving a joystick, or moving something on a screen.
Six years later, you have all the sensors in smartphones,
and now, all of a sudden, you can deliver a very similar experience to the user.
But much more, right?
So you can do gesture analysis, you can do gesture capture on your smartphone.
You can have gesture-enabled features,
so you can increase the volume by moving your smartphone up and down.
But more importantly, the sensors in your smartphone
can deliver information about your activity,
about your context, about your location.
So we just announced, actually,
an application where using the sensors in the smartphone can allow you to know
you're wandering into a building inside a house, right?
By just analyzing the gesture and the motion of your body
and understanding your heading, your pace,
and really understanding how you evolve into an environment.
So this is like I go into a mall, and it knows where I am.
And it's got to be the accelerometers and the gyroscopes,
the magnetometers, so a whole inertial measurement unit.
And you probably used the GPS position I started with
before I went into the building.
So, yes, absolutely.
We refer to what we deliver as dead reckoning, right?
So we don't know where we start from,
but from the starting point,
you know actually the evolution of the displacement, right?
But dead reckoning is really hard.
You have to integrate the accelerometers twice, and that leads to all sorts of errors.
And even integrating the gyros once, dead reckoning is hard.
It is hard, and you have very low-grade sensors.
They are consumer electronic grade sensors, right?
So on top of the difficulty you just mentioned,
you have also the inherent drift
from the sensors. Especially the gyros.
Especially the gyros. You have sensors
like the magnetometers who are also exposed to
magnetic perturbations, the beams that you find
in a structure. So it's a very complex exercise.
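Sam's point about double integration and inherent drift can be made concrete with a back-of-the-envelope sketch. A tiny constant bias in a consumer-grade accelerometer, harmless in the raw reading, grows quadratically once it is integrated twice into a position estimate. The numbers below are illustrative, not a characterization of any particular part.

```python
# Why "integrating the accelerometers twice" is hard: a constant
# accelerometer bias becomes a quadratically growing position error.

def position_error_from_bias(bias_ms2, dt, steps):
    """Doubly integrate a constant accelerometer bias over time."""
    v = p = 0.0
    for _ in range(steps):
        v += bias_ms2 * dt   # first integration: velocity error grows linearly
        p += v * dt          # second integration: position error grows quadratically
    return p

# A 0.01 m/s^2 bias (about 1 milli-g), sampled at 100 Hz for 60 seconds:
err = position_error_from_bias(0.01, 0.01, 6000)
print(round(err, 1), "meters of drift after one minute")  # ~18 meters
```

This matches the closed form ½·b·t² (0.5 × 0.01 × 60² = 18 m), which is why uncorrected inertial dead reckoning on consumer MEMS sensors is unusable within seconds to minutes without other constraints.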
Even the automatic doors as I go in or out
are going to be magnetically difficult to identify.
Think about trying to do dead reckoning in a subway station.
You have magnetic perturbation, which is just a nightmare.
And that's not enough,
because what you really want is the motion of a body,
not the motion of a smartphone. But what you measure is the motion of the smartphone, right?
So the smartphone is going to be in your hand.
You're going to wave your hand like this,
and then after you're going to receive a call,
you're going to bring your smartphone to your ear.
And once you're done with the call,
you're going to put the smartphone in your back pocket.
What matters is the motion of the body,
not the motion of the smartphone.
So with all the complexity you mentioned,
there is added complexity that, you know,
the smartphone is not on your torso, right?
It moves actually all around yourself, right?
It's an independent body.
It is. So it's a very complex matter.
Yet, actually, this can be resolved, right,
with a lot of computation.
A lot of computation. That's putting it mildly.
And you want to do this at a very low price, power-wise.
So you want to do all this fancy processing
while maintaining actually a long battery life of the device.
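One standard trick for tracking the body rather than the phone, sketched below, is to work on the magnitude of the acceleration vector, which does not change with how the phone is oriented in a hand, at an ear, or in a pocket. This is a crude illustrative step detector, not Movea's algorithm; the 1.2 g threshold is made up.

```python
import math

# Crude orientation-independent step detector: count upward crossings
# of |a| above a threshold. |a| is the same whatever way the phone faces.

def count_steps(samples, threshold_g=1.2):
    """Count threshold crossings of the acceleration magnitude."""
    steps = 0
    above = False
    for ax, ay, az in samples:
        mag = math.sqrt(ax * ax + ay * ay + az * az)
        if mag > threshold_g and not above:
            steps += 1        # rising edge: one step impact
            above = True
        elif mag < threshold_g:
            above = False     # re-arm once magnitude drops back
    return steps

# Two synthetic "step" spikes amid a steady 1 g of gravity, with the
# second spike in a different phone orientation:
walk = ([(0, 0, 1.0)] * 5 + [(0, 0, 1.5)] * 3 + [(0, 0, 1.0)] * 5
        + [(0.9, 0, 1.2)] * 3 + [(0, 0, 1.0)] * 5)
print(count_steps(walk))  # 2
```

A production pedestrian dead-reckoning system adds much more, such as frequency analysis, per-user stride models, and device-carry-position classification, but the magnitude trick is what decouples the first stage from the phone's attitude.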
The long battery life, what are you doing toward that end?
I mean, you make software, right?
So we do make software, and really the equation we're trying to solve
is maintaining this high level of performance at a very low power cost.
And one way to do this is to tweak your algorithms
so that they can operate and run on a pretty cheap microcontroller.
Cheap, again, I refer to low cost in terms of silicon size, memory size, and power, right?
So that's essentially how you want to resolve this, right?
So you want these fancy algorithms to run on the very basic microcontroller,
as basic as you can find, right?
So we have today announced some partnership with companies who are providing 32-bit microcontrollers
with Cortex-M3, Cortex-M4, with the PIC32.
So we have a couple of architectures that we can operate on.
All of them actually have this very low price tag, again, when it comes to power.
So we have demonstrated recently sub-5 milliwatt performance, still delivering indoor location,
activity monitoring, gesture analysis, and all the sensor management that has to do with
compensating for the drift, compensating for the calibration, and so on and so forth.
So that's a less than 5 milliwatt price tag.
That's what we just achieved here.
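The kind of algorithm tweak that fits "the very basic microcontroller" usually means avoiding floating point and divides entirely. Below is a hedged sketch of integer-only (Q16.16 fixed-point) smoothing using a shift in place of a divide; it illustrates the approach, not Movea's actual code.

```python
# Q16.16 fixed-point exponential smoothing: the whole filter update is
# one subtract, one arithmetic shift, one add -- cheap on a Cortex-M3,
# Cortex-M4, or PIC32 with no floating-point unit.

FRAC_BITS = 16

def to_q16(x):
    """Convert a float to Q16.16 fixed point."""
    return int(x * (1 << FRAC_BITS))

def from_q16(q):
    """Convert Q16.16 back to a float for inspection."""
    return q / (1 << FRAC_BITS)

def lowpass_step(state, sample, shift=3):
    """One filter update: y += (x - y) >> shift (alpha = 1/8 here)."""
    return state + ((sample - state) >> shift)

y = to_q16(0.0)
for _ in range(100):
    y = lowpass_step(y, to_q16(1.0))  # feed a constant 1.0 "sensor" input
print(round(from_q16(y), 2))  # settles to ~1.0
```

Keeping every per-sample operation in this add/shift class, and reserving multiplies and divides for rare update paths, is one of the main ways fusion workloads reach single-digit-milliwatt budgets.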
And do you specify the sensors?
I mean, you don't make the hardware, do you?
So that would be too easy to some extent, right?
Because specifying the sensor would actually be tweaking the algorithm for a specific set of sensors.
Well, for a specific bandwidth and noise characteristic of the sensor, which makes the Kalman a little easier to build.
Absolutely it does, but you know, your customers want the flexibility of choosing whatever sensor they want, right? So typically, the platforms we are aiming at
have many, many combinations of sensors,
different gyros, different accels, different magnetometers, and so on, right?
So you really want to deliver a solution
which is as agnostic as it can be, right?
And this adds another layer of complexity, obviously, right?
Because the sensor management layer I was referring to
has to work with any type of sensor,
any type of characteristics,
and you have to have actually the specific drivers
to some extent to address actually these families of sensors.
So it's a complex matter.
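The sensor-agnostic layer Sam describes can be sketched as a thin driver abstraction: the fusion code sees one uniform interface, while per-part drivers normalize scale factors and expose noise characteristics for the filter's tuning. Everything below (the class, the vendor names, the numeric values) is a hypothetical illustration, not Movea's API.

```python
# Hedged sketch of a sensor-agnostic driver layer: fusion code is
# written once against read_si()/noise_density; each supported part
# gets its own normalization constants.

class SensorDriver:
    """Normalizes a specific part to SI units plus a noise figure."""
    def __init__(self, name, scale, noise_density):
        self.name = name
        self.scale = scale                  # raw LSB -> SI units (e.g. dps)
        self.noise_density = noise_density  # fed into the filter's tuning
    def read_si(self, raw):
        return raw * self.scale

# The same fusion code runs over two hypothetical gyro parts with
# different sensitivities:
gyro_a = SensorDriver("gyro_vendor_A", scale=0.0175, noise_density=0.03)
gyro_b = SensorDriver("gyro_vendor_B", scale=0.00875, noise_density=0.011)

for gyro in (gyro_a, gyro_b):
    rate_dps = gyro.read_si(1000)  # same raw count, normalized per part
    print(gyro.name, round(rate_dps, 2))
```

The extra complexity Sam mentions lives in keeping this table correct for every supported gyro/accel/magnetometer combination, and in letting the filter adapt its tuning to each part's noise figure rather than assuming one fixed characteristic.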
Yeah.
Well, that's really interesting.
So Dave is here as well, and I would like to talk to him for a few minutes.
Sure.
So Dave, Sam was telling me that 5 milliwatts, but off air, I heard less than 1 milliwatt. How are you going to get there?
That's a very good question. We can get there through a partial hardening of the logic.
Hardening? Yes. What does that mean? That means taking some of the software code
and moving it into gates, physical gates or transistors on the chip.
So we're just going to put the algorithm in programmable logic and it'll be in an ASIC and wow.
But that's not so easy.
It's not.
It's a difficult job.
It takes the right people to do it.
And some pieces of it would not even be in programmable logic.
They'd be in hard logic.
And that's where you'd get the most benefit, most power benefit and most performance benefit.
The trick is to identify which pieces of the algorithm really deserve to be hardened.
For example, there are cases where you might need to adjust certain algorithmic parameters on the fly,
and those parts of the algorithm should not be fully hardened.
So we've talked about inertial units and Kalman filters on the show before,
and I suspect, given that you are fusing sensors, you've probably got a Kalman in there and you have to do some matrix inversion and some quaternion fun.
Is it like the Kalman would be tuned differently for different sensors, different cases?
Is that what you would leave out of the hardware? Yes. Anything that requires some fine-tuning on the fly based on context would be left out of the
fully hardened logic. But let's focus for a moment on what would be hardened, for example.
So we do lots of matrix math. And matrix inversions, which is a hard math problem. Matrix multiplies, matrix inversions, and those operations take a lot of divides.
Those divides can be done in hardware and software.
And so if we look at hardening, let's say, the matrix math or even having a hardware divide,
that dramatically decreases the number of instructions and speeds up the cycle time for the algorithm.
So that's an opportunity to put certain pieces of the logic into gates at transistor level
and benefit from that while leaving other parts of the algorithm a little more flexible for tuning on the fly.
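For listeners who haven't met a Kalman filter, here is a deliberately tiny one-dimensional sketch of where those divides live. This is my illustration, not Movea's library: a real fusion filter works on matrices, where the scalar divide becomes the matrix inversion Dave mentions, but the gain computation still needs a divide on every update, which is exactly the kind of operation a hardware divider accelerates.

```python
# A deliberately tiny 1-D Kalman measurement update, just to show
# where the divides live. Illustration only, not Movea's algorithm:
# real fusion filters work on matrices, where the "divide" becomes
# a matrix inversion.

def kalman_update(x, p, z, r):
    """One scalar update: state estimate x with variance p,
    measurement z with noise variance r."""
    k = p / (p + r)          # Kalman gain: the divide to harden
    x = x + k * (z - x)      # corrected state estimate
    p = (1.0 - k) * p        # corrected variance shrinks
    return x, p

# Feed a few noisy measurements of a value near 1.0:
x, p = 0.0, 1.0
for z in [1.0, 0.9, 1.1, 1.0]:
    x, p = kalman_update(x, p, z, 0.5)
```

One divide per update sounds cheap until you do it at sensor rates inside matrix inversions, which is why pulling it into gates pays off.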
Well, that's really interesting.
Does it also mean you can parallelize some of the algorithm? Because in
hardware, when you write it in HDL, you can implement multiple instructions at once because
it isn't serial like software is. Yes. And now we're getting a little bit outside of my expertise,
but yes, there are definitely opportunities for parallelism. Matrix math especially lends itself. I mean, that's like using a GPU to do your math.
So that's really exciting.
You just announced the indoor navigation algorithm, and that's available now.
Yes.
When are we going to see some hardening of this?
Is this an ongoing project announced next year, or are you looking for funding?
We've already talked to some potential partners to engage in a project like that,
and we're currently formulating our offering for partners who would help us do that hardening.
So we can expect to see some activities.
I don't know about any announcements, but we can certainly expect to see some activities within the next, let's say, three to six months. Wow, that's exciting. It'd be so neat to have an inertial
system that just worked all the time for almost no power. It's a goal lots of people are working
towards, and it would enable some amazing use cases and end-user benefits. So going back to
what you have now instead of living in the future, because that is six
months off, I can't put it in a product today.
If I wanted to engage with Movea, how does that work?
Because you would deliver a software object file for me to run on my processor, right?
A set of libraries compiled for your processor, yes.
And is it a per unit license or a general license?
It all depends on all the little details.
Well, there are definitely lots of details,
but in general, the agreements are structured very similarly.
There's an upfront license fee and then back-end royalties.
That makes sense. It's a complicated algorithm. I have worked on a Kalman filter, and I can say
I don't really want to do it again from scratch. So I appreciate that you're there when I need you.
Yes, and speaking to that deal structure just a little bit more,
some people ask about the upfront license fee versus the back-end royalties.
And you asked about how customers would engage with Movea.
Well, in some cases...
It's Movea? I've been saying it wrong the whole time?
That's okay. You're not the only person.
I'm so sorry. Okay.
The episode in which I learn how to say Movea.
And that was perfect, by the way. You've got it nailed.
So, you know, some engagements require a lot of work up front to take our algorithms and port them to a new platform
or do some custom development or customizations or optimizations that the customer has requested.
That timeline for doing that development work can be quite long sometimes, six months or more.
And then you see a situation where the coding is done, the development is done,
and our partner, who in this case I'm imagining is a chipset manufacturer,
takes the algorithm, puts it into their chip, and maybe they're doing a new version of a chip.
And that process could take 12 to 18 months.
So the summary of that is we put in a lot of effort up front, and sometimes we won't see any royalties for 18 months or more.
That's only if the product ships, which sadly doesn't always happen.
And that's some of the logic behind the upfront license fee
is to get some cash upfront to cover the business operations.
You have to keep your engineers in pizza
in order to continue the company.
Pizza and Mountain Dew.
Yeah, it gets expensive.
Well, thank you so much for joining me.
I think we should all get back
to the MEMS Executive Congress
and meet some more interesting people.
Any last words you'd like to leave us with?
I look forward to some exciting announcements
for Movea at CES.
Cool.
Well, Sam, Dave,
thank you so much for joining me
and I wish you the best of luck.
Thank you very much.
Nice to meet you, Elecia.
I was at the MEMS conference as a speaker,
well, a panelist.
There was an elevator pitch session where tiny companies would give a five-minute pitch to the panel,
and we'd ask questions to help them along, the sort of questions they might get from a venture capitalist pitch.
The panel included Scott Livingston of Livingston Securities,
Tammy Hogue from Intel Capital, Matthew Zuck from Guardian Investment Management,
and me.
I was a little out of place,
but I was supposed to be the voice of industry,
and I sure had a good time.
The session went well.
Not really antagonistic at all.
More like group therapy with a hug at the end.
Steve Walsh was one of the folks who gave a pitch
about iSketchnote,
but let's let him tell you about that.
Last night, you were part of the MEMS Congress elevator pitch session. It took great courage
to stand up in front of everyone and pitch your product, and then let me and a few other
judges slash panelists ask you questions, all in front of an audience of 300 people.
I respect your bravery and welcome you to the show.
Thank you.
So iSketchnote, tell me about it now that we're recording.
It's such a neat thing.
It really is.
It uses MEMS sensor technology.
They've got a very, very thin planar array
and a very simple pen with a magnetic ring in it.
And the idea is to be able to use the pen and paper of your choice
to create notes, drawings, sketches,
and to have that immediately digitized
and for you to be able to upload that to your favorite social media space.
Right now, the big technical challenges that they've been able to meet
have been using high resolution and multiple pens with it.
But it plugs into an iPad, right?
Yes. We're integrating it into an iPad, into an iPad cover,
but we also are creating an app where you can use it with the Windows-based system.
So it's as though I have a hard backing, and then that hard backing is full of sensors.
Yes.
I have a pen, but the pen isn't really very special. It's just got a magnetic ring, is that what you said?
Yes, it has a magnetic ring. We're working with several large manufacturers to build this, so that we have multiple colors, multiple tips, ranging from very fine point to large felt tips.
I hate ballpoints and I'm such a pen snob, so I'm glad you're able to integrate with many pens.
And so I have this hard backing and a pen and a paper, and now I can doodle or sketch,
and it just automatically goes to my iPad or Windows, I think.
And you get to see not only what I drew at the end, but also the process I did to draw it?
Yes, we have an instant replay, so it's recording all of your strokes.
And then when you're done, you have the ability to click and see it all redrawn again.
And I can use any paper?
Yes. So I can use my favorite engineering paper?
Yes, absolutely.
One of the things we found is that artists are very,
people who draw in general do have their favorite paper.
They like the feel of it when they drag their pencil or pen or whatever across it.
And this technology allows them to use that.
There's no special paper involved.
And I can use a fairly thick pad.
I think last night you said 10 inches?
Well, we're able to detect up to 10 inches high in terms of where the pen position is.
But you can use a thick pad.
We put pads of paper over it that are, you know, quarter inch thick, and there's no problem.
Okay, so you're recording raw data from these sensors.
You mentioned they're MEMS sensors, but what are they?
If it's got a magnetic ring, they must be measuring magnetic something. Yes, they are a six-axis
technology. It's AMR. It's from
Misty Microelectronics, but it's Honeywell technology that they license from them.
AMR. I bet the middle one is magnetic and the last one is resonance.
Anisotropic magnetoresistive technology.
And we have half millimeter resolution
on it. Half millimeter? Half millimeter. It's really amazing.
And the speed that they've been able to develop, you can pull the pen
as fast as you like across it and it's able to sense it.
So there's no latency. That's fantastic.
There's no latency. You don't have to sit and draw very slowly.
However the drawing is coming to you, you just draw it at your natural speed.
How big is your team?
The team consists of six people right now.
We've got experts: one in embedded systems,
one in signal processing, and another in
magnetics. But we also rely on a lot of the technology that we're pulling out
of the LETI, which is the big electronic, micro and
nano electronics lab that's part of the CEA in France. The CEA
is their atomic and alternative energy commission.
This is the whole team, based in Grenoble? Yes, except for me. North Carolina.
And so it came out of a school in Grenoble? It actually came out of the LETI, the big micro and nano electronics laboratory
that's part of the big French system of government research labs.
So it's a spin-out of the French government?
This seems strange.
It's an amazing thing.
I mean, I'm very used to the U.S. type of spin-outs,
and the French government does it quite differently.
Within the CEA, there's a program called Challenge First Step
where they encourage their scientists and engineers to take their laboratory ideas.
And over the period of 18 to 48 months,
they actually pay them and help them to try to commercialize the product.
It's a stage gate type of thing where they have to meet certain goals along the way.
But they're trying to figure out, again, do we have a piece of artwork?
Do we have a product?
Or do we have something that can become a sustainable business?
And if at that point in time, between three and four years, they determine that we don't
have a business, then the scientists
and engineers get to return back to their old jobs. At the same time, if they found out that, yes, we do
have something, then they have programs in place that help them get institutional investment
and to start the business. Wow, the French government as an incubator. It is an incubator, and it is really amazing. The technology in the Grenoble area never ceases to amaze me.
So you did a Kickstarter, and you wanted $35,000?
We wanted, our goal was $35,000 to take the prototypes that we have in the lab and go to the next level of minimum viable product with them. And you had early bird specials of $120 and then $150 for people who didn't manage those
early birds.
Correct.
And that would buy them an iPad compatible sketch, iSketch Note, that's the actual name
of the product.
You did okay on your Kickstarter.
We did really well.
I mean, when we first did it, I was very concerned about what would we get.
Would we hit 35,000?
In my dreams, I thought if we made 100,000, I would just be extremely ecstatic.
And by the time I woke up, the day that we had started the Kickstarter campaign, it opened in France.
We had already blown through all our early bird specials. We
hadn't even opened in the United States yet. So within less than 14 hours, we hit our $35,000 goal.
That's fantastic.
Yeah, it was just very, very exciting.
And you made almost 10x.
Almost 10x. We brought in $346,127.
That's an awesome use of Kickstarter.
Yeah, it really is. I can't say enough about how
good Kickstarter was with us in terms of developing
the videos and things they were doing. They were very, very helpful. I would encourage
anybody who has an interesting idea but needs a little bit of funding
to take it to the next level to go to Kickstarter.
Very interesting.
But now you have to build them.
Yes.
How's that going?
It's going well.
We are evaluating different places for manufacturing.
We've got some low-volume manufacturing options in North Carolina,
which we'll be investigating along with what they're doing in France.
Wow.
The French government as an incubator and Kickstarter as a capital source,
and now all you have to do is build them.
And there's so, so much work going from my prototype
to something that you can deliver.
I think we had a show about that, yes.
It was all about how manufacturing is so much harder than it looks.
Oh, infinitely.
Many people just think, oh, I've got the prototype,
and going to build it in volume is easy,
but it is extremely complex. So we need to expand our team, we need to bring some people on that can help us with that.
So any French engineers, maybe you should contact the show and I'll pass you along to Steve.
So what's your next step in terms of the company?
Yes. Next step in terms of the company is really the whole
manufacturing process. We're putting a team in place to do that. Then we also have to
go out for additional funding. Our initial estimates are between three and five million
that we'd like to raise within the next 12 to 24 months. And there are more applications than simply artist sketching.
Yes, we are looking at the technology education sector. We're going to be adding an audio
component to it so that, say, a professor teaching whatever, whether it's a literature class
or a circuits class, will be able to use the technology to create small modules,
maybe in the two to five minute range,
that can be uploaded to a social media site.
Or again, we've been kicking around the idea
of doing something like Khan Academy
and have an ISKN Academy
where people can put their little tutorial modules up.
And because you can play back what you're drawing and have audio at the same time, it should work out really well.
Yeah, if a person wants to just click on replay, replay, replay until they really understand the concept that the teacher is trying to convey.
And I think you discussed a little bit about medical as a long-term goal.
Yes, one of the interesting things that we can do is you could scan in, let's say you go to a
hospital or a doctor's office and they have a pad of their forms that they fill out, whatever those
are. They can scan that in. It would appear on the PC or the Mac or iPad. And then we would create a
reference point on the piece of paper.
The technician or the doctor would just click on that and start filling in the form and you would see it being filled in digitally.
That, I think, is a fairly practical problem for us.
What is the really billion-dollar question is how do we take that and populate a database?
And that's a much more complicated problem, so that would be a longer term play for us.
That makes sense.
Well, I hope you get to try out the longer term play.
Thank you.
Are you enjoying the MEMS Executive Congress here?
It's marvelous.
I'm meeting so many great people like yourself
and other people that are offering me lots of good advice
and being extremely friendly.
That's great.
Would you advise other people to try the pitch session next year?
Oh, absolutely.
Absolutely.
They were extremely friendly.
I've given lots of presentations to investors, and this was the friendliest.
Yeah, we weren't shark tank, that's for sure.
That's for sure.
Well, thank you so much for speaking with me.
I think it's time to get back to the conference and meet some more people.
Thank you for having me.
As for the show title, you're blowing my mind.
We didn't record the person who said this to us.
In fact, I didn't even catch her name.
She crashed one of the MEMS drinking events.
I suppose when you have a conference in Napa, there has to be a lake of wine.
And this hotel did seem to have free tastings all over the place.
I can't blame her for crashing.
She didn't even know it was a private event.
So when she asked what it was, producer Chris and I told her.
MEMS.
Microelectromechanical systems.
Tiny Sensors and Actuators.
Now, I know I get too excited sometimes.
I like what I do and I love to share it.
Both with willing and unwilling audiences.
She was definitely willing, but I misjudged how many other wine areas she'd already visited.
She repeatedly told us we were blowing her mind because, you know,
MEMS are cool and so is free wine.
Ma'am, if you're a retired CPA from Boston listening to this,
wondering why you found my
podcasting card in your pocket, well, hello. Before the MEMS conference, though, I went to the IEEE
Global Humanitarian Technology Conference about humanitarian concerns and the technology we can
use to solve them. It was a bit odd. I went to go find gadgets to save the world. I was on a press
pass for EE Times, planning to write about all the neat stuff. Instead, the gadgets I liked best were
the ones that took out the electronics. When food and power aren't available, electronic gadgets
just aren't that useful, and they aren't robust enough in dusty, dry climes with no power. Usually, anyway. Talking to Dave Peter about his
project to simplify and aid chlorination for clean water, well, that was interesting,
but largely because it, too, was a simplification of what already existed.
Thank you for joining me. It's my pleasure. Thank you. And you're speaking tomorrow, right? Yes, tomorrow morning.
About?
Well, the topic is a simple algorithm for chlorine concentration control. New Life International is an organization that I've been working with that makes water chlorinators for third world countries
or developing countries out in the bush. It's portable, so they can actually chlorinate or
purify fairly large amounts of water fairly easily with limited resources. All they need is salt,
and they can chlorinate about 50 gallons in a minute.
50 gallons in a minute. That's a lot. I mean, that's more than one family needs.
Absolutely. This is basically a community-based solution. One purifier can support about a
thousand people. I have to say, when you said
chlorination and water, I was like, well, what difference does that make? But when you start
talking about clean water for communities, that's a much easier connection. Well, clean water is
huge for health in general. And I'm going to be showing this in my presentation,
but the World Health Organization has been quoted that, you know,
clean water or sanitation is a more effective way
of achieving global health than anything else.
So, you know, clean water and sanitation are really high on the list of things that need
to happen for communities.
And it's high on the list of things we can actually do something about.
Yeah.
And it's really pretty straightforward.
It's not a big technological reach.
This water chlorination has been, I mean, most of our water supplies, that's how they disinfect the water, is through chlorination.
And so it's been around.
The technology is fairly mature.
And so it's pretty effective.
But the way you're doing it isn't the way that my local water treatment plant is doing it. No, not at all. Most water treatment plants actually have the chlorine imported or delivered
in large containers. And so they feed the chlorine gas into the water stream in controlled amounts.
And they monitor that. And then there's certain processes after that.
So the other nice thing about chlorination is you can end up with,
they call it residual levels.
So there's a small amount of chlorine that's still left in the water,
even during distribution, that helps keep it disinfected.
But it tastes a little funny.
It does.
Of course, I'd rather it tasted funny than had bacteria in it.
Absolutely, absolutely.
And most of the funny taste is from some of what they call disinfection byproducts.
And they're not really, most of those are not hazardous, at least in the levels that occur.
So it's just more of a taste thing.
And actually that's an interesting comment because in these communities,
they're not familiar with that taste or they're not familiar with these kinds of things.
So adoption of it can be a little difficult sometimes.
So what communities are you serving?
What locale? Well, there's been several.
Central and South America, Haiti, and some places in Africa.
And you said it requires salt and then water.
And what's in between those two blocks?
Well, salt is made up of sodium and chloride. So the scientific trick is to
dissociate them. So you end up with chlorine and sodium. And essentially, you end up with
sodium hydroxide as a solution and chlorine gas. And then you just introduce the chlorine gas into the water stream. So it's an electrolysis type thing.
That's just breaking apart the ions?
Yes.
Okay.
And most of us have probably done that somewhere in our high school chemistry class
or something like that.
So although this particular system kind of kicks it up a notch, if I can use that term.
It actually uses some pretty high-tech materials.
There's titanium and some noble metals used as catalysts, and a semipermeable membrane. So what that does is it really improves the efficiency of the dissociation.
So it really doesn't take a whole lot of power to create the chlorine.
It's pretty efficient.
And that's important in areas that don't have clean water.
They seldom have good energy generation.
Exactly.
And this particular unit has evolved to basically use 12 volts. So anything that has a 12-volt system on it, you know, a 12-volt car battery or a motorcycle.
Solar panel?
Solar panel, anything.
And, you know, if you've got a bicycle that you can generate some voltage with, you know, that'll work too.
So what about watts?
I mean, how much power does it take?
It varies anywhere from about 15 watts all the way up to maybe 200.
And it depends on the concentration of the solutions that are involved.
And how do you separate the chlorine gas from the sodium hydroxide? Yes.
Those two different materials appear at the different electrodes.
The chlorine gas is generated at the positive electrode, the sodium hydroxide at the negative electrode.
And then you funnel the chlorine gas over to the water? Yes.
And there's various ways of doing that. This particular method actually uses a venturi tube.
So as the water flows through the system, it actually creates a vacuum and sucks the chlorine gas into the water stream. Are those the things that look like tornadoes when you see them going?
They can.
I only know the children's toy version, so you can say it's a black box.
Don't worry about it. Well, yeah, but most people don't understand.
I would think a lot of people don't understand what a Venturi tube is,
whether it's a carburetor or anything that creates a vacuum.
We're embedded software and hardware engineers.
We're not so many mechanicals.
Okay.
Okay, so, and then the chlorine gas gets into the water.
How do you know what the concentration is?
Well, that's where this paper comes in. Presently, the method is you have to check the water like every minute or so to see what the chlorine level is.
And so when it finally gets up to five parts per million, then you turn the unit off and you let the water set for an hour.
The problem is for very large installations, like maybe something with a thousand gallons or
so, that, that can take a while. And, you know, knowing human nature for what it is, I mean,
people get a little lax and, you know, then you start having errors.
You don't check it the same way every time, or you either over-chlorinate it or you don't chlorinate it enough.
And so there's a need for some sort of automation.
And there's various types of probes and sensors.
Some are relatively inexpensive, such as the ORP, or oxidation-reduction potential, probe.
And it's pretty similar to a pH probe. And they can get pretty involved. I think there's a thing
called an amperometric probe, and those are pretty expensive. They do a pretty good job of determining what the
chlorine level is, but they're expensive. And all these probes require a lot of maintenance.
They're not trivial to maintain. They have to be wet, and there's a lot of electronics that's
needed to support them. So finicky and expensive. These are not things that we want to put in areas that
don't have clean water. I mean, those are, it's just, you're not serving the right people there.
It's not the appropriate technology. Exactly. So what are you doing? Well, that poses the big
question. You know, is there a way to figure out what the chlorine level should be or is
without all this, all the probes or anything else? And if you look at the basic physics of it,
there's a clue there that, you know, under ideal conditions, one molecule of chlorine
gas gets generated for every two electrons that goes through the system.
And so it's like, well, let's see how far we can push that relationship. And so we ran an experiment
and in all classic experimental methods, you have to make sure the measurement systems are correct
and kind of do your design of experiment, figure out what you're going to do. Good science.
Yep, exactly.
And basically, lo and behold, we found out that that relationship held up pretty good
in light of the variations in concentrations that we see
in both salt and sodium hydroxide on both sides of the semipermeable membrane.
And so we were able to just very simply come up with an algorithm that says
if we figure out how many coulombs of electrons have passed over the membrane,
we can pretty well predict how much chlorine we've made.
And then you need the size of the water input to your device.
I mean, because the amount of chlorine per gallon is important.
Five parts per million is what you said.
Exactly.
So part of the equation is you put in, okay, I've got a 55-gallon drum,
and then it will tell you when you've reached five parts per million in that quantity of water.
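The arithmetic behind that prediction is just Faraday's law. Here is a sketch of the idea, my own reconstruction rather than New Life's actual firmware, and it assumes ideal 100 percent current efficiency, which Dave notes held up well in their experiments:

```python
# Back-of-the-envelope version of the coulomb-counting idea described
# above. My reconstruction, not New Life's firmware; assumes ideal
# 100% current efficiency: exactly one Cl2 molecule per two electrons
# through the cell.

F = 96485.0              # Faraday constant, coulombs per mole of electrons
M_CL2 = 70.9             # molar mass of chlorine gas, g/mol
LITERS_PER_GALLON = 3.785

def coulombs_for_target(volume_gal, target_ppm=5.0):
    """Charge needed to reach target_ppm (mg/L) of Cl2 in volume_gal."""
    volume_l = volume_gal * LITERS_PER_GALLON
    grams_cl2 = target_ppm * volume_l / 1000.0   # ppm in water is mg/L
    moles_cl2 = grams_cl2 / M_CL2
    return 2.0 * moles_cl2 * F                   # 2 electrons per Cl2

# A 55-gallon drum at 5 ppm needs roughly 2800 coulombs. Integrate the
# measured cell current until the accumulated charge reaches this,
# then shut off and prompt the single confirming test.
q = coulombs_for_target(55)
```

The unit then only has to integrate measured current over time and compare against this target, no chlorine probe required.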
Okay, so you have a known quantity of water. Yes, and that's pretty typical. Most of these
installations will have, you know, standard containers, like I said, either a 55-gallon
drum or a thousand gallons or something like that. And then you calculate how much chlorine you need,
and you produce that amount of chlorine, and it will diffuse through the water, and you don't have to measure it in the water
anymore. Exactly. Now, good engineering practices, you know, on something that critical, you always
have backups. So we haven't totally eliminated the testing phase.
Our standard operating procedure is, you know, the equipment will tell you, says, okay, we're pretty sure we've reached five parts per million.
Check it.
If we're there, then all you got to do is wait an hour.
But that's one check versus many, many checks.
Plus, it's automated.
I mean, it shuts itself off. One of the big problems was people would not pay attention.
And they get over-chlorinated.
Well, not over-chlorinate, but you'd burn the equipment up.
Oh, that would be even worse.
Well, yeah, because then that community or that group of people,
you know, have lost their chlorination potential.
So how much does this all cost for a community?
Well, the chlorinator itself isn't too expensive, but there are other costs.
You know, typically this organization has transportation costs and they'll install it. Typically it's about $2,000 to go over, train the people, install it, and get everything going.
And is there maintenance associated or is that part of the training that they get maintenance?
Yeah, that's part of the training, yeah.
And the goal is to make them self-sufficient.
Exactly. You know, all they need to do is go find the salt, and they can chlorinate as much water as they can.
So it's a lot cheaper than shipping chlorine.
Yes. You know, from country to country or across borders and stuff.
If you can do it at all, and that's a big if.
You still have potholes in the road and I don't want a poorly repaired vehicle hitting a pothole and decimating an area because it releases.
Right, exactly.
So it's much safer to transport.
Oh, absolutely.
I mean, there is no transport of chlorine, it's just salt.
Exactly.
So I jumped right into the project because I'm interested in this
because it's kind of an ingeniously simple solution.
Yeah, exactly.
And one of my comments is I'm not so sure that every engineer
doesn't have a Rube Goldberg gene in there somewhere.
Some do a better job of suppressing it than others,
but to me, the best solutions are the ones that are most elegant,
the ones that are really simple, that get the job done
with a minimal amount of components. And so I'm pretty excited about
this one. It's really simple. And I think it's going to be pretty effective.
So give me a little bit about, you said your organization and yourself. I did not properly ask you to introduce yourself.
Oh, okay. Well, I've been an engineer for a long time, ever since I was probably a toddler,
I guess. Engineers are born. Yeah. And worked in industry for a while, decided to kind of
go out on my own for a little bit.
But I came across this one organization in southern Indiana called New Life and was really struck with what they were trying to do with water purification.
And they were trying to figure out these problems of automation.
I said, well, I think I can help. So we jumped in, and it's kind of been a little bit of a longer journey
than I would have anticipated because a lot of these sensors
really didn't work the way we thought they would,
and that's kind of what triggered this new or this second generation,
if you will, approach of just doing something very simple.
And so we're pretty happy where we're at with that.
What's the deployment timeline?
Well, right now we've got an algorithm that works pretty good,
and we're putting together 10 beta units. We had to do a couple iterations
on the interface. Since we're measuring current and we're integrating it, you have to be a little
careful how you do that. Zero offset errors can really mess you up. It's that plus C that you get with integration.
Exactly.
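To see why a zero offset is so destructive when you're integrating current, consider this sketch, my illustration rather than the beta units' actual code: a constant offset integrates linearly with time into phantom charge.

```python
# Why a zero-offset error "can really mess you up": illustration only,
# not the actual beta-unit firmware. A constant ADC offset integrates
# linearly with time into phantom charge.

def integrate_charge(samples_amps, dt_s, offset_amps=0.0):
    """Rectangle-rule integration of sampled current into coulombs,
    subtracting a calibrated zero offset first."""
    return sum(i - offset_amps for i in samples_amps) * dt_s

# One hour of 1 Hz samples with the cell off but a 5 mA offset:
samples = [0.005] * 3600
bogus = integrate_charge(samples, 1.0)         # phantom charge accumulates
fixed = integrate_charge(samples, 1.0, 0.005)  # calibration removes it
```

Against a dose target of a few thousand coulombs that drift is small per hour, but without calibration it accumulates for as long as the integrator runs.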
But we think we've got those problems licked,
so we're currently in the process of getting the circuit designed
and getting 10 of them built,
and we're going to get those built and deployed hopefully before the end of the year.
Excellent. Where's your first deployment? I don't know. That's going to be up to New Life.
All right. Well, we should get back to the conference and see what everybody else is
talking about. Great. Well, thank you so much. Thank you for talking with me. No problem.
That's the show for this week. I'd like to thank all of the people who took time out of their busy conferences to speak with me.
John Ivo Stivoric of Jawbone, Sam Guillaume and David Rothenberg of Movea,
Steve Walsh of iSketchnote, and David Peter on Chlorine Control.
I'd also like to thank both the IEEE Global Humanitarian Technology Conference
and the MEMS Executive Congress staffs
for putting on such enlightening and informative conferences.
Finally, this show was much harder to put together
and make sound good, since I was in noisy places with fussy microphones.
Thank you to producer Christopher White for making it better.
And you, well, did you like it?
I don't know that this format is going to be a regular thing,
but let me know if you love it or hate it, or you just want to say hello.
Hit the contact link on embedded.fm.