Embedded - 225: When Toasters Attack
Episode Date: December 7, 2017
Maria Gorlatova spoke with us about how the combination of devices and cloud computing will change the world as we know it. Maria's bio, blog, and LinkedIn page. Other topics: Federated Learning from Google, AWS Greengrass from Amazon, Black Mirror from Netflix. Note: we really should have talked about Amazon and FreeRTOS. I heard another podcast might have mentioned it. We'll try to get more info soon.
Transcript
Hello, this is Embedded.
I'm Alicia White alongside Christopher White.
We are joined this week by Maria Gorlatova from Princeton University.
Hi, Maria. Thanks for joining us today.
Hi, Alicia. Hi, Chris.
Could you tell us a little bit about yourself?
Sure thing.
So I am currently a research scholar at Princeton
where I work on an exciting area
that's currently known as fog computing or edge computing,
bringing computing closer to the endpoint devices.
My background is in the Internet of Things in general.
My PhD is from Columbia in developing ultra-low power energy harvesting tags for the Internet of Things.
And I have several years of industry as well as academic experience,
spanning actually a wide range of work from research to business strategy.
Excellent. We have plenty to talk about. Before we get more into that, I want to do lightning round where we ask you short questions and we want short answers.
Of course.
And if we are behaving ourselves, we won't ask you why and how and all of the other questions.
All right.
What science fiction concept do you think will be real in our lifetimes?
Oh, so many of them. Basically, you take the Black Mirror videos, and half of what you see there
will, fortunately or unfortunately, be real. Drones, for example, are up and coming
within the next few years. So is this idea of very responsive spaces. And augmented reality, of course,
that's going to be huge over the next 10 years.
What's your best marathon time?
4:04, and I'm working on cutting that down to 3:45.
That sounds probably 12 times faster than I could manage it.
What is your favorite embedded processing platform?
Oh, that is an unfair question.
It's something ARM-based, I guess.
That's a safe answer.
That's pretty broad, though.
What's your favorite programming language?
I have to say Python.
If you had to choose one, research or teaching?
Research. I'm a researcher, and I enjoy working with students,
teaching students one-on-one while conducting research.
Yeah, I can see that.
Alright, so now for something
longer. Can you describe what a
typical work day is like for you?
The way that
researchers like myself work, we
have a combination of things that we do hands-on, in one-on-ones and in larger groups.
So my typical day has a mix of all of that.
There is some element of working by myself, such as writing papers, for example, or doing experiments or developing something. Then there is a part where I work with students, helping them with their next steps and next directions.
And then I work on several big projects.
So projects that, for example, span multiple universities and some industrial organizations
or OpenFog consortium that spans many dozens of organizations across the world.
So on any given day, there is an element of that as well.
A call with a lot of people or some element of planning with a large group.
So really a mix of experimental work development as well as working with other people.
Sounds very normal.
It is indeed, yes.
It's, yeah.
Okay, so what are you working on now?
I mean, what is the OpenFog Consortium?
Yeah, what are you working on now?
So I'm currently working in the space that's called
fog computing, or sometimes it's called edge computing, and this is bringing computing
capabilities closer to the end users. So taking what was developed for the data center and bringing it
closer to the embedded devices. So, for example, bringing computing capabilities
towards your local gateways
or bringing a computing box into your house
or into your region.
So this idea of distributing
traditionally centralized capabilities
down closer to the end users.
This is the idea of fog computing and edge computing.
And it's really transformative in a way for IoT nodes, as it promises to bring additional intelligence and additional reactivity to the IoT. I'm working on several aspects of it: on characterizing it, on figuring out how to use it, for example, for distributed
machine learning or for making IoT devices more responsive.
On the consortium side, we are working on standardizing related fog computing architectures.
OpenFog Consortium is a very large nonprofit comprising about 50 organizations in industry as well as in academia.
We've released a reference architecture for fog computing platforms
and we're currently working on developing an IEEE standard
that will get the entire industry going towards a standardized fog computing approach.
What is this reference platform for fog computing?
So we've developed a reference architecture that we published earlier this year, in February.
It is publicly available on the OpenFog website, and it outlines the vision for fog computing from the business side as well as from
the technology side. It outlines what we call the important pillars or principles
of fog computing, the key aspects of the architecture that all the different players in the space need to follow
to develop solutions that are appropriate for fog:
distributed, scalable, secure.
The reference architecture was released earlier this year,
and currently an IEEE standards working group has been formed
to develop a standard for fog computing architectures.
The reference architecture will be one of the inputs into the standard, but of course, it being an IEEE standard, many companies in the space are welcome to contribute.
We plan to release an IEEE standard by April of the next year, basically.
And so this isn't a specific, you should use a Raspberry Pi or Cortex-M4F sort of architecture?
Yes, this is an IEEE standard that describes the functionality and the requirements rather than specifics of an individual platform.
How can it possibly do that when edge computing is such an enormous, I mean, it's a huge field,
it covers farming, it covers drones, industrial, these are just hugely different environments and
hugely different requirements.
Yes.
So the approach that we're taking towards standardization is to identify one global approach and then to create substandards that cover specific areas of the space.
This type of standardization approach has been tried in medical devices: there
is an overarching, what they informally call an umbrella
standard, and then there are a number of smaller standards that concern
specific elements of the space. Indeed, there are many different use cases for fog,
and we've outlined a number in the reference
architecture and are currently working on specific use case definitions that address some parts of the space.
For example, I'm working with several other people.
I recently helped release a use case that outlines fog and autonomous driving, this vision of connected vehicles and connected cities,
the vision of how fog enables a fully connected transportation experience, basically.
So you're defining the architecture and laying out some ideas for how it should be implemented at a high level,
not at a very tactical implementation level. Is that right?
That is correct, yes. Currently, given the state of the space, a lot of the work goes in that direction. My own hands-on work goes, of course, more towards the technical details, towards some specific elements of this.
It's hard because some of these things we are getting now.
I mean, we're seeing many of the edge computing devices happen,
and we're getting distributed sensor networks, but they are disorganized.
And how are we going?
What is the path to go from, okay, ship it now,
to let's ship something that works together?
Yes, that is a very, very difficult question.
I think that's the overall state of the IoT right now
is that we are in a space
where we are creating a fragmented field
where getting things to work together
is extremely difficult and extremely challenging.
What makes me personally excited about OpenFog is that it represents many of the leaders
of the space.
So Intel is there, ARM is there, Microsoft represents the cloud computing side, Dell,
Cisco, Hitachi, many others.
What I think is very exciting about the reference architecture is that it represents an agreement amongst these companies as to what the architecture is supposed to look like.
You do, of course, always have this pressure to ship immediately, and there are some companies that are very keen on creating their own solutions.
So that is definitely one of the challenges in the space.
One of the things we hear a lot about with the edge computing
and embedded systems that are connected to the internet is security
and how that is, wow, that's going to bite us and it's going to bite us often.
I mean, it's just, it has the potential for some serious disasters. And we've had some
botnet attacks from televisions. It's insane. And when our toasters attack, what's going to happen?
Is this one of the ways you're convincing people to join and adopt the standards by helping them
sort out these very common problems like security?
Yes, security is actually a very big part of the work that the consortium is doing.
There are pages and pages of specifications and discussions that span the world as there
are different standards related to security in different parts of the world.
What I think are some of the interesting elements of security in edge computing,
they're not so much in what we immediately see as in where it can take us in the future.
I think what's very exciting, there are two elements of fog computing
that promise to make IoT secure as a whole in the longer term.
And one of them is local processing of data.
With edge computing, and with approaches like federated learning, for example,
you no longer have to transmit all of your data to the cloud.
You can keep your own local data locally while still performing complex cloud operations.
This means that this has the potential to fundamentally improve our privacy,
that we no longer will have situations where, for example,
videos that are captured locally are available for everybody on the cloud when something is hacked.
This is a little bit further down the road.
There are some deployments like this right now,
but this is more of a research three- to five-year plan
more than right now.
The other very interesting element of edge computing
is that you can run complex intrusion detection algorithms locally
on edge nodes. We currently have some of this work going on in our lab, and I think a number of
other people do as well. Ultimately, currently, without things like fog and edge,
you are connecting insecure devices right to the cloud.
But with edge, what you can do is put in this smart middle box
that will be able to contain an attack, for example,
or detect locally malicious behavior.
I'm very excited about this.
I think that this will bring IoT security closer to the level of enterprise security
within the next five to seven years.
What is this smart middle box?
What you can do with fog computing is that
you can put on your local edge node, for example,
you can put there security modules
that would be, for example,
looking at what's typical of the local traffic
and filtering out what's not typical
or performing some local anomaly detection.
So the smart middle box is a functionality within the edge node or a fog node.
So is that a functionality that would go into my router, or would it be?
Your local gateway, your local computing device, yes, pretty much.
Or some sort of home IoT hub.
Right, that's exactly right, yeah.
That was my other question.
Is it the Philips Hue hub that goes and converts between the internet and whatever the Hues speak?
Yeah, for example. That box, theoretically speaking, working together with the
cloud, will in the future be able to run complex security-related algorithms. And I think this is one of the very promising
angles to keeping the IoT secure in the longer term.
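As a minimal sketch of what such a local anomaly filter might look like (purely illustrative; the device names, thresholds, and statistics here are assumptions, not anything from OpenFog):

```python
from collections import defaultdict

class EdgeAnomalyFilter:
    """Toy traffic-anomaly detector for a local gateway (illustrative only).

    Keeps a running mean/variance of per-device message rates and flags
    devices whose current rate deviates by more than k standard deviations.
    """

    def __init__(self, k=3.0):
        self.k = k
        self.stats = defaultdict(lambda: {"n": 0, "mean": 0.0, "m2": 0.0})

    def observe(self, device_id, rate):
        """Record one rate sample (e.g. messages/sec) via Welford's algorithm."""
        s = self.stats[device_id]
        s["n"] += 1
        delta = rate - s["mean"]
        s["mean"] += delta / s["n"]
        s["m2"] += delta * (rate - s["mean"])

    def is_anomalous(self, device_id, rate):
        """True if rate is far outside this device's learned baseline."""
        s = self.stats[device_id]
        if s["n"] < 10:          # not enough history yet; let traffic through
            return False
        var = s["m2"] / (s["n"] - 1)
        std = var ** 0.5 or 1e-9  # guard against a perfectly constant baseline
        return abs(rate - s["mean"]) > self.k * std

# Usage: a toaster that normally sends ~1 msg/s suddenly floods the network.
f = EdgeAnomalyFilter()
for _ in range(50):
    f.observe("toaster", 1.0)
```

The point of running this on the edge node rather than in the cloud is that the baseline reflects local traffic patterns, and an attack can be contained before it ever reaches the wide-area network.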
So I have kind of a higher-level
question. Is
the drive for this
is it
a purely technical
capability drive, like, oh, now the
embedded processors are so much more powerful than they were
five, ten years ago
that we can push more functionality
down to the edge?
Or is it a necessity thing?
Like, well, the security model
we have right now for IoT
just doesn't work,
and moving more stuff.
So is it just a happy coincidence
that both things are converging?
Or was there a drive before to say,
well, we don't quite have enough computing power, but what can we do to start addressing this problem?
I think there are several converging trends in this space. One is that
the embedded devices are becoming better. But for drones and augmented reality and virtual reality,
these types of high-end applications,
you really need computing available at low latency.
So one of these drives is this kind of existential scenario
that things like augmented reality, for example,
they just cannot move much farther than they are right now unless there is a local computing capability.
The response time from the cloud is just too high.
So that's one of the patterns. Another one is that as the IoT starts scaling,
we just cannot keep sending everything to the centralized network.
It becomes too expensive, and it becomes very costly for network designers as well.
For companies like Comcast, for example, the less their network is loaded and the more control they have over their network, the better it is for them. So there's the question of how to keep up with this coming flood of information
and with the coming avalanche of IoT devices,
some of which produce very high-volume, high-velocity data,
while still keeping the network side of things manageable.
So part of this is a networking problem.
Some of it, yeah. Trying to put nodes on the network that can serve data, instead of having a single computer being the
final source for the data.
Right, right. So the data and content as well. For things like virtual reality, for example,
it's not that you...
Let's say that there are a few of us
in one place
who want to enjoy a similar experience.
It doesn't make sense
for all of us
to keep reaching for the information
far away.
It makes more sense
for some of the content
and some of the processing to be done
closer to where we are.
It's weird to think that there's processing being done not where we are.
I mean, this is the cloud. Part of me is like, okay, this fog thing is odd. But
really, it's the cloud part that's completely bizarre.
Yes. I mean, yes.
The cloud is just somebody else's computer.
Yeah, this is the reversion to normal.
Yes, we somehow got used to this idea that for us to do something locally, the signal has to travel
to Seattle, and that is somehow okay for us. But I think what we were able to do with the cloud
is solve a lot of actually very important challenges of
computing. How we distribute workloads, how we ensure
availability, how we ensure
safety. And so one of the challenges of Fog is to be
able to capitalize on all of those
developments while keeping the computing distributed.
It's funny, this kind of reminds me of a swing back and forth that happened, and continues to happen, I think. But when I was first
working, the workstation I got was an X terminal and it went back to some
larger computer server.
In fact, I had one at home
and it would go over ISDN
and I would do most of my work on that.
And then they replaced those with workstations
that had enough computing power.
And then a few years later,
the whole Citrix thing happened.
So we've been going back and forth on this
and now it's happening to consumer devices.
I have no point. It's funny.
Yeah, there's a pendulum, they call it,
in the distributed systems community.
And you talked about the avalanche of data and IoT devices,
and part of me living in the bubble that Christopher makes for me
that has all of our devices connected to all of the other devices.
I'm like, how can we get more connected?
What is this avalanche of data?
What are we going to be seeing?
Basically, currently, the amount of video
that is being produced is at the level
that can really start to overwhelm networks. Once you install enough 4K video cameras in
different places, you will not be able to transmit this raw feed to the back end. That becomes
prohibitive. A related area is things like smart cars or drones, they produce enormous amounts of information, a lot of it having to do with video or other sensor data that is dynamic, that is changing fast.
Video can keep getting better and better as well.
So there's a bit of a race there, I think, in terms of how much sensing data you can generate and how your network can support it.
I've been taking a little bit of machine learning, and so I've been looking more at videos and how intelligent devices can be with object identification and image segmentation.
So you're saying that instead of sending back a 600 by 40 pixel, 24 bit color image, we're just going to send back and say, oh, there was a dog.
Yes. Yes, and in addition to that, you can create very interesting scenarios
where what you immediately send to the cloud is that there is a dog.
But if you want to investigate this further,
then you can query local devices for additional information.
Somebody in my lab is working on a very similar idea for cars,
that a car camera sees an object that,
if the object is not of immediate interest,
you just say that you saw this object and that gets stored.
On the other hand, if an investigation is required,
if the object starts becoming interesting,
then additional processing is performed in the mix of local and cloud conditions.
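That pattern might be sketched like this (a toy, with the detector, the local storage, and the event format all invented for illustration; a real deployment would use an actual ML model and a real uplink):

```python
class EdgeCamera:
    """Illustrative edge node: run inference locally, send compact labels,
    and keep the raw frames at the edge in case the cloud asks for detail."""

    def __init__(self, detector):
        self.detector = detector    # stand-in for a real object-detection model
        self.local_store = {}       # frame_id -> raw frame, kept at the edge
        self.uplink = []            # compact events actually sent to the cloud

    def process_frame(self, frame_id, frame):
        label = self.detector(frame)
        self.local_store[frame_id] = frame   # raw data never leaves the node
        self.uplink.append({"frame": frame_id, "label": label})  # e.g. "dog"

    def query(self, frame_id):
        """Cloud asks for the full frame only when an event becomes interesting."""
        return self.local_store.get(frame_id)

# Toy detector: "frames" are just lists of object names here.
cam = EdgeCamera(lambda frame: "dog" if "dog" in frame else "nothing")
cam.process_frame(1, ["tree", "dog"])
cam.process_frame(2, ["tree"])
```

The bandwidth win comes from the uplink carrying a few bytes per frame instead of raw 4K video, while the query path preserves the ability to investigate after the fact.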
Why do we need cloud conditions for driving? That seems like a single-system thing.
Oh, for things like, for example, querying this centralized data that is collected. Not for the immediate actions
of driving, not for the control of the car, but more for, for example, processing the information that is collected from a number of cars.
So is this like when there's a large debris in the road and after five cars have seen it?
For example, yeah.
Then it can be dispatched to be cleaned up?
For example, yeah.
Or for training, right?
If you're getting a lot of data from a lot of cars, you
can improve your algorithms, but you don't want to do that on the cars.
Yeah, that is one possibility. Or there is a use case that is talked about quite a bit, this
element of detecting the Boston bombers: you
had to have the feed
from many different people's cell phones
and many different cameras to
get the whole idea of the situation.
That's not terrifying.
Do you think we will have
fully self-driving cars
soon?
And I am going to let you define what soon is.
So I have a bit of a personal angle on this,
is that self-driving cars tend to come from places
where the weather is very good, and not from Canada.
So I cannot see in anything that is happening right now
how a car could navigate a winter storm in Ottawa, for example.
I like the story of autonomous cars and autonomous driving scenarios first appearing in things
like, for example, trucks, in truck convoys in specific places.
Fully replacing a human driver on a global scale, that's probably some number of years away. On the other
hand, a lot of current developments in making cars smarter are, of course, very
exciting. So while I don't necessarily see us replacing human drivers entirely
in the next 10 years, say, I think there is a lot that cars, and smart cities working with cars, can do to really improve
everybody's experience and safety and the flow of traffic in the city.
So I'm definitely, I'm very excited about the space, but I do not quite see us getting
to the fully autonomous level 5 autonomy state
within the next decade.
Certainly not in every situation.
Definitely, yes.
But when autonomous cars
were first talked about, based on
a Canadian experience, I was entirely
skeptical because I just could not see it
happening.
It is quite a bit easier in California.
Yeah, we might have weather control before.
How does the cloud play a part in that?
Are we going to get convoys before we get level five?
Well, so in terms of convoys, I think that we will get convoys first, yes,
because there is a very good use case for them,
and because you get a lot of the things that are already figured out for them.
The role of the cloud is more on learning coordination on the longer scale,
as well as this idea of kind of offline intelligence.
So cloud is a big part of the story,
but not for an individual autonomy, I think.
That makes sense.
And you mentioned offline learning.
Is this where something has gone horribly wrong, and it takes the data and then tries to use that data to improve everybody else's experience? Is that what you mean by that?
That, and simulations, for example. You can run simulations,
what-if scenarios, just a lot of training that you
would not be able to do in local conditions.
Some of this is kind of what is already done, though,
with mobile phones, that a lot of learning happens in the cloud.
Yes, that makes a lot of sense. I mean, I take a picture, and
it makes more sense if that goes to the cloud for identification
than if it's on my phone for identification. But then when I start worrying about privacy and things,
it goes back to my phone because now the big cloud has trained its parameters well enough that it can send them down to my phone
and the inference can be done.
That's exactly right, yeah.
I should actually define some of those terms. Inference is when you run the machine learning
algorithm feed forward. You're not training it, you're just using it.
It's one of those words that I really like,
but I always forget to define,
and then 10 minutes into a conversation,
somebody says, what?
And so there we go.
That's inference.
But on the big cloud,
you need the big cloud to train your model
because it's an incredibly computationally intensive thing.
And so you can't do that on your phone as easily.
There is some research these days
and some developments into distributing learning as well,
distributing the element of actually generating the models.
And this has to do with privacy,
that you may not necessarily want to transmit
your raw data to the cloud.
So there's a very interesting line of work in using your local devices to do part of
your local training while using the cloud as a purveyor of global data sets, as well
as a coordinator for these local operations. So this is like when Facebook said,
if you feel you're a victim of revenge porn, send us your pictures unclothed.
And so instead of that, maybe we can just send them the training weights.
Maybe, yes. That would be a very good scenario. Yes, indeed.
How do you distribute machine learning like that?
It's a totally new concept to me.
There is a line of work in this space that is called
federated learning.
It's coming out of Google.
You have to manipulate the details of your learning algorithms. There are specific ways
that you can
use your local partially trained
models and combine
them globally on the cloud. It goes into
the depths of
the mechanisms
of stochastic gradient descent.
Okay.
And that's how you do the back
propagation usually. There are other methods, but that's the
most popular. Yeah, that's right.
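The combining step Maria describes can be sketched roughly like this (a drastically simplified toy, not Google's actual federated averaging algorithm; the linear model, learning rate, and per-device data are all made up for illustration):

```python
def local_sgd(w, data, lr=0.1, steps=20):
    """A few local gradient-descent steps on the model y = w*x, squared loss.
    This runs on the device; `data` never leaves it."""
    for _ in range(steps):
        grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
        w -= lr * grad
    return w

def federated_round(global_w, device_datasets):
    """One federated round: each device trains locally from the shared model,
    then the server averages the returned weights, weighted by data size."""
    local = [(local_sgd(global_w, d), len(d)) for d in device_datasets]
    total = sum(n for _, n in local)
    return sum(w * n for w, n in local) / total

# Two devices whose private data both follow y = 2x; only weights are shared.
datasets = [[(1.0, 2.0), (2.0, 4.0)], [(3.0, 6.0)]]
w = 0.0
for _ in range(10):
    w = federated_round(w, datasets)
```

After a few rounds the shared weight converges toward the true slope even though the server only ever sees model parameters, which is the privacy angle discussed above.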
And how is your work focusing on this whole
stack of cloud to device different
than maybe focusing on just the cloud or just the device?
So what's happening right now in the more academic part of the space
is that so far the embedded people and the cloud people
really do not talk to each other.
Yeah, it's worse than hardware and software engineers.
Even within a single company, that's true.
That's true, yes, very much.
So you have this issue of there's no common language.
There are no common abstractions.
There are no common even understandings of what it is that different devices are supposed to be doing.
If you try to actually fundamentally address the question
where I should be placing a particular type of functionality,
that is currently pretty much impossible to answer. You have to
do experiments, you have to do engineering design towards it.
In the longer term, the way this space can really improve, how we can all start moving forward
towards more integrated solutions, is to start thinking from the device and to the cloud together.
Ideally, we do not want to,
for every single functionality that we develop,
we should not have to experiment with its placement.
We should just be able to specify a set of parameters that are important for us
and get that placement computed for us automatically.
Going towards that vision, though, this really requires
looking across both devices and the cloud, and this is something that
is very new
and something that needs to happen
but is extremely challenging because of this
heterogeneity and mismatch between
IoT and the cloud.
It's difficult because, on one hand, we want our edge devices to be,
usually, embedded, which to me means resource-constrained. Sometimes they need to be cheap. Sometimes they need to be power efficient.
Yes, yes.
Sometimes they need to be physically small.
And so you use those constraints and then you do the best you can with putting as many features out there as makes sense, given the constraints.
And everything else goes back to the cloud as long as you have a good connection or some sort of connection that can get back.
Is that the sort of thing you mean?
I mean, for me, it's very tactical.
I'm very much an industry person who spends my time trying to figure out
how to make those resources be as constrained as possible
and as power efficient and cheap and small as possible.
Is that what you're looking at or is it different?
Yeah, pretty much.
So the question of what goes into your endpoint device, what goes in your
edge node, and what goes in the cloud, that is actually a very interesting
question to answer, as you can make many different engineering trade-offs along
the way. If your local device, your local device meaning your edge node, for example,
is fairly capable and is within the reach of your embedded device,
then a lot of the functionality of the embedded device
can go to the edge node.
Or it can also go to the cloud, local or remote.
And where it fits best is an interesting question
which has different trade-offs associated with it.
Power consumption, size of your endpoint device,
security comes into play as well.
Cost, whether you want to transmit things to the cloud at a cost or you want to solve them locally, which could be cheaper.
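One hedged way to picture that kind of placement decision (all tiers, numbers, and weights below are invented for illustration; this is not from the OpenFog architecture): score each candidate tier on latency, device energy, and network cost, and pick the cheapest for the application's priorities.

```python
# Candidate tiers with illustrative per-invocation characteristics:
# a constrained device runs the computation slowly but uses no radio;
# edge and cloud are fast but cost energy and bandwidth to reach.
TIERS = {
    "device": {"latency_ms": 200, "device_mj": 50.0, "network_kb": 0.0},
    "edge":   {"latency_ms": 15,  "device_mj": 4.0,  "network_kb": 20.0},
    "cloud":  {"latency_ms": 120, "device_mj": 6.0,  "network_kb": 20.0},
}

def best_placement(weights):
    """Pick the tier minimizing a weighted cost; `weights` encodes what
    the application cares about (latency, power, network)."""
    def cost(tier):
        c = TIERS[tier]
        return (weights.get("latency", 0) * c["latency_ms"]
                + weights.get("power", 0) * c["device_mj"]
                + weights.get("network", 0) * c["network_kb"])
    return min(TIERS, key=cost)

# A latency-critical AR workload lands on the edge node under these numbers.
ar = best_placement({"latency": 1.0, "power": 0.1, "network": 0.01})
```

The interesting part is that changing the weights, not the code, moves the workload between tiers, which is one way the "specify parameters, compute the placement automatically" vision could look.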
And so you have three layers in your model.
You have the widget, the device, and then you have the edge device.
And so if, for example, we use Fitbit, the tracker is the device, and then your phone
is usually the edge node, and then that goes back to the cloud.
And if I use the hue lights, the lights are the devices, and the module is the edge node,
and then the cloud is the cloud.
I guess it uses 802.11 to go to the cloud.
Is that the right architecture?
Sometimes we go direct from device to cloud.
So what is currently imagined in a lot of fog computing work is inserting additional layers as well.
So having your endpoint device as your device, there is your local edge node.
But then there could also be other nodes that are between the local edge node and the global cloud. So, for example, your local CDN can serve as one of these
intermediate layers. In a smart city, for example, you can have computing boxes that are
in your home and on the street corner and in your zip code, such as, for example, the point of presence for somebody like AT&T.
So while we are starting with a three-tier architecture,
the movement, I think, is going more towards multi-tiers in the future.
There's currently work, for example, out of Amazon. Amazon released a functionality that allows you to execute some computing on CDNs, which traditionally only served content.
They only served images, for example.
But now they've also added the ability to perform some computing there. And I think this is a trend that we will continue seeing,
this ability of not just being able to place things locally or on the cloud,
but having multiple options around you, basically,
in different physical and logical distances from the end device,
as well as with different trade-offs such as power and costs.
And so when you talk about fog, you were talking about this whole thing,
from cloud to however many layers it takes down to the end point.
Is that right, or are you focused on...
I think currently the terminology is...
It's an emerging field, so things are changing.
Terminology currently is starting to standardize
on calling this three-tier architecture edge computing
and calling the more multi-tier architecture fog computing.
Okay.
And then, that was actually where I was headed: what are the terms?
And then there's the Internet of Things, which is a phrase I hate so much.
And I wasn't the person who came up with security is the S in IoT,
because I have said that, but I shouldn't take credit for it.
I don't know where I got it.
Yeah, there is no S in IoT.
Anyway, does that have a more formal definition
or is that just where all of the vendors have chucked their fog
and edge and embedded computing things into one pile?
You mean the Internet of Things?
Yeah, does it have a formal definition?
I don't know.
I think it's a global field that is defined fairly loosely.
Okay, I agree.
I don't think there is a...
What are the most important technological issues
regarding the whole fog computing?
Is it technology or is it people?
I guess that's a two-part question.
Maybe a 12-part question.
Yeah, from the technology point of view, there's a lot, of course.
One of the strong points of the cloud
is that the cloud works. If you can reach the
cloud, it will do what it
promised. There are
mountains of engineering expertise
that went into making that happen.
There are special provisions
for things like redundancy,
safety, backups,
you name it. It's mountains
of research work, mountains of engineering work.
But all of that work was done under the assumption that you have a data center where
you have multiple thousands of redundant machines that are similar to each other
and that data center is under your physical control.
To take all that and distribute it to heterogeneous devices, to regionally distributed devices,
to devices that are in physical control of different people,
that is extremely challenging.
There are a lot of elements to that that require non-standard solutions.
So I think this being able to provide services in fog that are as reliable as the cloud
is definitely solvable, given that this has been done for the cloud,
but it will require a lot of engineering creativity
and a lot of engineering work. The other element
is just making use of fog. For things like, for example, AR, or even improving
mobile experiences on mobile phones. Let's say that you have this local node.
What really is the best way of using it?
Is it just to offload your computing, or is there a way of maybe restructuring your computing
to make your user experience even better?
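The offload-or-restructure question can be sketched as a simple decision rule: ship work to the local node only when network transfer plus remote compute beats computing locally. This is an illustrative model, not any vendor's API; the function name and all the numbers below are assumptions.

```python
# Illustrative sketch: when is it worth offloading a task to a nearby edge node?
# All names and numbers are hypothetical; a real system measures these at runtime.

def should_offload(task_cycles: float,
                   local_hz: float,
                   edge_hz: float,
                   payload_bits: float,
                   link_bps: float,
                   rtt_s: float) -> bool:
    """Offload only if round trip + transfer + remote compute beats local compute."""
    local_time = task_cycles / local_hz
    edge_time = rtt_s + payload_bits / link_bps + task_cycles / edge_hz
    return edge_time < local_time

# A heavy vision task on a slow wearable, with a fast edge node on local Wi-Fi:
heavy = should_offload(task_cycles=2e9, local_hz=1e8, edge_hz=3e9,
                       payload_bits=8e6, link_bps=1e8, rtt_s=0.005)
# A tiny task over a slow link: network overhead dominates, so keep it local.
tiny = should_offload(task_cycles=1e6, local_hz=1e8, edge_hz=3e9,
                      payload_bits=8e6, link_bps=1e6, rtt_s=0.005)
```

The interesting design space Maria points at is the third option this model leaves out: restructuring the computation itself, rather than just choosing where to run it unchanged.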
Things like what I was mentioning before, using edge nodes for security, for anomaly detection, that's taking local traffic patterns and local device patterns into account to make this possible.
It's very promising, but it will require a lot of work.
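The anomaly-detection idea mentioned here can be sketched with something as simple as a rolling z-score over local traffic counts; a real edge node would use richer models, and the class name, window size, and threshold below are assumptions for illustration.

```python
# Illustrative sketch of edge-side anomaly detection on local traffic patterns.
# A rolling z-score flags samples far from the recent local baseline.
# Window size and threshold are hypothetical.

from collections import deque
from statistics import mean, stdev

class TrafficAnomalyDetector:
    def __init__(self, window: int = 20, z_threshold: float = 3.0):
        self.history = deque(maxlen=window)
        self.z_threshold = z_threshold

    def observe(self, packets_per_sec: float) -> bool:
        """Return True if this sample looks anomalous vs. the local baseline."""
        anomalous = False
        if len(self.history) >= 5:  # need a minimal baseline before judging
            mu = mean(self.history)
            sigma = stdev(self.history) or 1e-9
            anomalous = abs(packets_per_sec - mu) / sigma > self.z_threshold
        self.history.append(packets_per_sec)
        return anomalous

detector = TrafficAnomalyDetector()
baseline = [100, 102, 98, 101, 99, 100, 103, 97]
flags = [detector.observe(x) for x in baseline]  # normal traffic, no flags
spike = detector.observe(5000)                   # sudden flood: flagged
```

The point of running this at the edge rather than in the cloud is exactly what the conversation describes: the baseline is built from this site's own traffic, so it reflects local device patterns.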
From the point of learning as well, there's clearly a promise,
there's clearly a lot of potential,
but to go from this potential to actual solutions that we can all use,
that will take a lot of work.
From the point of view of people,
you've hinted at that before, that one of the challenges is that there are many different players in the space who have
different interests. And there is definitely this push of getting
something to the market quickly. That is, of course,
a challenge that needs to be addressed.
Yes, of course.
I mean, and there are so many things that drive that.
Some of it is profit.
Some of it is availability of new technology.
The increase of processing power at the embedded
systems, at the devices, is simply incredible. And balancing this idea of, okay, now there's
going to be an IEEE standard for this thing that I can just skip around. I can just BLE to my phone and
have my phone be the edge piece and poof, I'm done. How do you get people excited
about adding layers? How do you convince them this is worth it?
Well, I think it's not that you are adding layers.
It's that if you can BLE to your phone,
that is an example of a fog deployment as well.
So it's not that you are creating layers, it's that you are enabling different
elements to work together.
The selling point for the kind of standardized approaches here is the same as the
selling point for the standardized approaches
in the internet.
There is a set of
common approaches that
allow everybody to build
distributed systems
on top of it. And you can play
in your local settings
but then
you are forever restricted to
your own approach as well.
If different organizations take a somewhat similar approach, then we can create a global substrate, if you will, that application developers can build on.
So it's less about adding layers above my device and more about adding intelligence away from the cloud?
Pretty much, yes.
That's an easier sell, I think.
How do you think all of this will impact consumers in 10 years?
What I think is, with the IT systems that we have right now,
basically, we talk a lot about smart systems, smart objects.
They're not actually smart right now.
They're mainly just connected.
Your smart object is an object that is connected to the internet; there is no real smartness to it.
What is becoming possible right now is to bring actual intelligence, adaptiveness, to have actual responsive spaces that truly are smart, personalized, that respond quickly.
I think that is a huge transition that we already are on that road
with connecting devices around us, but we only scratched the surface.
We are only at the very beginning of what can be possible very shortly.
Can you give some examples?
Yeah, definitely.
So, for example, currently what your Fitbit does, right,
it just counts your steps.
Counts your steps, counts your simple...
It does more than that, but yeah, okay, we'll go with that.
Yeah.
What your devices could do is actually, for example,
give you feedback on your performance.
Given where you are right now,
what should you be doing next?
If this is the exercise that you're doing,
then will you be more efficient if you change your gait?
Will you be doing better if you now find a hill to run up on?
Or, given your personal history and your goals, should you be doing something else?
This is an example of a smart, adaptive, personalized experience with a device.
It's the personalized that's the exciting part.
Yeah, personalized.
There's also this element of having different devices connected with each other.
Currently, a lot of experiences with embedded devices are actually completely disjoint.
Even like AR, for example, I can theoretically play with another person.
We can create a common experience, but it's very, very rudimentary. Once we can actually
create experiences that are more interactive and common to different people, it's just a very different set of experiences even.
Yes.
And, okay, so you mentioned augmented reality a couple of times,
and I keep wanting to ask more.
What technology there, what are you using to get augmented reality, and what do you think you will be using in five or ten years?
So augmented reality currently comes in a couple of flavors. There is phone-based augmented reality and headset-based.
And phone-based is like playing Pokemon?
Pretty much, yes.
And headset-based, that's HoloLens, for example.
They're both very exciting.
I'm more excited about the headset-based technology,
and I'm also very excited about the direction
of building augmented realities into cars.
So building this experience into a car windshield, for example.
Then we can play driving games.
We could.
Where this is becoming very interesting right now
is that the headsets are already there, but with the help
of things like edge computing you can make them much better. You can make them lighter, you can
make the batteries last longer, and you can enable interactive experiences between
multiple people, and also secure them. You can really take it from more or less a very interesting,
a very cool prototype like HoloLens into a practically useful,
commonplace, scalable technology.
I'm very excited about the capabilities of edge computing to do that.
So when I think about augmented reality and cars,
I have this idea where the car windshield would highlight things that I might otherwise miss.
The bicycle next to me or the pedestrian who wants to cross the street, and they would become red,
even if they were wearing gray, or it's night and they're wearing black.
And I would be able to see them far more
clearly. And possibly on the other side, it might let the pedestrian know that my eyes have looked
at that object. I have looked at the object that represents the pedestrian, and it turns something about the car green, so the pedestrian knows I know he's there.
Is that all plausible? Is that coming? Can I have it tomorrow?
I don't know about tomorrow, but within the next five to ten years, that's definitely where the technology in AR, but also in smart cities,
is heading.
The ability of different participants in transport to communicate with each other,
that's what you need a cloud or fog for. The ability of not fully automating the driving, but making it safer,
that's an excellent use of increased capabilities for in-car nodes, as well as for the elements
of cloud-to-device cooperation.
This is extremely exciting and very interesting.
I haven't thought about the angle of giving the feedback
to the cyclists like this before.
That could really work.
Having been in San Francisco recently,
where I was both a driver and a pedestrian at different
times, either one was terrifying. I mean, it's a free-for-all. Christopher, do you have any questions?
Yeah, well, to kind of come back to square zero, I'd like to get your opinion. Say you're developing a new product. You're
an embedded engineer or somebody who's influencing the overall architecture of the product, and it
might have some cloud portion. What would you suggest is the best place to go look at
what's the current kind of recommended architecture,
whether taking fog into account or conventional standard IoT things?
Where should somebody start looking at,
okay, I need a new architecture today?
What are the best practices?
I don't want to give endorsements here. So yeah, basically, currently
there are several cloud providers that are all very interested in that specific business.
So all the big four cloud providers are trying to offer options for engineers in this space.
Arguably, Amazon currently is farther ahead than others.
And this is just my own opinion from what I have seen.
It seems to me that they offer some ready-to-go architectures for IoT
that are somewhat farther ahead than IBM or Google or Microsoft.
That being said, all of them do go in this direction.
So, for example, for offering edge computing functionality,
Amazon recently released what they call the Greengrass architecture,
AWS Greengrass.
Microsoft has a solution that goes
in a very similar
direction.
I'm reluctant to give
a specific recommendation, but
let's just say
that when I went looking at solutions,
I looked at AWS
and Microsoft.
It's interesting.
I don't think about either one of those having edge or device,
I don't want to say capabilities,
but I wonder if that means that one of them will be coming out with,
here's a platform, here's an example kit, and here are some sensors and some features
and some ways to use our super duper cloud computing out there on the edges.
So currently it's AWS Greengrass, that's what AWS released last. They have partnerships with
embedded vendors as well. There is an ecosystem
that they are building around it
that says which devices can run it,
and there's a set of guides.
Well, this is something
I need to go look at.
Neat.
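As a rough, vendor-neutral illustration of the pattern platforms like AWS Greengrass implement (run logic locally, keep working while offline, sync to the cloud when connectivity returns), here is a sketch. The class, the threshold, and the stubbed publish function are all invented for illustration; this is not the Greengrass API.

```python
# Vendor-neutral sketch of the edge pattern products like AWS Greengrass
# implement: run logic locally, queue messages while offline, and flush
# them to the cloud when connectivity returns. All names are hypothetical.

sent = []
def publish_to_cloud(msg):
    """Stub standing in for the cloud endpoint (e.g., an MQTT publish)."""
    sent.append(msg)

class EdgeNode:
    def __init__(self):
        self.outbox = []      # messages waiting for the cloud
        self.online = False

    def handle_reading(self, sensor_value: float) -> None:
        # Local logic keeps running with or without connectivity.
        if sensor_value > 40.0:   # hypothetical alert threshold
            self.outbox.append({"alert": "overheat", "value": sensor_value})
        self._flush()

    def set_online(self, online: bool) -> None:
        self.online = online
        self._flush()

    def _flush(self) -> None:
        if self.online:
            for msg in self.outbox:
                publish_to_cloud(msg)
            self.outbox.clear()

node = EdgeNode()
node.handle_reading(45.0)     # offline: alert is queued locally, not lost
node.handle_reading(20.0)     # below threshold: nothing queued
queued_while_offline = len(node.outbox)
node.set_online(True)         # reconnect: the queued alert is delivered
```

The design point this illustrates is the one from earlier in the conversation: the device stays useful when the cloud is unreachable, which is exactly what a pure cloud-only architecture cannot promise.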
Before we close the show, Maria, I have a couple of questions.
First of all, how did you get into marathoning?
I needed a release.
So I was not sporty as a child.
I was not sporty in college. When I got to grad school, I realized that I needed a way of seeing progress today from the work that I put in today.
Research is this very long-term thing where you work very hard for months and you do not necessarily see progress.
So I was looking for something that gave me shorter-term feedback and also cleared my head.
So throughout grad school, I went from a couch potato to an Ironman runner.
that's amazing how did you how did you convince yourself to do this? I mean, part of it is it has to be a habit. You have to do it every day or at least three times a week. But how did you start? How did you get over the hurdle of this hurts and I don't like it? That's an interesting question.
I think I enjoy the experience.
I enjoy just being out there.
So that makes the pain,
the difficulty of raising your
volume, worth it.
I also had a couple of very good friends
who were
marathoners and triathletes,
so I could see it being done. I had good advice,
I had good suggestions.
Yeah, this goes back to your parents saying don't hang out with a bad crowd.
Because the truth is people you hang out with do matter.
And then the other question, you take improv classes.
Do you still take improv classes?
Unfortunately, I took them for two years while I was in New York City.
Currently out in New Jersey, it's a little bit far.
It's a long drive. I will take them as soon as I'm in a place where it is feasible with my schedule.
I just love it. I think improv is the most understated way of doing many things.
It's excellent for developing trust in other people.
It's a very specific type of creativity,
where you create unique experiences based on your experience and the experience of others.
It's unmatched in developing and further exploring this specific type of creativity.
And it's tremendously helpful for public speaking as well. Once you are able to make a complete fool out of yourself on
an improv stage, there is nothing that anybody at a conference can throw at you to throw you
off your game.
Basically, it's kind of like doing balance exercises for your mind, because you get used to getting off balance and then finding a
position again that you can talk from.
I think improv's magical and I've always wanted to do more of it.
Oh, do you do it as well?
No, not really.
I mean, we started to take a class in college and the timing didn't work out,
but I kind of want to go back sometime, someday.
It's one of those things.
All right, we should let you get on with your weekend.
Maria, do you have any thoughts you'd like to leave us with?
Well, so the closing thoughts on my side: with pervasive systems, with embedded systems, we are at the point where it's the tip of the iceberg for what is coming. It's the beginning of a very exciting, a very evolving part where embedded will truly become a part
of our everyday lives, of all of our everyday experiences.
And I am extremely excited about the promise of this field.
And I'm also very excited at what we can do to start informing other fields as well. Once we do more in digitizing our experiences,
we really can start impacting things like,
for example, studies of organizational dynamics or economics.
Once we can have data about humans
and about humans in everyday environments,
we can do better in so many other fields in life.
So I see this as one of the potentially transformative areas of the space,
something that we cannot do quite yet right now,
given that our pervasive deployments are still fairly limited comparatively.
But once we enable the true smart cities and smart environments,
this will be truly transformative for many other areas of our lives as well.
Yeah, cool.
Our guest has been Maria Gorlatova,
Associate Research Scholar at Princeton University's electrical engineering department.
Maria, thank you for being with us.
Thanks for having me.
Thank you also to Christopher for producing and co-hosting.
Thank you very, very much to our Patreon subscribers.
Thank you for letting me send a microphone to Maria.
It just really makes my
life easier. So I really appreciate that. And of course, to the rest of you, whether you are
subscribers for Patreon or not, thank you for listening. I think our quote this week is going to come from E.L. Doctorow.
Is that Cory Doctorow?
No, it's some other Doctorow.
Okay, from the sky.
Writing is like driving at night in the fog.
You can only see as far as your headlights.
But you can make the whole trip that way.
Embedded is an independently produced radio show
that focuses on the many aspects of engineering.
It is a production of Logical Elegance, an embedded software consulting company in California.
If there are advertisements in the show, we did not put them there and do not receive money from them. At this time, our sponsors are Logical Elegance and listeners like you.