In The Arena by TechArena - Inside the Living Edge Lab: Drones, AI, and Edge Innovation
Episode Date: February 10, 2025
Jim Blakley, of Carnegie Mellon University’s Living Edge Lab, shares insights on edge computing, AI-driven drones, private 5G, industry partnerships, and real-time innovation....
Transcript
Welcome to the Tech Arena, featuring authentic discussions between tech's leading innovators
and our host, Allison Klein.
Now let's step into the arena.
Welcome to the Arena.
My name is Allison Klein.
Today we have a fantastic interview lined up.
I've got Jim Blakley, Associate Director of the Living Edge Lab at Carnegie Mellon University
with me.
Jim, you and I have known each other for a long time, but this is the first time you're
on the Tech Arena podcast.
Welcome to the program.
Thanks, Allison.
It's good to be back, I guess I should say, at least in this new context.
Jim, why don't you give a little bit of introduction about you and what you're
doing at Carnegie Mellon and in your history in computing.
Sure. Yeah. I've been in the tech industry a very, very long time and I'm getting close
to my retirement time zone. And I spent a long time in the communications part of the business
in Bell Labs and AT&T.
And after that, I moved over to Intel where I did some communications work and communications
network work, but then focused more on cloud and visual computing, before I retired from Intel in
2017 and moved over to do things that are maybe a little closer to my roots in academia and research
at Carnegie Mellon.
I was going to say you've been leading the Living Edge Lab, which is a very cool name for a lab,
but why don't you talk about the charter of that lab at Carnegie Mellon and how does that relate
to innovation and technology?
The Living Edge Lab is led by Professor Mahadev Satyanarayanan, who is often considered the founding
father, or the godfather, of edge computing, having led it with some papers
and work that he did back in 2009.
And if you recall, 2009 was just a couple of years after AWS launched.
So most of the world was rushing towards cloud
at the time he was rushing towards the edge.
So we like to think of ourselves as one of the oldest
and leading academic organizations
focused on research in edge computing.
And for us, that's a focus on applications.
What sorts of applications benefit from edge computing
as opposed to being an ordinary
client application or a cloud application?
What sort of tools and frameworks do you need in order to be able to
develop and deploy those kinds of applications?
And what sorts of infrastructure do you need within networks and within
data centers in order to support delivery of applications for edge computing?
A particular focus in that space is how you build infrastructure that meets the latency requirements of edge computing.
Now, within that purview, I was reading about the type of work that the team takes on.
It's extending across so many topics that I'm interested in. But let's start with
drones. What do you see as the future of drone technology, and how has Carnegie Mellon approached
this in the lab?
Drones are used in a lot of different contexts and they come in small forms and they come
in large forms. For us, the focus has been on the really small drones. So think
about the typical hobbyist drones,
or the inspector who, on the last house I bought,
used a drone to fly up and inspect the roof.
Drones of that kind are relatively inexpensive,
they're very lightweight,
and they make for a nice platform for applications.
They also have a characteristic
that really matters, particularly in the US:
their light weight means you can fly them over populated areas.
Most drones that are much bigger than that can't fly over populated areas
without special permission.
It's just a safety hazard.
But the problem with that weight limit is that it means that the amount of computing
power that you can put on the drone is very limited.
These drones typically weigh on the order of 250
grams, and a typical smartphone can add on the order of another hundred grams. So they can't really
carry what we would think of as a baseline of computing right now, which is
a smartphone. They can carry smaller stuff, and they have cameras and can do some level of work
at that weight, but they really need help to do anything very sophisticated in AI. A
lot of what we're looking at is how you make them capable of flying
autonomously.
So, you know, you can give them a mission and they go off and complete that
mission and then come back without a human pilot having to get involved.
So a lot of our focus is in that area: using edge
computing to supplement the computing power that's on the drone with something
much more capable, at latencies that don't
involve crossing the backbone networks and the internet to
get to a cloud data center.
When you think about this type of technology and other technologies at the
edge, one thing that I think about is integration of AI into these use cases.
And I know you're working in that space as well.
I was looking at your writings and saw something called survival-critical machine
learning models.
Can you talk to me about that and what that means?
Yeah, so if you think about the way AI gets deployed today
in most environments, they'll go out and collect
a training set of some sort, depending on what the application is.
They will label that data set.
They will work in a laboratory environment and create
a machine learning model that they then
deploy out into a field environment of some kind. That's a normal workflow process.
So we're interested in what we refer to as live learning. And that is
during the course of the activity, we call it a mission. During the course of a mission,
you need to be able to get smarter about where you are. So there may be something that suddenly appears in your field of view.
A lot of our work is focused on computer vision kinds of applications,
although we have done some work in radar over the last year. But say something
appears in your field of view.
You don't know what it is, but you know that it's unique and you want to be able
to recognize that as you go forward.
So we have a project that we've had for a couple of years, called Hawk, that is
focused on enabling what we refer to as scouts, the entities out
running the mission, to go and collect data and ship back, at low bandwidth, only
the right stuff to look at, to somebody at a more centralized place who
can decide whether it is an object of interest
and what it is, train a model, and update it in the field in real time.
That's what we refer to as live learning, doing that in the field.
Survival-critical machine learning is the very important task of doing this while your
scouts could be under some sort of threat.
If you aren't really good at identifying the
threats coming at you, you may never complete your mission. So survival-critical
machine learning is about learning quickly enough: what does it mean to deploy machine
learning in live environments fast enough, and with the right priorities, to
accomplish an entire mission, to stay alive long enough to get to the end of the mission?
So that's the focus.
You can imagine that there are military applications for that, but there are also ones where you
might have animals that are threatening you or some other kind of threat that's out there
that you need to be able to quickly identify and address before you succumb to it.
And that really speaks to the importance of compute intensity at the edge to be able to
do that in real time.
Where do you see compute capability today in terms of actually
delivering that?
Is that something that is being deployed today or do you think the industry is still working
on that core capability?
I think it's true of all AI technologies,
machine learning technologies right now,
is that they're all on a rapid improvement cycle,
and it gets better every year.
We think of ourselves in general
as a user of machine learning technologies.
We're not trying to become machine learning experts
as much as figure out how can you deploy
some of these real world applications as quickly as possible and what are the gaps.
So I think in these sorts of gaps, these sorts of applications I was talking about,
it's as much about the system-level frameworks that allow you to do this effectively when
you potentially have multiple scouts all out at the same time, and they're on a
mission where they need to coordinate
with humans back somewhere,
and you have to manage that in real time.
So the base technologies,
the computer vision technologies,
have gotten pretty advanced.
They have faults as well, but they've gotten pretty advanced.
But putting that into a capability
where you can do something like this in real time
is still very challenging, but that's why we like it.
I think that one of the most important things, and we're getting ready to go to Mobile World Congress later this quarter, is reliable and performant networks within these edge environments.
And one of the things that piqued my interest was your deployment of a successful private 5G
implementation. And I think that's fantastic. We've seen some limited uptake in private 5G adoption.
What did you learn from building one?
And what do you think is the state of that technology
in terms of broad scale application?
Yeah, that's a great question.
And I will preface it by saying that we started
on private networks back in 2017, when
we built our first LTE network.
That one didn't last very long, because the technology was immature
and took real work to keep running.
And we built one of the first private LTE CBRS networks, CBRS being the US standard
of frequency bands for unlicensed spectrum use by anybody who wants to use it. That is a huge enabler of being able to do private networks in the U.S.
And there are other countries that have not identical, but similar
sorts of licensing schemes.
So that was a big barrier for a long time was getting the spectrum to be able to do
something like this, particularly in outdoor environments, which is our focus.
So we built that LTE network.
We learned a lot in that process.
And when we got to 5G,
we applied a lot of those lessons,
but we learned a lot more.
One of the biggest challenges in both deployments
and why we waited till 2024 to do our 5G network
was what I call solution completeness.
It's all well and good to say we have a 5G radio,
but there are all sorts of other components
like the core and the end devices and even simple things like the
SIM cards that go in the phones have to be 5G capable and have to be able to be
deployed in a private environment rather than by a carrier.
So connecting all those dots and making all the pieces of that solution work
together was probably the biggest challenge.
We have stories, you know, our installers were out on the roof of an old steel mill
deploying our outdoor radio in a hundred degree heat last summer.
Then I just had to have them back now when it was sub zero to do some work at the site.
Again, just a couple of weeks ago, you have all sorts of environmental
things you're dealing with.
We had one radio unit that flooded because it was the first one off the line
and they hadn't quite sealed it properly.
I could go on and on about my summer vacation, but seriously, there's a
reasonable integration challenge with this and it's something that brings me
to the main second point, which is that there's an expertise question here.
A lot of these technologies that are coming out now depend on a reasonable
level of carrier communications expertise that is not really widespread in the industry.
Those people don't typically live in the IT organizations that might
deploy a private network. By the same token, those people in general don't have a lot of experience
with enterprise level technologies and the capabilities
that exist to do software-based solutions. We actually used an open source 5G core called Magma that
required a fair amount of work, but it was all work in Docker and Kubernetes, more in the
cloud-native kinds of technologies. So bringing those two together requires an expertise that doesn't exist in many places.
When you think about a lab like the Living Edge Lab, where do you see the findings from the
lab and the work that you do having broader technical innovation impact?
And can you provide some examples?
Sure.
And this is where I always say our number one product as an institution is CMU graduates.
So the big thing we do is send them off into industry and beyond. In Intel speak, that's
number one. But more specifically to what we do beyond that is we have a big focus on
two things, open source technology. We do everything open source. So not only do we
publish papers,
but we have software that backs up what we say that's accessible to everybody to go and
build on. We think that's a critical step in being able to transfer technology to industry
and to other institutions. So that's a must-do on our part. It also shapes
the kind of research we do toward things that are not proprietary, things
that anybody could build on.
That's sort of number one.
And number two is we've had a long history of engaging with companies.
That's how I got here from Intel.
This has been an engagement between Intel and Carnegie Mellon.
And it means that we have projects that we do.
We just completed one, for example, with VMware.
They were VMware when we started and Broadcom when we finished, on edge VDI, where we worked with them
to look at what happens when you run VDI in an edge
environment, what performance improvements you needed,
and what you got.
And that brought us also into an engagement with T-Mobile,
where we were looking at trying to help them identify
how to reduce the latency within their 5G networks
to be able to better support applications like this.
So it's that engagement with players in the industry.
And we're working with the Army Artificial Intelligence Center that's
based here in Pittsburgh on some of the drone work.
So we're getting feedback from as many players as we can on the work we do, in
order to make sure it's finding a home, and we're bringing important issues back into our lab.
Thanks so much, Jim.
It's so cool to see what you've been able to accomplish with the team at Carnegie Mellon.
I encourage everyone who's listening to check out the Living Edge Lab and the research publications
that you're providing to the broader tech community.
One final question: where can folks find more information about the Living Edge Lab
and connect with the team?
So the easiest way to find us is to go to LinkedIn, search for the CMU
Living Edge Lab, and follow us there.
From that site, you will find everything that we have announced recently.
And if you click on any one of them, that will take you to our website, which has our
entire set of resources and activities and news announcements and articles and papers
that we have published over the last 10 years.
So it's a very rich source of more information there.
And all the new stuff goes out on the LinkedIn site.
Awesome.
Thank you so much for your time today.
It's been a real pleasure.
It's been great, Allison.
Thank you so much for having me on.
Thanks for joining the Tech Arena.
Subscribe and engage at our website, thetecharena.net.
All content is copyrighted by the Tech Arena.