Orchestrate all the Things - The next 10 years in AI: From bits to things, from big data in the lab to expert knowledge in the field. Featuring Landing AI Founder Andrew Ng
Episode Date: March 21, 2022. Did you ever feel you've had enough of your current line of work, and wanted to shift gears? If you have, you're definitely not alone. Besides the Great Resignation, however, there are also less... radical approaches, like the one Andrew Ng is taking. Ng is among the most prominent figures in AI: Founder of deeplearning.ai, Co-Chairman and Co-Founder of Coursera, and Adjunct Professor at Stanford University. He was also Chief Scientist at Baidu Inc., and Founder & Lead for the Google Brain Project. Yet, his current priority has shifted -- from bits to things, as he puts it. Andrew Ng is also the Founder & CEO of Landing AI, a startup working on facilitating the adoption of AI in manufacturing since 2017. This effort has apparently contributed to shaping Ng's perception of what it takes to get AI to work beyond Big Tech, in what he calls the data-centric approach. We connected with Ng to discuss the data-centric approach to AI, and how it relates to his work with Landing AI and the big picture of AI today. Article published on VentureBeat
Transcript
Welcome to the Orchestrate All the Things podcast.
I'm George Anadiotis and we'll be connecting the dots together.
Did you ever feel you've had enough of your current line of work and wanted to shift gears?
If you have, you're definitely not alone.
Besides the great resignation, however, there are also less radical approaches,
like the one Andrew Ng is taking.
Ng is among the most prominent figures in AI: founder of DeepLearning.AI,
co-chairman and co-founder of Coursera, and adjunct professor at Stanford University.
He was also chief scientist at Baidu and founder and lead for the Google Brain Project.
Yet his current priority has shifted from bits to things as he puts it. Andrew Ng is also the
founder and CEO of Landing AI,
a startup working on facilitating the adoption of AI in manufacturing since 2017. This effort
has apparently contributed to shaping Ng's perception of what it takes to get AI to work
beyond big tech, in what he calls the data-centric approach. We connected with Ng to discuss
the data-centric approach to AI
and how it relates to his work with Landing AI and the big picture of AI today.
I hope you will enjoy the podcast. If you like my work, you can follow Linked Data Orchestration
on Twitter, LinkedIn, and Facebook. Typically, the way I start my conversations with people is
I ask them to share a few words about themselves and their
backgrounds and how they got to be in the place that they're at the moment and so on. In your
case, I thought I would make an exception because, well, these things, I guess, are pretty much known
at least to me, partially, and obviously, I'm assuming that they must be known to many people who may be listening
as well. So instead, I thought I would start by taking that for granted, basically, and asking you
something a bit different. So you wear a lot of hats, you do a lot of things, you're involved in
many efforts, and someone in your place, I imagine, must have what I would call artistic liberty in a way.
So you're probably free to pursue whatever motivates you the most.
And so in that line of thought, I would like to ask you to share with me and people who may be listening,
what motivated you to start Landing AI?
I mean, you have a number of other engagements going on.
You have a professorship at Stanford, which is still active as far as I know.
You have co-founded Coursera and Google Brain.
And I wonder, by the way, if you're still in some way actively involved on those fronts.
And so, yeah, just give a little bit of an overview of the balls you're juggling
at the moment, and why did you choose to prioritize Landing AI?
Sure.
So, you know, after having started and led the Google Brain team some time back and also
ran AI for Baidu, the largest web search engine in China, I saw the rapid rise of AI in consumer
software internet. And I had started Landing AI because I wanted to take this amazing technology
that was working really well in consumer software internet companies and take it to other industries.
And frankly, it has proved much harder than I thought when I started the company.
When I started Landing AI, we wound up doing a lot of consulting work, frankly too much.
And it was through working on many customer projects, including many manufacturing projects,
that Landing AI started to develop the new toolkit and the new playbook for making AI work in manufacturing and industrial automation.
And so that's what we're focused on today, building a product called Landing Lens that
makes it fast and easy for our customers in manufacturing and industrial automation to
build and deploy visual inspection systems.
So happy to share with you also some of the unique technologies that we had to invent
and why the playbook for AI
adoption in consumer software internet did not work for these other industries and why we had
to come up with new ways to approach the problem. Okay, so it was industry-driven then. May I then
follow up by asking: why did you choose to focus on those industries specifically?
I think that having worked on consumer software internet, I wanted to work in manufacturing. I think it's one of those great industries that has a huge impact on everyone's lives, but it is so invisible to many of us.
Many countries, many developed economies have been lamenting the decline of manufacturing
in their countries,
certainly here in the United States.
And I felt that there's an opportunity
to take this AI technology
that has transformed internet businesses
and help all the people working
in this other industry of manufacturing as well.
I think there was also something that appealed to me
about moving from working on bits
to working on much more physical things.
And so I visited the manufacturing plants
that make everything from semiconductor chips
to smartphones to automobiles.
These giant factories all around the planet
are much bigger, much more impressive
than most people imagine,
and most people have never set foot in one in their lives.
And I was excited to take AI technologies
to make these things work even better.
Okay, thank you.
I think you mentioned at some point
that part of what you're trying to achieve
with Landing AI is actually, well, operationalizing, let's say, or productizing a specific approach that you're taking.
And I think that much of that is centered around the notion of data-centric AI, which, to quickly reiterate, is basically the idea that by now we do have machine learning models that are sufficiently developed,
so it makes more sense to actually focus on the data.
If you think of it as a system with moving parts, let's
fix the models and then focus on the data and try
to work on those. And I wanted to ask you,
first of all, it's an idea that seems to be getting traction.
I know that many researchers
and many practitioners are also adopting it.
I think Chris Ré is also advocating
for something very, very similar.
And so I wanted to ask you,
how hard has it turned out to be
to apply that in real-world use cases, in manufacturing specifically, in which you're focusing?
And I think that, having read a little bit about the way that Landing AI approaches that, it looks to me like it's a mix, basically.
So part of it is obviously the models and the product that helps people connect their data sources.
But it looks like services and consulting and engaging with people on the ground is also in the mix.
And I wanted to ask if my impression is correct.
And if yes, to what degree, basically?
So how much does Landing AI have to engage with people
and how much is up to the platform?
Yeah, actually, we're a platform product company,
not a consulting company.
So when we work with customers,
we do often have to provide a little bit of training
and a little bit of guidance on how to use our platform
because the LandingLens platform
embodies a lot of
the best practices in data-centric AI that we have developed.
But there is a challenge in that techniques for engineering the data fed to the AI system
are a fairly new technology area, where I think a lot of the tools we're inventing did not exist before.
And so we do have to provide some training to our customers
on how to engineer the data using our platform
in order to get the result they want.
But I would say we're purely a product company
that has to provide a little bit of training
and our goal is not to do consulting work.
In fact, I think that the consulting work is one challenge that I've seen a lot of
AI companies face, and this is why we wanted to build the LandingLens platform.
We realized that in consumer software internet, you can build one monolithic AI system to serve 100 million or a billion users and create a lot of value that way.
But in manufacturing, every plant makes something different.
And so every manufacturing plant needs a custom AI model, needs a custom AI system that is trained on their data.
And the challenge that a lot of companies in the AI world face is,
how can you help, say, 10,000 manufacturing plants build 10,000 custom AI systems unless you become a consulting company to do these one at a time?
So what Landing AI has done in the last year and a half, two years,
is put most of our focus on building a fast and easy-to-use platform
that makes it possible for our customers to do the customization.
And that is really core to our business model,
that we don't want to be the ones to do the customization work.
We instead help our customers, or in some cases,
the systems integrators,
do the customization work.
Okay, to be honest with you, I was thinking more that, well, obviously, consulting in terms of
doing things such as customization, like you mentioned, could be, and maybe is to some extent,
part of the mix.
But I was more thinking about, well, evangelizing and educating, if you will.
So advocating for the data-centric approach and whether you find that people find it natural
or you have to fight like an uphill battle to convince them like this is the way to go.
I will say it's been a mixed bag. It's been interesting talking about and evangelizing the
data-centric approach to AI. When I talk about data-centric AI, I'm often reminded of when about
15 years ago, some friends and I started to talk about deep learning. So the reactions we got back
then and the reactions I'm getting today are some mix of: I've known this all along, there's nothing new here,
all the way to this could never work.
That's from some people, but then there are also some people that say: yes, I've been feeling
like the industry needs this, and this is a great direction.
So we're getting all of those reactions, depending on who we talk to and their experience
with building AI systems.
There is one thing.
I was surprised.
It was just about a year ago, I think March 24th of 2021, that I started talking about
data-centric AI.
There's a video on YouTube where I gave that talk on MLOps and data-centric AI.
I have been very surprised, and many of my friends
have been very surprised, at how rapidly the global data-centric AI movement has taken off.
So that gives me confidence that I think we are onto something, that we just identified a problem
that a lot of people, not just us, have been feeling for a long time. And we are crystallizing the specification of the problem
and then also building tools to make it possible
for a lot more people to build AI systems
in this way to solve their own applications.
Yeah, I mean, sharing my own personal experience here,
it sort of directly resonated with me.
It was, I guess, like someone put a name to a notion
that was there already. So in that sense, I think
that the reaction you described is probably justified.
That's great. I hope you help us to
spread the word then about the data-centric AI movement, George.
Well, I kind of have been doing
that in my own way and to the extent possible, let's say. That's great. So moving on then to
models, actually, from data. You already kind of touched upon models when you said that, well,
in manufacturing, things are different from how they work for internet companies, for example,
because every factory is different
and therefore their AI systems have to be different.
And I guess the fact that you are targeting the industries that you are, so manufacturing,
led to the kind of specialization within machine
learning that you're taking, focusing on computer vision. And this has to do with the fact that
apparently computer vision is more relevant on factory floors than, say,
natural language processing or anything else. So I wanted to ask you then, I know that you have been advocating for an approach
similar to the one that people are taking for NLP. So in NLP, we now have what people call
foundation models: those big multi-billion or even trillion-parameter models that people then use as the basis to customize and tune
to their specific needs and specific domains.
So do you see that happening anytime soon for computer vision as well?
And is that something that Landing AI and you personally are involved in, or feel you should
be involved in?
Yeah, so I think it will happen.
There are multiple research groups working on building foundation models
for computer vision.
So I'm confident that we'll see more and more of that work emerge.
And then I think that there's actually one other thing, though.
I think that with better foundation models in computer vision,
it will reduce
the amount of data needed
for a specific
manufacturing application,
but it's also not a panacea.
It won't completely
solve the problem either.
And the reason is
it's been interesting
to discover
how every manufacturing plant
has a different standard
for what is a defect or not.
Or even take welding.
There are different professional associations on welding
in different countries and across the world.
And different professional associations
have different standards for what is a good weld
and what is a bad weld.
I learned this working with one of our partners
at Stanley Black & Decker.
So I think what this means is that even with foundation models,
every manufacturing plant will still need customization.
And I think that tools like Landing AI's LandingLens will be a big piece of that.
Foundation models are just another technology
that will make things a little bit better for LandingLens
and for other computer vision applications.
I presume that even though at the moment,
to the best of my knowledge at least,
there is no such thing for computer vision,
you may probably be applying something,
a similar approach, let's say, for your platform.
So I presume you probably are starting with a base model and then fine-tuning and customizing that,
as opposed to starting from scratch, right?
Yes. I think foundation models are a matter of scale.
It's not that one day something is not a foundation model
and then the next day it is.
So in the case of NLP,
we saw a development of models
starting from the transformer model
and the BERT model at Google,
then GPT-2, then GPT-3.
And it was a sequence
of increasingly large models
trained on more and more data
that led people to call some of these emerging models foundation models.
So I think that we'll see something similar in computer vision.
Many people have been pre-training on ImageNet for many years now, probably close to a decade, I guess.
And so I think the gradual trend will be to pre-train on larger and larger datasets,
increasingly on unlabeled datasets rather than just labeled datasets, and increasingly a little bit more on video rather than just on images. So what I think will happen is, just like in NLP,
there was a gradual progression of improvements.
What I'm seeing is in computer vision and machine vision, there's also that gradual improvement in the quality of the pre-trained models.
And then I think that to the public, you know, a lot of things look like gradual improvements when you're an insider, but to others it seems like they came out of nowhere.
But I think that as an insider to machine vision, I'm already seeing those things.
And then I think at some point, the public will declare it to be a foundation model.
But I can't predict exactly when the public will make that declaration.
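To make the pre-train-then-fine-tune pattern Ng describes concrete, here is a minimal sketch in PyTorch, assuming torchvision is available; the dataset layout, class names, and hyperparameters are hypothetical illustrations, not Landing AI's actual code.

import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

# Start from an ImageNet-pretrained backbone instead of training from scratch.
model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)

# Freeze the pretrained features; only the new head will be trained.
for param in model.parameters():
    param.requires_grad = False

# Replace the classification head for a two-class inspection task
# ("ok" vs. "defect" are hypothetical labels).
model.fc = nn.Linear(model.fc.in_features, 2)

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# Hypothetical folder layout: data/train/ok/*.jpg, data/train/defect/*.jpg
train_set = datasets.ImageFolder("data/train", transform=preprocess)
loader = torch.utils.data.DataLoader(train_set, batch_size=16, shuffle=True)

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

model.train()
for epoch in range(5):
    for images, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()

The pretrained backbone supplies general visual features, so only the small new head has to learn from the plant's limited examples.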
Yeah, I think you're right. That's how it works when it comes to people who are
deeply involved in a discipline and then the outsiders. To the outsiders, it's probably
when people like myself go out and declare, well, okay, we have a foundation model or
something along those lines. But to people who are actually engaged in working on that day by day, it's more like
a gradual shift, as you mentioned. There's one very interesting trend I've noticed, which is that
I've seen in my career many technologies that were improving rapidly year after year.
Let's say there's a technology that gets 60% better year after year. So to an insider, every year is just 60% better than the last year.
What's the big deal?
But that is exponential growth.
And so to people looking from the outside,
it looks like it suddenly came out of nowhere.
So I've experienced this in my life a few times
where I thought we just got better than last year,
same as every other year for a long time,
but suddenly the public felt like it suddenly came out of nowhere.
So that's been a very interesting feeling.
If you stop looking for long enough and then you start looking again, you're like, whoa,
what happened here?
But of course, if your gaze is fixed, it's like, well, just steady progress, as you mentioned.
I wanted to follow up on that by asking, well, your opinion on the
potential ways of getting there, let's say. And I want to connect it with something from the domain of
natural language processing. One of the most interesting conversations I've had around that
was a few years ago with David Talbot, former machine translation lead at Google.
And we were discussing with him, well, the current state of the art.
It was 2017, actually, but we were discussing the current state of the art
in natural language processing and machine translation.
And he said something very interesting. He said that applying domain knowledge in the form of linguistics for his field,
in addition to machine learning and deep learning, made lots of sense.
And he felt that that was the way forward for NLP.
As far as I know, of course, this is not really applied in what we currently call
foundation models for NLP. I may be wrong, but I don't really see it much. I wanted to ask,
do you think that this approach makes sense? And more specifically, do you think that it could
also make sense for computer vision? You know, it's a complicated question,
so please permit me to give a complicated answer.
So the trend I've seen in applications
where you have a lot of data,
including the way the foundation models
in NLP were built,
is that over time, with bigger datasets
and more sophisticated learning algorithms,
the need, the amount of domain knowledge injected into the
system, has gone down over time. But with an important caveat: this is true only for the problems
with very large datasets, which is not all problems. Actually, I remember,
in the early days of deep learning, in both computer vision and NLP, deep learning wasn't working that well.
We would routinely create a small deep learning model and then combine it with more traditional
domain knowledge-based approaches. But as the models got bigger and as we fed more data,
less and less domain knowledge was injected and we just tended to have the learning algorithm view a huge amount of data, which is why machine translation eventually demonstrated that
end-to-end pure deep learning approaches could work quite well when you have pairs of languages
where you have a very large amount of data. Now there's one big caveat to this though,
which is everything I just said applies only to applications where you
do have a lot of data. And this is often not true in manufacturing settings. If you go to a
manufacturing plant, if you go to a smartphone manufacturing plant, hopefully they have not
manufactured a lot of scratched or dented smartphones. And so they will not have a million
pictures of scratched smartphones
because they only made 10 scratched smartphones
and eight dented ones.
And so when you have relatively small data sets,
then domain knowledge does become important.
I think of AI systems
as having two sources of knowledge.
There's knowledge from the data
and knowledge from the human expert.
When we have a lot of data,
then the AI will rely more on that
and less on the knowledge from the human.
But for problems where there's very little data,
which is the case in manufacturing
and many other application settings,
then you do have to rely heavily
on the domain knowledge from the human.
And then the technical approach has to be,
how do you build tools to let the expert specify and
express the knowledge that is in their brain about what is a defect in
the manufacturing plant? Or, you know,
how do you read an electronic health record?
Or how do you do these other tasks where you just don't have a lot of data?
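As one illustration of combining the two sources of knowledge Ng describes, here is a toy sketch in Python; the rule, the threshold, and every function name are hypothetical, and this is not Landing AI's method.

from typing import Optional

def expert_rule(scratch_length_mm: float) -> Optional[str]:
    # Domain knowledge: a hypothetical plant spec says any scratch over
    # 5 mm is a defect, regardless of what the model thinks.
    return "defect" if scratch_length_mm > 5.0 else None

def model_prediction(confidence_defect: float) -> str:
    # Stand-in for a trained classifier's output.
    return "defect" if confidence_defect > 0.5 else "ok"

def inspect(scratch_length_mm: float, confidence_defect: float) -> str:
    # With little data, lean on the expert rule first; fall back to the model.
    verdict = expert_rule(scratch_length_mm)
    return verdict if verdict is not None else model_prediction(confidence_defect)

print(inspect(scratch_length_mm=6.2, confidence_defect=0.3))  # -> defect
print(inspect(scratch_length_mm=1.0, confidence_defect=0.8))  # -> defect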
Okay.
Yeah.
Thank you.
And well, following up on that, I wanted to get your opinion on what do you think is the
best way to actually do that in the cases where it does make sense?
So expressing domain knowledge.
And I'm going to quote here from the data-centric approach, a quote that says that
it needs to empower customers
to build their own models by giving them tools to engineer the data and express their domain
knowledge. And to me, this seems to point to approaches like, well, using more metadata,
having solid data governance in place, and, well, good old-fashioned AI in a way, symbolic AI approaches.
And there's been a trend, let's say, towards well, marrying those two worlds, whether some people call it neuro-symbolic AI, some others call it robust AI or hybrid AI or whatever
it is you want to call it.
But by any name, I think they're all pointing to the same direction.
So I wanted to ask, what's your opinion on the big picture, let's say,
and then how does that translate to reality on the ground?
So how does Landing AI help customers express their domain knowledge
in the cases where it makes sense to do that?
Yeah, so a lot of Landing AI's tools are designed around helping customers find the most useful examples to label, create the most consistent labels possible, and then also keep on improving the quality of the data, both the images and the labels, that is fed into the learning algorithm.
There's something that you said. I don't know that I agree that
neuro-symbolic AI is a massive trend. I know that there's an enthusiastic research community
working on it. And sometimes we hear from that community, you know, seemingly quite a lot on
Twitter. I think it's a great subject to be working on. But as a percentage of all AI applications,
I think what I'm seeing is that there's great research
and people should absolutely keep on working on that research,
but as a fraction of all AI applications, it's still small,
even if it has potential.
But I do think that in the short term, one of the best ways
for domain experts to express their knowledge
is to create a data set that clearly shows the AI what they mean. So in the case of, again,
using manufacturing as an example, it turns out that only an expert knows how a scratch on
a smartphone is defined, right? If it's a very shallow scratch, is it okay or not?
If it's a tiny scratch, is it okay or not? When is it a scuff mark and when is it a scratch?
These are actually surprisingly subtle distinctions that are very difficult for
anyone other than the experts to make. And so what Landing AI does is we provide
a set of tools to let these experts label data so that they can very clearly show an AI system.
These are examples of scratches.
These are examples of dents.
And these are examples of things that are not scratches or dents.
And that clarity of the data is a large part of how we help
them engineer the data.
You know, I've walked through manufacturing plants where I speak with one of the expert inspectors,
and we show him a plastic part and he will say, this is clearly a defect.
We show it to a different inspector and they go, no, this is clearly not a defect.
And so what I found is even in manufacturing plants with expert inspectors, there are inspectors that sometimes
have been disagreeing with each other for years without necessarily knowing it. And so, Landing
AI's unique tools and workflow are very good at helping these inspectors very quickly realize
where they agree, so let's not waste time on that, and where they disagree, so that they can together
hash out what the definition of a defect is. They want
to drive consistency throughout the specification
and therefore the data, which in turn turns out to be critical
for getting an AI system to get good performance quickly.
So this is one example of the things that the landing lens tool does
that enables customers to engineer the data efficiently.
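To make the disagreement-finding idea concrete, here is a minimal sketch using scikit-learn's cohen_kappa_score; the inspector labels and file names are hypothetical, and this is an illustration rather than LandingLens's actual workflow.

from sklearn.metrics import cohen_kappa_score

images = ["part_001.jpg", "part_002.jpg", "part_003.jpg", "part_004.jpg"]
inspector_a = ["defect", "ok", "defect", "ok"]
inspector_b = ["defect", "ok", "ok", "ok"]

# Cohen's kappa: agreement corrected for chance (1.0 = perfect agreement).
kappa = cohen_kappa_score(inspector_a, inspector_b)
print(f"Inter-annotator agreement (kappa): {kappa:.2f}")

# List the disagreements so the experts can hash out the defect definition
# on exactly these examples, rather than relabeling everything.
for img, a, b in zip(images, inspector_a, inspector_b):
    if a != b:
        print(f"{img}: inspector A says {a!r}, inspector B says {b!r}")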
What you just described, to someone like me who has a background in, well, symbolic AI,
you could say, so semantics and things like ontologies, this is like the typical issue
you get when you put basically two experts in a room.
They may agree on some things and they
also will disagree on a number of other things. So achieving so-called semantic reconciliation
is a very hard thing to do, as I'm sure you've found out in your own domain. And I wanted to
take that opportunity to ask you on your opinion on something else, which I kind of see as a foundational technology as well,
and maybe in a way connecting those different threads
that we've touched upon so far.
And that would be graphs.
And the reason why I see it as a foundational technology
is that when you talk about graphs as data structures,
as I'm sure you're aware, there's
a trend again going on in machine learning at this point with graph machine learning, which
is showing very good results. And the main reason for that is that it allows people to express
more information and use that in their models, sort of bootstrap them, let's say. And we also see
what I call the revitalization of knowledge graphs and semantics. So again, there's lots
of traction in the industry and obviously lots of hype around that. And to me and to people who
have been familiar with that field, knowledge graphs are specifically about semantics, really,
and how to express, well, different views of the world, basically, and how to reconcile them. So
I wanted to ask you, well, what do you think of this whole wide-ranging domain and whether
it's something that you are applying in some way, or do you see maybe yourselves applying
it eventually at some point?
Yeah, let's see. So I agree. Knowledge graphs are one of those important technologies
that has a huge impact, a huge commercial impact, and where the application in multiple
large companies such as the web search engines has a huge impact. But until recently, academia has not paid nearly as much attention
to knowledge graphs as is commensurate
with their current commercial impact.
And then I think, let's see, what do I say?
I think that knowledge graphs have been very important
for categorical data, for mapping relations between entities,
for NLP data, for web search queries, for search.
Landing AI has not focused as much on knowledge graphs
because we've been more focused on computer vision.
And I think hopefully there'll be more advanced technology
on knowledge graphs for images,
but I think their inroads into
images are still a little bit early.
And then I think the other technology you mentioned, graph neural networks,
graph databases and graph neural networks, that is actually another exciting emerging
technology where I'm seeing a small but growing set of commercial applications.
Again, that technology, which Landing AI did experiments with,
I see it more used for structured data,
NLP data,
reasoning between entities.
So I see applications in retail,
telecommunications, security.
I have to admit,
I've seen fewer applications of that
in computer vision applications.
So we've been less focused on that technology.
Maybe as we build more sophisticated tools
for reasoning about the metadata of the manufacturing
data, maybe we'll end up leaning more into graph databases
and knowledge graphs.
I admit that's not been a focus of what we've been doing so far.
But hopefully...
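For readers unfamiliar with the graph neural networks mentioned here, this is a minimal sketch of a two-layer graph convolutional network, assuming PyTorch Geometric is installed; the toy graph and the dimensions are hypothetical, and the transcript names no particular library.

import torch
import torch.nn.functional as F
from torch_geometric.nn import GCNConv
from torch_geometric.data import Data

# A toy graph: 3 nodes with 4 features each, connected in a line.
x = torch.randn(3, 4)
edge_index = torch.tensor([[0, 1, 1, 2],
                           [1, 0, 2, 1]])  # bidirectional edges
graph = Data(x=x, edge_index=edge_index)

class TinyGCN(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1 = GCNConv(4, 8)   # aggregate neighbor features
        self.conv2 = GCNConv(8, 2)   # project to 2 output classes

    def forward(self, data):
        h = F.relu(self.conv1(data.x, data.edge_index))
        return self.conv2(h, data.edge_index)

out = TinyGCN()(graph)  # shape: [3 nodes, 2 classes]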
Earlier in the conversation,
you did mention that you potentially see,
well, both Landing AI, I think,
and computer vision in general,
kind of expanding, let's say, to video as well, to include video and
sort of integrating that in the model.
So another sort of trend that's emerging is so-called multimodal AI.
And I think that would fall under that umbrella.
So I wanted to ask your opinion on that and actually also take the opportunity to
sort of reinsert, let's say, graphs in this conversation. There's
a research effort, actually from Stanford, which is called 3D Scene Graph, which is
used to instill semantics in images and video and sort of try to express what's going on in certain images by using
graph structures. I don't know if you're aware of that. It looked pretty interesting to me and
perhaps relevant for this multi-modal AI effort. So I wanted to ask your take on that. Yeah, let me see.
Yes, 3D Scene Graph.
I feel like this is some of Silvio Savarese's work, right?
I'm aware of it.
I have to say I'm not an expert in it.
I've heard Silvio talk about this work and thought it was a very interesting way
to reason about spatial relationships
between different things in a 3D scene.
Sorry, there was a lot in that question.
No worries.
I was just curious whether you've heard of it
and whether you have an opinion on that.
But if not, we can just skip and return to the main focus,
which was on multimodal AI.
Yeah, yes.
It's been interesting.
Over the past year, I've been seeing more research on multimodal AI and innovative approaches to combine different forms of input.
I think that's what happened in deep learning,
which is over the last decade, there was so much to be done just building algorithms for
a single modality, because we could do so much better just looking at images, or we could do so
much better just looking at text, that for a long time, researchers were very busy just building
unimodal algorithms. But now that the AI community is much bigger,
I think the community collectively has been focusing more attention on multimodal AI.
So that has been an interesting trend. Sorry, I'm not sure if that answers your question.
Yeah, it does. Thank you. I wanted to kind of shift gears here.
So far, we've been mostly focusing on the software side of things.
And well, for good reason.
This is where a lot of the action is happening.
And this is where you're mostly active yourself.
However, you were one of the first people that, well,
tried to leverage the existing hardware by using GPUs a few years back.
And I've seen you express the opinion that, well, obviously,
hardware is very, very important in enabling what the models can do
and the amount of data you can work with and what you can do with them and so on.
So I wanted to ask you if you're familiar with the current scene in AI hardware, AI
chips, as we colloquially call them, and what you think of that.
So where do you see the most promise, basically?
Oh, you know, I don't know.
I've been following the development of AI chips, but I am more knowledgeable about software
than hardware.
So I feel like the leading semiconductor
manufacturers, certainly including NVIDIA, AMD, and Intel, are continuing to pour massive amounts
of resources into this. And I think it is also exciting that there are many startups trying to build
their own AI chips. I think that the competition will be good for the industry and may the best team and the best technology win.
I do think that the AI world has bifurcated, though.
On the big data side, I think that if someone can get us 10 times more computation,
we'll find a way to use it up.
And having said that, there are also many applications where the data sizes are small.
In manufacturing, sometimes 50 images is all the data that exists in the world.
And so there, you still want to process the 50 images faster, but the compute requirements are actually quite different.
A lot of the focus of AI over the last decade
was on big data:
Let's take giant data sets and train even bigger neural networks on them.
And in fact, I helped to promote that movement over the years.
While there's still progress to be made with big models and big data, I think a lot
of AI's attention also needs to shift towards small
data because there are also a lot of applications where only 50 images in the world exist and
you just have to find a way to get it to work with 50 images or else it won't work.
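One common small-data tactic is to stretch a tiny dataset with augmentation. Here is a minimal sketch with torchvision transforms, with hypothetical file paths, offered as an illustration of the small-data point rather than Ng's prescribed method.

from torchvision import transforms
from PIL import Image

augment = transforms.Compose([
    transforms.RandomHorizontalFlip(),
    transforms.RandomRotation(degrees=10),
    transforms.ColorJitter(brightness=0.2, contrast=0.2),
    transforms.RandomResizedCrop(224, scale=(0.8, 1.0)),
])

image = Image.open("defects/scratch_01.jpg")  # hypothetical path

# Generate several plausible variants of each rare defect image.
variants = [augment(image) for _ in range(10)]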
Okay, thank you.
Well, there's a ton of other directions that we could pursue,
but I think we're almost out of time.
And I also think that, well, this is a nice,
well, epilogue because it kind of goes full circle
to where we started from.
So I guess we can wrap up here.
And thank you very much for your time
and well, extending your day
to accommodate this conversation.
Yeah, no problem.
Thanks again for getting up early to meet with me.
And I think that, you know, 10 years ago,
I think the biggest trend in AI was the shift to deep learning.
If I just leave you with one thought,
I think that the biggest trend in AI now,
the biggest shift that AI needs to make now
is a shift to data-centric AI.
And just like 10 years ago, when I actually underestimated the amount of work that we needed
to flesh out deep learning to reach its full potential. I think a lot of people today still
are underestimating the amount of work and the amount of innovation, creativity, and tools that will be needed to flesh out data-centric AI to its full potential.
But if collectively we all make progress on this over the next few years,
I think we'll enable a lot more applications of AI.
So I'm very excited about that.
Well, we can check back again in 10 years and see what progress has been made.
And well, hopefully you and I may stay connected
and check up even before that.
Yes, I will see you in 10 years,
hopefully before that.
Great.
I hope you enjoyed the podcast.
If you like my work,
you can follow Linked Data Orchestration
on Twitter, LinkedIn, and Facebook.