Utilizing Tech - Season 7: AI Data Infrastructure Presented by Solidigm - 18: Merging Data Science and AI with Mel Greer of @Intel
Episode Date: December 22, 2020. Just as data analytics transformed business intelligence, so is artificial intelligence transforming data science. In this episode, Mel Greer of Intel joins Chris Grundemann and Stephen Foskett to discuss this transformation, which is impacting businesses of all sorts, including Intel itself. Intel's strategy has evolved, and their hardware platforms are following, with the company developing hardware and software to serve AI-driven data analytics. The conversation then turns to the challenges of implementing unbiased AI, from explainable AI to diversity of data and thought within businesses. Hosts and Guests: Mel Greer, Chief Data Scientist, Americas, at Intel. Connect with Mel on LinkedIn. Stephen Foskett, Publisher of Gestalt IT and Organizer of Tech Field Day. Find Stephen's writing at GestaltIT.com and on Twitter at @SFoskett. Chris Grundemann, a GigaOm Analyst and VP of Client Success at Myriad360. Connect with Chris on ChrisGrundemann.com and on Twitter at @ChrisGrundemann. Date: 12/22/2020 Tags: @SFoskett, @ChrisGrundemann, @IntelAI, @Intel
Transcript
Welcome to Utilizing AI, the podcast about enterprise applications for machine learning,
deep learning, and other artificial intelligence topics.
Each episode brings experts in enterprise infrastructure together to discuss applications
of artificial intelligence in today's data center.
Today, we're learning a little bit more about what Intel is doing with artificial intelligence.
First, let's meet our guest, Mel Greer.
Mel, tell us a little bit about yourself.
Sure.
Thanks, Stephen.
I'm Melvin Greer.
I'm Intel's Chief Data Scientist for the Americas.
And I'm really responsible for taking all of the assets Intel has in its artificial
intelligence portfolio
and building solutions that help our customers gain insight
from all the data that they're collecting.
Excellent, I can't wait to have this conversation with you
and let's also meet the co-host for today, Chris Grundemann.
Yeah, hi there, thanks Stephen.
Chris Grundemann here.
I'm a research analyst for GigaOm,
and I also work at a company called Myriad360, which is focused on cybersecurity and data
center infrastructure, and I'm really happy to dig into this conversation.
So, Mel, I think a lot of people, when they think of Intel, they think of the iconic ads and the
amazing little tune and the familiar logo, and of course, the CPUs inside everybody's
computer. But why is Intel focused on AI and data science? And how does that fit in with
Intel's future growth projections? Well, it's interesting because you may have noticed we had
a brand redo recently. We've got a new logo and a new musical tune that goes with it. And that's really
very much related to the transformation that Intel is going through. Really, we're focused on
making world-changing technology that will enrich the lives of every person on the planet.
That's a very lofty goal. The way we intend to do that is by making sure we help our customers understand how to tap
the potential of all the data that they are processing, all the data that they're aggregating
and building on. And our vision is to be the trusted advisor around which our customers
realize the potential of their data. Our ability to become a data company is extremely important
because we are changing from having data be the exhaust portion of our silicon architectures.
You know, how fast can we get it out? How low power can we get it out? And we're turning that
into the fuel portion of our innovation strategy with customers. What is the context? What's the content? What kind
of characteristics does this data have? Then we can now shape our silicon architectures and our
world-class technical capabilities to ensure that our customers continue to see Intel as an innovator
and that we provide them the world-class capabilities that Intel has in play
for artificial intelligence and data science. Our experience tells us that many of our customers,
almost all of them, are also on this similar transformational journey of turning themselves
into a data company. Oil companies want to be a data company focused on oil.
Transportation companies want to be a data company focused
on transportation.
And so we are absolutely involved
in helping these kinds of organizations,
no matter where they are in the Americas,
and for me, in the Americas,
help them realize that potential.
So we can actually have a beneficial impact
on every person on the planet.
That's awesome. And I mean, you know, what's interesting to me is that that concept, it's really a striking visualization, right, of data being the exhaust from silicon to data kind of being the fuel of this new era of data centricity across industries.
So I wonder, you know, how much of that is,
is hardware versus software?
Is it still a very hardware centric play
or is there a lot more wrapper around it?
Is it professional services
or maybe the broader question there, I guess is,
you know, what is Intel's AI strategy
and why is it compelling?
I'm really glad that you asked that Chris,
because all of these things are absolutely wrapped together. Our strategy really starts off with an anti-pattern, and I think you highlighted
it just now. Many people would think our strategy revolves around hardware, when in fact, our
strategy's entry point is software. Understanding the relationship between software optimization
and hardware is an extremely important factor in AI adoption.
If you were to take a look at the acquisitions
that Intel has made as the largest corporate
venture capital firm in the world in the AI space,
what you'll see over the last seven years
is that 60% of those companies have been software companies. And so our entry point
to our AI strategy really revolves around software. And it's got this major discussion
point around being able to appeal to application developers, data scientists, and programmers.
It focuses on leveraging the ISV capabilities of those software companies that Intel Capital has invested in and partnered with.
And it really evangelizes a mechanism associated
with optimization of software.
When we take software and optimize it for our silicon,
we're getting a 4 to 10x improvement in performance
in training and inference.
And for data scientists running an application in AI,
this is significant.
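Mel's 4-to-10x number comes from Intel's own silicon-tuned libraries (oneDNN and the optimized frameworks built on it), but the underlying principle, that the same math restructured in better software runs much faster, can be shown with a stdlib-only toy sketch. Everything here is illustrative, not Intel's toolchain:

```python
import math
import time

def dot_naive(a, b):
    # One interpreted multiply-add per element.
    total = 0.0
    for x, y in zip(a, b):
        total += x * y
    return total

def dot_optimized(a, b):
    # Same arithmetic, pushed down into C-implemented builtins.
    return sum(map(float.__mul__, a, b))

a = [float(i % 7) for i in range(200_000)]
b = [float(i % 5) for i in range(200_000)]

t0 = time.perf_counter(); r1 = dot_naive(a, b); t_naive = time.perf_counter() - t0
t0 = time.perf_counter(); r2 = dot_optimized(a, b); t_fast = time.perf_counter() - t0

# Identical answer (up to float rounding), typically at much lower cost.
assert math.isclose(r1, r2, rel_tol=1e-9)
print(f"naive: {t_naive:.4f}s, optimized: {t_fast:.4f}s")
```

The gap here is only an interpreter-level effect; the hardware-aware optimizations Mel describes operate on vector units, caches, and accelerators, which is where the training and inference gains come from.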
The second part of our strategy is about building
the best in class silicon platforms in the world.
And in this space, we're having some really great success.
We've aligned our overall AI strategy
with our OEM partners and scale and channel partners.
So as an example, we really focused on Dell and HP and others to help them align on this
strategy of software first.
And this is really a critical part to it.
One of the things that we've done is we've created what's called a fit for purpose CPU
or compute capability.
And the reason this is important is because
every workload has its own different characteristics.
And if I use three use cases as an example,
it'd be pretty clear.
Now autonomous vehicles, camera and computer vision
for predictive maintenance and analytics,
and then object and facial recognition.
These sound like AI and they in fact are,
but each workload, each use case
has a different characteristic associated with the data.
And so Intel has a family of compute capabilities
that are specifically designed to marry with workloads
to take full advantage of the characteristics
and then bring the insights
to our customers faster. And then lastly, the focus is around training, upskilling,
and reskilling the workforce. Every Intel employee is getting trained on AI. We have a baseline set
of foundational skills that every employee needs to have, because AI is one of the four
core pillars upon which this company's growth is dependent. And not only are we training our
internal employees, but we're using variations of that same curriculum to train our partners
on the OEM scale and channel side, so we have a consistent message through. So leveraging software,
building the best hardware platforms, and developing training so we can take advantage of
AIoT use cases are the three components of our strategy. Since we've spoken before
about some of the Intel hardware platforms, I wonder if you can talk a little bit more
about specifically what hardware platforms Intel is using in what ways in AI.
I don't know if you have that information.
Yeah, sure, of course.
And so when we think about this fit for purpose discussion
around compute, it starts
with our Xeon processing capabilities.
Most people understand that because about 97% of the training that's going on in AI
is done on Xeon.
But then it moves from there to include many of our deep learning accelerators, things
like our field programmable gate arrays, our FPGAs.
These capabilities in the Stratix 10 and Arria 10 families
provide specific capabilities around edge computing
and sensor fusion of data at the edge.
And there's a particular trend that's associated
with this edge computing and sensor fusion
that I'll talk about in a bit.
But then we also have, you know,
our application specific integrated circuits or
ASICs. These have specific capabilities designed to support graphics in deep learning applications.
Then we have this acquisition we did with Movidius. This particular accelerator in deep
learning is focused primarily on camera and computer vision.
And then Mobileye, another acquisition we did, is particularly focused on autonomous vehicle workloads.
Now, that's what's in production today.
We just launched our own GPU, which is also now available because there is a small portion of workloads that really work well in
GPUs. And so Intel now has our Xe GPU that's available. But when we move beyond that,
even when you look farther down the roadmap, we can see things like our Loihi neuromorphic computing
chip. It's under R&D right now, but it's designed to take full advantage of the research
we've been doing in neuroscience and brain research and pull the way the brain learns
into silicon. So we have synapses and neurons. Of course, the brain has over several billion,
and we have several million in our Loihi and Pohoiki Beach systems, but it illustrates how important it is for Intel to continue to innovate. And then there's our quantum computing research, where we're working through the extremely low
temperatures, the ultra-sensitivity to vibrations and noise, solving the problems of decoherence
and entanglement, and understanding how the possibilities associated with superposition
can help us drive even more innovations in cybersecurity, healthcare, and public sector.
So we have a full complement of these compute
platforms. And these platforms are married to specific workloads. And I think if you listen to
the way that I described them, you can see that this really does represent a fit-for-purpose
compute strategy. I think that's interesting because most people, I think, might have imagined
that Intel would
have more of a general purpose compute strategy when it came to AI that essentially, again,
because the CPUs are so famous, they might think, oh, well, Intel's all about Xeon or
something like that.
But what I'm hearing from you is that's really not the case, that Intel is really focused
on delivering different applications, different
platforms for different needs. Yeah, you're right, Stephen. The reason for that is because artificial
intelligence and data science does not exist without context. There's this discussion about
baselining individual hardware platforms, but the value that Intel equates to artificial intelligence is not measured in
just flops or in power consumption, but is really measured in terms of the solutions
that our customers are trying to arrive at. When we talk about being able to provide COVID-19 response
and contact tracing, and being able to orient remediation to places that are hardest hit or to demographics that are
disproportionately affected, the context is in the application space, in the software space.
And so being able to bring these two capabilities together in a performance way is really, really
what artificial intelligence and data science is about.
Yeah, that's super interesting. And I definitely, you know, that interplay of artificial intelligence
and data science is really interesting to me personally. And, you know, just hearing you kind
of go through, you know, some of what you just said, but even hearkening back to the different
hardware platforms and different workloads that they apply to, it makes me wonder, you know,
inside of Intel's AI strategy, you know,
is there, are you all focused on deep learning training specifically, or you're also doing
inference? I know you mentioned the edge. So I'm just wondering, maybe you could talk a little bit
about, you know, how data science and AI play in both the training and inference and how that plays
into Intel's strategy. Yeah. So from an industry perspective, this concept of AI is kind of an umbrella, a nomenclature.
But in reality, as a practitioner, we're really looking at analytics, right?
So this idea of descriptive, diagnostic, predictive, and prescriptive.
What happened?
Why did it happen?
What's going to happen next?
What should I be doing about it?
That's one area
that we spend significant amount of time on. And then machine learning capabilities. Now this is
about pattern matching and understanding how to do high speed velocity evaluation of risk and
probability against a pattern so we can find known knowns, known unknowns, and unknown unknowns.
And then of course deep learning capabilities. We're using convolutional neural networks
to drive a whole new set of unsupervised learning mechanisms that reveal insights that we wouldn't
have been able to plan for. All of these are wrapped into this kind of contextual umbrella
called artificial intelligence. Now of course, to a practitioner, none of these represent artificial intelligence, right?
They are not the general artificial intelligence that we see in the movies or people aspire to.
But they do represent a point of reference that our customers understand.
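The four analytics questions Mel walks through can be made concrete with a tiny, entirely hypothetical sensor log. The names, numbers, and thresholds below are invented for illustration, and each stage is deliberately naive:

```python
# A machine's temperature readings over six hours (made-up data).
readings = [72, 74, 73, 90, 95, 99]

# Descriptive: what happened?
peak = max(readings)

# Diagnostic: why did it happen? Here, locate when the climb started.
first_spike = next(i for i, r in enumerate(readings) if r > 80)

# Predictive: what's going to happen next? Naive linear extrapolation.
trend = readings[-1] - readings[-2]
forecast = readings[-1] + trend

# Prescriptive: what should I be doing about it?
action = "schedule maintenance" if forecast > 100 else "keep monitoring"

print(peak, first_spike, forecast, action)
```

Real deployments replace each step with statistical or learned models, but the question each stage answers stays exactly as Mel lists them.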
Yeah, I think that we're on board with that same concept that essentially, you know,
the reason that we called this utilizing AI,
and for example, that we named it AI Field Day,
was actually in recognition of the diversity of solutions
that can be categorized under that label,
not just as a way to point to a specific technology.
Because truly there are a lot of different applications
of AI, and one of the things that we've learned here
on the podcast is just how many different ways people are doing things and how many different ways AI is
getting real. You know, some of the ones that you mentioned in there, I think are going to touch
people's lives much more than the Terminator. You know, in other words, you know, you talked about,
you know, Mobileye, for example. You know, I mean, if people are going to be buying a self-driving
car, you know, that's the kind of technology that they're going to find under the hood,
whether they know they have it or not. You know, looking forward, looking at the future,
what, you know, do you think that we are going to see? What trends are going to be driving us to this new world of AI?
And how are you going to leverage those trends in order to continue the success that Intel has historically had?
So our focus on AI is really starting to morph from just the data center to the data center all the way to the edge.
So an end-to-end solution. And the trend,
one of the trends that's driving this transformation, is a focus on moving compute
closer to where the data is. This is an overriding trend that is absolutely fundamental to
understanding how IoT, AIoT, 5G, edge computing, sensor fusion,
all of these capabilities become real
when we figure out how to move compute
to where the data is created
instead of having to move it all the way back
to a data center.
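The payoff of moving compute to where the data is created can be sketched numerically: an edge node that summarizes locally ships a small fraction of the bytes that forwarding raw samples would. This is a deliberately simplified, hypothetical illustration, not a real telemetry pipeline:

```python
# Hypothetical day of raw sensor samples produced at the edge.
raw_readings = list(range(10_000))

# Naive approach: serialize and send every raw reading to the data center.
bytes_raw = sum(len(str(r)) for r in raw_readings)

# Edge approach: compute the summary where the data is created,
# and send only the aggregate upstream.
summary = {
    "count": len(raw_readings),
    "min": min(raw_readings),
    "max": max(raw_readings),
    "mean": sum(raw_readings) / len(raw_readings),
}
bytes_summary = len(str(summary))

assert bytes_summary < bytes_raw  # orders of magnitude less data to move
print(f"raw: {bytes_raw} bytes, summary: {bytes_summary} bytes")
```

The same trade motivates the larger memory footprint at the edge Mel describes next: richer local analysis requires keeping more working data where the sensors are.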
And so what comes with that?
There's a lot of things that need to be unpacked there, right?
So one of the things is regionalization of data center operations, so they're not
just in one physical location.
This is about increasing the memory footprint at the edge.
So one of the key things Intel has been doing in order
to prepare for this need is to develop
Optane Persistent Memory. Optane
Persistent Memory is also taking advantage of a second trend, which is the one where application
developers are very focused on building applications in memory as opposed to in storage.
And so when we combine Optane Persistent Memory with FPGA at the edge, what we get is we get a supercharged
platform capability that not only helps application developers build applications
and execute them faster, but also provides an analytics engine that operates autonomously
at the edge. And this is extremely important for use cases in education,
use cases in health and life sciences, and use cases in public sector. The ability to move
compute to the edge and to increase the memory footprint at the edge are two key trends we're
really, really focused on. Another trend that's really important is that there are these adjacencies
that are providing obstacles to adoption of AI
that really need to be solved in a non-technical sense,
but they are very, very germane to the adoption of the technologies.
So things like explainable AI and responsible AI.
Intel's been spending a significant amount of time
inputting to the national conversation around what AI and ethics means. We've also been instrumental and leaned in
significantly in the discussion around privacy. We've adopted an approach in our own company where
we're using the GDPR as a basis for understanding privacy, even if it doesn't apply in areas where we're doing business today.
It's an opportunity to differentiate ourselves from our competitors around these adjacencies.
That is also an important trend that Intel is leaning on for growth.
I think another one is that we're taking very, very seriously our responsibility in this social justice new world we find ourselves in, working to increase our diversity and inclusion so that we can continue to hire the best and brightest
people independent of where they come from, who they love, what they do. These are the kind of
things that absolutely are critical factors in helping to drive AI adoption in spite of the fact
that they are adjacencies and non-technical. Well, on that point, you know, I think that that's one of those things as well,
that not a lot of people are aware of, but that Intel has been very strong in, you know, hiring,
for example, underrepresented groups and looking outside the typical Silicon Valley consensus.
And, you know, to kind of bring this to AI specifically, one of the things, one of the criticisms
that we've heard levered against AI systems
is that they tend to have a conventional mindset.
They tend to be focused on the information that you put in.
In fact, we did an episode of Utilizing AI earlier in the run
where we talked about sort of the social justice
implications of garbage in, garbage out when it comes to AI. I definitely think that having that kind of
diversity of thought in the company and diversity of background is really going to help to create a
better product, even from a technical perspective. I think a lot of people might think that that's
something that is purely a human resources problem, but it really is more of a technical
problem for the industry if the system can't comprehend anything but a small subset of data.
So I think that that's actually a really good thing to bring up, and I'm glad that you did.
Yeah, we're spending a significant amount of time on this. We have Intel Labs focused on a major set
of initiatives around explainable AI.
How do we understand some of the human centric biases
that are incorporated in the development of algorithms?
We're really working hard at this,
and we've had more than 30 listening sessions
with our underrepresented teams across the board
to understand what they're facing and how their technical acumen is impeded or accelerated based
on the position Intel takes. We're extremely excited about that. We have focused on our 2030
goals for representation.
We call them our RISE goals.
And we're extremely excited about the acceleration
of those goals throughout the corporate community.
But I think, you know, for Intel,
it's not really enough for us to be focused
on what we're doing inside our company.
I think as leaders, we really feel a responsibility
to help drive the industry. And so we've participated in a number of forums. Our CEO,
Bob Swan, is taking this very seriously, and he's engaged in a number of forums that help to
illuminate some of the issues, and to share what we've done so that other people can use it to help drive their success as well.
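One concrete flavor of the bias analysis this explainable-AI work involves is a demographic parity check: comparing positive-outcome rates across groups. The data and the 0.2 threshold below are invented for the sketch; this is a standard fairness metric in the literature, not Intel's tooling:

```python
def positive_rate(outcomes):
    # Fraction of positive (1) decisions in a list of 0/1 outcomes.
    return sum(outcomes) / len(outcomes)

# Hypothetical model approvals for two demographic groups.
group_a = [1, 1, 0, 1, 1, 0, 1, 1]
group_b = [1, 0, 0, 0, 1, 0, 0, 1]

# Demographic parity difference: the gap in positive-outcome rates.
parity_gap = abs(positive_rate(group_a) - positive_rate(group_b))

# An illustrative (and debatable) rule of thumb flags large gaps for review.
needs_review = parity_gap > 0.2
print(f"gap = {parity_gap:.3f}, needs review: {needs_review}")
```

Metrics like this only flag a disparity; deciding whether it reflects human-centric bias in the algorithm or the data still requires the diversity of thought the conversation emphasizes.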
That's fantastic. It's definitely an industry-wide issue, and kind of a technology-agnostic
issue, and it's really refreshing to hear a company like Intel taking such a
proactive stance on that. I think it has definitely contributed to our success in AI.
You know, there's a significant amount of work still needing to be done.
But I think what you're seeing is you're seeing a coalescence around a general set of themes, making sure we do good with AI, that we ensure that we are informing human decision makers.
We want to make sure to associate the right risk and probability
to the way algorithms work. And so these steps represent a catalyst
for AI adoption across our customer base.
Absolutely. You know, there's something a little bit, you know, off track from that,
that I'm curious to hear your thoughts on. I've read some articles recently, one by a gentleman named Andy Jones, and there's another one out there by Sarah Hooker. The idea in the Andy Jones piece is that basically our capabilities have exceeded what we've been able to accomplish so far,
which essentially means all the tools are there,
the hardware, the software to make a huge leap in AI
that just hasn't happened yet,
because we don't realize that we have these capabilities.
And the reason I relate that to the Sarah Hooker article
is she wrote, I think back in July,
about this idea of a hardware lottery
and the fact that a lot of research topics actually win out
just because they match up with the hardware of the time. And so obviously, you know, I know that you've moved
way beyond hardware, but Intel being, you know, one of the premier hardware providers in this space,
you know, how do you see that playing out? Is that something you pay attention to at all?
Do you see us making leaps forward soon? Do you think that the tools are there?
And how is, you know, data science and AI lining up with the hardware that's available?
I'm absolutely excited about it. And when you take a look at the fit-for-purpose compute stack that
I talked about, you can see there's some things that are in the tactical space, from our core
Xeon through accelerators and ASICs. But the work we're doing in neuromorphic and quantum really
represents the next phase, the next generation of hardware that's really
focused on how to drive AI to the next phase. But I will bring us back to the skills that we
are associating with next generation AI. And DJ Patil, who was the first Chief Data Scientist for
the United States government, listed three major skills,
which we kind of line up against. The first one is the technical skills, the hardware and software
skills we've been talking about. This is table stakes, right? This is what everybody's really
supposed to have. But I think what is going to differentiate the next generation of AI adoption
and really is going to take advantage of this
ability to leapfrog and leap ahead would be these next two skills. The next one is emotional
intelligence and mindfulness. And the reason is that it's very difficult to get a machine
to understand when something's not worth doing. And so let me use an example of that. When I start having a conversation with my wife
about something, at a certain point, even if I'm right, I know it's probably time to stop this
discussion so that I can continue to live peacefully at home. Understanding when something's
not worth doing, even if you're absolutely right, is a very difficult thing to get silicon architecture to understand.
But that's something that we as data scientists have to take very, very seriously.
We need to be able to understand that no matter how far our hardware capabilities take us, we want to be sure that we are continually being mindful. And for us, it means understanding what is unique about human contribution to work.
What is that uniqueness?
The third skill that DJ Patil mentioned was a strong moral compass. And I think it's important to recognize that, you know, one of the trends
that data scientists are starting to encounter is the need to buy liability insurance.
And the reason for this is simple. I mean, if you create something and it ends up being dead wrong
or significantly wrong, someone's going to be looking for accountability and a way to seek redress for this lack of accuracy.
And so it means that data scientists are spending more time wondering whether something is right to do versus whether they can do it.
So independent of how far our technical capabilities bring us, we still want to be clear about whether it's something that we should do versus whether it's something we can do.
And so, you know, I think DJ was on to something.
And I know this from when I teach in the Master's in Data Science program at Johns Hopkins University.
And of course, this is one of the things we spend time talking about.
I think it's really interesting that, you know,
here we are on a podcast that is ostensibly
about enterprise technology.
And again and again, we come back to these core questions
of, you know, really humanities questions of,
you know, what's right.
And as you said, you know, what's worth doing.
I find that that tends to happen,
especially when we have data scientists as guests.
Again, I think that a lot of people might think
that a data science position is dry or...
Wait a minute, it's the sexiest job of the 21st century.
Well, apparently it is.
And I think that it's also a relevant job.
And I think that that's the thing that is really the most interesting to me is that,
you know, we're talking about basically the fundamental questions that are emerging in
the 21st century here, not just technology and applying, you know, CPUs or GPUs or ASICs
or whatever.
You know, we're really talking about core issues.
So before we wrap up, I wonder if you have anything, any last words to say to our audience,
anything that they should be looking for in the future, maybe a way that they can help contribute.
Well, certainly, I think there's an opportunity to engage Intel at a much larger level and degree.
What's beautiful about Intel is that we are fully interested in offering our thought leadership and mind share to people.
There is no charge that people have to pay to get access to Intel in order for us to share the things and lessons we've learned. So I
would encourage a deeper discussion with us if there are questions that you have. We certainly
are very focused and interested in real life use cases and applications and so we spend
a significant amount of time doing that. And what's really nice is that you know if you work
in the airline industry you probably know what the other two or three competitors in your space are, but you may not
know what the oil and gas or retail or financial services leaders are doing. And we do. And so we
can provide that kind of cross-industry frame of reference. I think the other thing that I would
encourage practitioners to do is to get involved in things like Data for Democracy, and to think seriously about creating the kinds of guiding principles
and ethical policies that can accelerate AI adoption. There's nothing better than having
guardrails that data scientists and programmers and application developers create,
as opposed to having people who are not in that space try and create them.
And so I think the more we as practitioners get involved in trying to describe what we
think is a way forward, I think the better the outcome is and the faster we realize the
innovation we're looking for.
I mean, I'm excited about the future of artificial intelligence. I think
Intel is going to continue to have and demonstrate a leadership role with other folks in the industry,
other partners. And so for me, it's a great opportunity to not only represent Intel,
but to be involved in a job that I have a lot of passion for.
I can tell.
And thank you so much for that.
And thank you for sharing, again, some of the stuff that's a little bit off the technology trail there.
I really appreciate it.
So where can people connect with you and follow you and your thoughts on enterprise AI and
other topics?
Well, you can always find me on LinkedIn.
I have thousands and thousands of followers.
So don't hesitate to send me a note
or connect with me on LinkedIn.
It's easy for me, easy for you to get me.
And of course, if you want to talk to me from Intel,
just call Intel.
They know where to find me too.
How about you, Chris?
The best place is on Twitter at Chris Grundemann
or my website, chrisgrundemann.com.
And then from there, you can kind of branch out
into any of the other places you might find me.
Thank you so much.
And you can find me, Stephen Foskett,
on Twitter, at S Foskett.
You can also find me on LinkedIn and at gestaltit.com.
And I really appreciate you contributing to this podcast.
And thank you, everyone, for listening as well.
If you enjoyed this discussion,
please do remember to subscribe, rate,
and review the show on iTunes,
since that does help our visibility.
And please share the show with your friends.
This podcast is brought to you by gestaltit.com,
your home for IT coverage from across the enterprise.
For show notes and more episodes,
go to utilizing-ai.com or follow us on Twitter at utilizing underscore
AI. Thanks a lot for listening and we'll talk to you next time.