a16z Podcast: Automation + Work, Human + Machine
Episode Date: November 5, 2018, with Prasad Akella, Paul Daugherty (@pauldaugh) and Frank Chen (@withfries2). What is different on that factory floor from Henry Ford to today? In this conversation, Prasad Akella, Founder and CEO of Drishti; Paul Daugherty, Chief Technology and Innovation Officer of Accenture and author of the recently published Human + Machine: Reimagining Work in the Age of AI; and a16z operating partner Frank Chen talk about how the introduction of automation, from Henry Ford to today's co-bots and AI, changes the work we do in manufacturing and beyond. What are the skills that we'll need in the future? What kinds of new information is available, and what new needs -- for dynamic adaptive processes, for example? What are the new tool chains and core (organizational and technical) habits of ML/AI-centric companies of the future?
Transcript
Hi and welcome to the A16Z podcast. I'm Hannah, and in this episode we talk about automation as it impacts the way humans work now and how it will transform the work of the future, beginning on the factory floor with everything from co-bots to AI.
Joining us are Prasad Akella, founder and CEO of Drishti; Paul Daugherty, Chief Technology and Innovation Officer of Accenture and author of the recently published book Human + Machine: Reimagining Work in the Age of AI; and a16z operating partner Frank Chen.
What has changed on that factory floor from Henry Ford to today?
What kinds of new information is available and what new needs?
What are the core organizational and technical habits of an ML AI-centric company of the future?
We're here to talk today about how AI and machine learning are changing our workplaces, beginning on the factory floor.
Prasad, this isn't the first time you've tried to automate the activity on the factory floor.
Talk a little bit about your history with this field.
In '94, I land up in Ann Arbor, Michigan,
and the Occupational Safety and Health Administration
is threatening General Motors with a big fat fine
because they weren't taking enough care of worker health on the line.
Now, if you pick up something as simple as a battery,
one a minute, eight hours a day,
I promise you you will come home ridiculously sore.
And try and pick up a 150-pound cockpit,
fully built cockpit that's coming off from some supplier,
and try to get it through the door opening.
And it's famously called the piano mover's problem.
We're coordinating six degrees of freedom
trying to make it through a narrow opening,
and it's even worse.
So the question on the table was,
can we take robotic technology
and make it work with people?
We created an entire category called collaborative robots.
In fact, we opened the first chapter of our book
talking about co-bots at the Mercedes E-Class factory in Sindelfingen in Germany,
where they actually de-automated the production line, moving from large-scale robots to smaller-scale co-bots,
and achieved dramatic levels of performance improvement
by putting more people in the factory with co-bots rather than fewer.
Maybe take us through the history of factory automation.
Start with the Model T up to the modern day.
Structured work, industrialized work,
is really only about 120, 140 years old.
It goes back to the turn of the 1900s, the 20th century,
Henry Ford and factory automation.
We call that the first generation of work,
the first generation of business process.
It was Frederick Taylor, scientific management,
taking physical activity of the hands of people
and embedding it in industrialized manufacturing processes.
And that was the first wave of real work in business automation in that sense.
Then in the late 90s, we had the first wave of information technology.
So we had PCs and we had the knowledge worker,
and we got into this idea of re-engineering the corporation,
which was a big theme back in the 90s and early 2000s.
And that was about automating processes
using information and empowering knowledge workers,
still very static, inflexible processes
is what we created.
We built big information systems
to automate these processes
that were very inflexible.
What's needed now is this third generation of work,
which is where the work itself
is dynamic, adaptive, personalized
to the nature of the person
and the process they're performing.
Think about an example,
like the digital twin models
that are being used in industrial manufacturing,
where a worker knows the specific characteristics
of a windmill or a jet engine that they're dealing with in real time and can change their maintenance procedure
and what they're doing based on the individual characteristics of the piece of machinery.
And the increasing trend that we see isn't just that it's a better way for people
to work and manufacture. It's what's happening with AI more generally in every industry,
which is the trend from mass commoditization to dynamic mass personalization.
With the most sophisticated automation we've ever built, we're actually introducing more people
onto the factory line to get this mass customization.
So the reason you need a more flexible factory floor is that customers want their car exactly the way they want it
and the optionality that you need in the manufacturing process to truly personalize a product
means you can't do it the old way. And you need to use AI really at every step in the process from the demand
anticipation, helping the customer shape the product that they want, and then also manufacturing and
configuring the product. Lot sizes of one, right? Lot sizes of one. And it's the momentary markets.
Every individual's need is creating a market of one opportunity to solve.
that. And that's really where companies are moving to with the third generation of business
process, which is dynamic, adaptive, personalized business process where the work every
person does every day in the way you solve every problem isn't a static industrialized
process. It's a dynamic personalized process. And that's the transition that businesses are
starting to go through. From a historical perspective, some things have changed and some things just
haven't changed. Yes, motors are getting better, sensors are getting better. But on the
cognition side, actually trying to marry the cognition of the human being with that physical
amplification. I think Andrej Karpathy says this: it's really data that's driving programming now.
It's not logic. And that, I think, is the single biggest change that I've seen happen over the last
25 years. Yeah. What he's building is kind of what I think of sort of the new tool chain for
software. So in the old days, we built compilers and linkers and debuggers and editors and so on. And in the new
days, what Andrej and many machine learning startups are building is labeling pipelines, right? Like
at Tesla, the labeling software is, it almost looks like Photoshop. It has so many buttons
in it, right? Because you have to be so specific with your labelers. And so it's sort of exciting
to see this new tool chain emerge to support machine learning programming. So the factory is a
terminal part of the supply chain. And the supply chain actually goes in layers, in tiers of supply
chain, right? I just think that we're going to see this continuous evolution touching all
parts of the supply chain. At one level, like I said, co-bots are doing the bits to atoms
engagement. Drishti is doing the bits to bits, which is we are actually pulling information
on the floor and actually helping the worker do better. There's another company I sit on the board
of called Optessa, and we actually do what's called sequencing and scheduling. So this is based on
demand coming in, how do you actually mathematically optimize the sequence in which you build
cars? All of these are touching manufacturing in just very fundamental ways. And it's unknown
to the person buying a car today that his car actually got scheduled by an AI system. Let's take
a basic problem, this lot size of one. You need to think of lot size of one in the context of what's
called build complexity of build options. So if you take the most popular truck
in America, it's the F-150, the Ford F-150.
It turns out there are a trillion build combinations of that vehicle, right?
So you have different engines, you have different seats, you have different radios,
just tires, rims, it's a trillion choices.
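To get a feel for how quickly options compound into a number like that, here is a minimal sketch; the option counts are hypothetical, chosen only to show how a modest number of independent choices multiplies into the trillions.

```python
# Hypothetical illustration of how independent build options compound.
# Roughly 40 independent yes/no options alone exceed a trillion combinations.
print(f"{2 ** 40:,}")  # 1,099,511,627,776

# Mixing in multi-way choices (made-up counts) gets there even faster.
option_counts = [8, 12, 6, 16, 12, 20, 25, 10, 10, 10, 2, 2, 2, 2, 2]
total = 1
for n in option_counts:
    total *= n
print(f"{total:,}")  # ~1.8 trillion with these counts
```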
Now, I'm a guy on the line now.
I'm building out that truck.
My head is reeling thinking about a trillion build combinations.
So this is where I think the challenge comes in, where a human being, even though it's a challenge,
can actually handle that, as opposed to automation, which needs to be laboriously programmed.
So we used to redo lines at GM when I was there. We'd go on the floor, and it would take us six months
to retool the entire plant, actually programming every one of those behaviors in.
Now, instead, if I was on a line in some other part of the world, I might just put in 100 people
and tell them, guys, this is what we're going to do. And guess what? The programming was just like that.
So I think that's the fundamental thing: humans can do things that even machines can't do today.
Flip side: machines can do things that people can't.
They do things I can't do, reprogram paths on the fly,
look at traffic patterns I don't have the ability to understand.
But I really think that's how people and machines on the floor will continue to work together.
And I think people are here for the long haul.
People are using the technology in a different way.
One is, again, the optionality and the configurability.
So a worker can dynamically change the way that they're
manufacturing a part and customize the way it's assembled. It can be a different feature for a different
customer, which is important from that perspective. The other really significant breakthrough we're
seeing, it's not necessarily the programmer that needs to do it anymore. It's not the teams of the
technicians. If you're familiar with Baxter robots and things like that, they're configurable by
the worker themselves to train them how to do these new tasks dynamically in real time. It's a continually
adaptive manufacturing process, not with programmers coming in and reconfiguring things, but with
the worker changing the nature of the work as they go. And that's what we're going to
see more and more of as we go forward.
Yeah, I love this idea of sort of we can combine the best of what humans can do,
which is adaptability to any situation, and then what robots can do, which is sort of
precision and repetition.
Paul, in your book you have this idea that there's sort of a missing middle, right?
There's sort of human-only activity, and then there's sort of machine-only activity, and then,
like, most of the excitement is sort of kind of right in the middle there.
And we called it missing because it was missing from the dialogue.
This was a bit of a surprise to us.
We started by talking to 1,500 companies around the world
and thousands of workers who were using automation, AI, robotics,
in different ways to learn what was happening.
The companies that were having more success
were those that were really applying both the human
and the machine capability together.
We started using the term collaborative intelligence.
It's about the machine capability,
the AI, coming together with the person.
And so what we saw in the missing middle,
we dove into it, is that there were two big families
of jobs being created.
There was one family of jobs,
which is where people are needed to help machines
in the category of behavioral trainers for AI.
If you're developing a chat bot for an airline,
you want it to behave in a certain way,
different than a gaming company,
different than a bank,
needs to reflect your culture, your values,
because AI is becoming the brand of companies.
AI is defining the brand.
And in that environment, you need to train your AI.
And it's not a one-time thing, one and done,
because the business is continually changing
and the way the AI is behaving is continually changing.
So we're hiring for these roles, as are many of our customers,
for trainers.
And they're sociologists, people who understand business and people and behavior, and are tuning and training the AI accordingly.
We also talk about sustainers and explainers, which are other jobs that you see companies now hiring for at scale as they deal with some of the issues they have.
So that's all in this family where people are helping AI, machines be more effective, in addition to co-bots.
Then the other family of jobs is where machines are giving people superpowers and augmenting people with new capabilities, everything from exoskeleton types of technology we're seeing
used in assembly and manufacturing jobs to the wingman chatbot that sits on the shoulder of a
customer service agent or an investment advisor to help the person focus more just on the customer
communication and less on all the technicality of how they need to search for information and do
their jobs. So the chatbot, not for the end customer interaction, but to help the person
be more effective at dealing with other people. These aren't small categories. There are millions and
millions of jobs that are being created as we move into this AI environment. There's not just a new
class of jobs being created, but an existing class of jobs being transformed. Take the very humble
industrial engineer. When Henry Ford rolled out the Model T, he realized that he needed people
who could figure out what was going on on the line, and therefore created the industrial engineer.
Now, the industrial engineer's job at one level hasn't changed in all these years. He shows up
on the line with a stopwatch and a piece of paper. Now, the most sophisticated industrial engineers,
at Stanford or MIT, would know operations research, would understand, you know,
simplex or whatever else they want to go solve with.
But the fact of the matter is the bulk of the population just spends, according to our
customers, 30% of their time just getting data.
We're fundamentally using computer vision and deep learning to create a new data set.
And really the data set is about people at work.
And in the context of the manufacturing floor, it's about extracting
tremendous improvements in quality, in productivity, in traceability, all aspects that have
been here for 100 years. But what machines have now let us do is improve that in a very dramatic
way, working with people. So now imagine, because of the dataset, the industrial engineer
doesn't need to spend 30% of his time or her time making measurements on the floor. And those
are measurements that are fundamentally erroneous. We all know the Hawthorne effect. We know
biased data sets, small sample sizes,
all those kinds of problems just are history now.
The machine can solve incredibly complex problems.
Think of line balancing across 141 stations,
which is what this Apple iPhone in my hand is built across.
And trying to do a line balancing across 141 stations
is almost humanly impossible.
But the machine can crank that out.
And now the job of the industrial engineer
is to interpret it and actually make change happen.
So it's new jobs and existing jobs,
both of which are now super-powered, if you will,
and in a way that was never possible before.
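The line balancing Prasad mentions is a classic combinatorial problem: dividing a sequence of assembly tasks across stations so that no station exceeds the available cycle (takt) time. A minimal greedy sketch follows; the task names, times, and takt time are hypothetical, and real solvers typically use integer programming or specialized heuristics rather than this simple first-fit pass.

```python
def balance_line(task_times, takt_time):
    """Greedily pack ordered assembly tasks into stations without exceeding takt time."""
    stations, current, load = [], [], 0.0
    for task, seconds in task_times:
        if seconds > takt_time:
            raise ValueError(f"task {task!r} alone exceeds the takt time")
        if load + seconds > takt_time:   # current station is full; open a new one
            stations.append(current)
            current, load = [], 0.0
        current.append(task)
        load += seconds
    if current:
        stations.append(current)
    return stations

# Hypothetical task list (name, seconds) and a 60-second takt time.
tasks = [("fit bracket", 22), ("route cable", 18), ("torque screws", 25),
         ("attach camera", 30), ("apply adhesive", 12), ("inspect", 15)]
for i, station in enumerate(balance_line(tasks, takt_time=60), start=1):
    print(f"Station {i}: {station}")
```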
Often we get stuck on this idea that the machines are taking all the jobs.
But when you look at what really goes away,
the research that we see shows that about 14% or 15% of jobs
can get totally eliminated.
Then about a third to 40% of the jobs
get significantly transformed,
which is what we were just talking about.
And then the rest of the jobs have some degree of transformation to them as well,
but maybe at a bit of a more moderate scale.
Most of the jobs get impacted and changed
in some significant way into more of these missing-middle types of job categories. The public discourse is that robots are taking over jobs,
right? And then SkyNet's coming for your children right after that. Right after that.
So it turns out that there are about a million and a half robots on the entire planet today.
The capacity of the planet to produce more robots is about a quarter million ramping up to a
half a million annually. And there are about 340 million people in production jobs on the floor.
And there are some stats out there that suggest roughly 5.6 jobs get eliminated,
to Paul's point of 15%, with every robot that's introduced.
So if you do this math, and you take the 340 million, roughly divided by six jobs per,
you're looking at 60 million robots before you can wipe out all of mankind in any production facility.
It ain't going to happen in my lifetime.
I don't see it happening for probably another lifetime or two.
And I think that's the reality on the ground.
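Putting the figures quoted in the conversation into a quick calculation (the numbers are the speakers' own, reproduced here only to show the back-of-the-envelope math):

```python
production_workers = 340_000_000   # people in production jobs, per the conversation
jobs_per_robot = 5.6               # jobs displaced per robot, the estimate cited above
installed_robots = 1_500_000       # robots on the planet today, per the conversation
annual_capacity = 500_000          # upper end of annual robot production cited above

robots_needed = production_workers / jobs_per_robot        # ~60 million robots
years_at_capacity = (robots_needed - installed_robots) / annual_capacity
print(f"~{robots_needed / 1e6:.0f} million robots needed, "
      f"~{years_at_capacity:.0f} years at current build capacity")
```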
And one of the first things we have to do when we get on the floor is work with the operators
and, A, address the explainability: what is the system doing, how is it doing it,
and how is it going to help you.
But the more interesting question is to reinforce
that we're actually increasing the longevity of their job.
By improving their efficiency,
actually, guess what?
You're pushing back the onset of a robot.
Yeah, one of the paradoxes is that
we are currently living in an era
where we are more automated
than we have ever been in history.
We are also in an era
where there are more people employed
than have ever been in history.
Why isn't it the case
that we have 20, 30% unemployment globally?
If you go back, the history of us as humans is the creativity of people inventing tools to help us be more effective in developing and enhancing civilization, from the first stone tools that were used for hunting and food gathering and the like, on to all the amazing advances that we've developed as a civilization. And the same is really true when we look at AI. We're not creating artificial superhumans anytime soon, with artificial general intelligence or superintelligence or transhumanism. What it is is very narrow capabilities to do
amazingly powerful, narrow things much more effectively and much better than humans could ever do,
because machines are better suited to these problems than people are: pattern recognition,
memorization, prediction problems and the like. So that's the kind of transformation to dynamic,
adaptive, personalized work in this third generation of business process. If you think about the
continuum in a slightly different way, you started with craft manufacturing. And then Henry Ford showed
us the way of mass production. And then in the mid-50s the Japanese come along. And they show us the
lean manufacturing type of production system.
And they basically figured out
that they need to empower the worker on the line.
I think while
lean manufacturing has all of the
process elements,
what it has not had the benefit of so far
is measurement.
What measurement does is it lets
you actually act on it in a much faster
time scale, whether it's manufacturing
or any business process. The faster
you can measure, the faster you can
iterate and grow faster.
It's all coming down to,
are you savvy enough to use the data that's available to you,
the unique data that was never before available,
and are you willing to make the transformations that you need to make?
Yeah, so measurement will be one of the core habits
of a machine learning AI-centric organization.
And so if measurement is one of the core habits,
what are the other core habits of a machine learning-centric company?
I think the single biggest thing that any AI-native company needs to do,
is to start with that measurement question
and maniacally focus on data and data quality.
It's interesting that most of us are not trained to think
in stochastic ways, probabilistic ways,
and we aren't quite used to computing odds.
That's the real question.
The second is, we talked about this a little while ago,
building the scaffolding, the tools,
to enable the experimentation you just mentioned.
So the web does it beautifully.
They have A/B testing.
Why can't the rest of the world adopt that?
There's so much to be learned from that, right?
Because you've got measurement,
you've got the ability to process it today,
why not turn the entire loop on yourself and do something with it?
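A minimal sketch of the measure-and-iterate loop being described, using a standard two-proportion z-test on hypothetical conversion counts; production experimentation platforms layer sequential testing, guardrail metrics, and much more on top of this.

```python
from math import sqrt
from statistics import NormalDist

def two_proportion_z_test(successes_a, n_a, successes_b, n_b):
    """Return the z statistic and two-sided p-value comparing the rates of A and B."""
    p_a, p_b = successes_a / n_a, successes_b / n_b
    pooled = (successes_a + successes_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p_value

# Hypothetical experiment: control A vs. changed variant B, 10,000 trials each.
z, p = two_proportion_z_test(successes_a=480, n_a=10_000, successes_b=540, n_b=10_000)
print(f"z = {z:.2f}, p = {p:.3f}")  # use the result to decide whether to roll out B
```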
The third, I think, is just people.
As we are out trying to hire people,
we come across any number of people who claim they're data scientists,
and their own sophistication is oftentimes absent.
How does pooling work?
Why are you designing your network in a certain way?
It's that deeper insight into deep learning or AI tools in general that I think is not coming through.
People show up and say, I'm trained, but really not quite trained, right?
So that's one entire class of problems.
The second, I think, is as product people, we've got to be hiding AI from the users.
I know this is a strange concept to put in front, but I'll take two examples that all of us deal with every day and then comment on that.
So the first one, every one of us types in our computer, we're typing email.
And as we make spelling mistakes,
a spell checker behind the scenes is underlining these words,
the words I made a mistake typing, right?
And the beauty of that is, I fix it,
and I send my mail out, and nobody is the wiser.
Because between me and my system, we've figured it out.
And what has happened behind the scenes is that AI
has just naturally surfaced itself.
It may be very simple in that case.
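A minimal sketch of what such a behind-the-scenes checker can do: generate candidate words one edit away and pick the most frequent one. The tiny word-frequency table is made up; real checkers learn frequencies from large corpora and use surrounding context.

```python
import string

# Tiny made-up frequency table; a real checker learns this from a large corpus.
WORD_FREQ = {"the": 1000, "robot": 50, "factory": 40, "line": 60, "worker": 45}

def edits1(word):
    """All strings one edit (delete, transpose, replace, insert) away from word."""
    letters = string.ascii_lowercase
    splits = [(word[:i], word[i:]) for i in range(len(word) + 1)]
    deletes = [a + b[1:] for a, b in splits if b]
    transposes = [a + b[1] + b[0] + b[2:] for a, b in splits if len(b) > 1]
    replaces = [a + c + b[1:] for a, b in splits if b for c in letters]
    inserts = [a + c + b for a, b in splits for c in letters]
    return set(deletes + transposes + replaces + inserts)

def correct(word):
    if word in WORD_FREQ:
        return word
    candidates = [w for w in edits1(word) if w in WORD_FREQ] or [word]
    return max(candidates, key=lambda w: WORD_FREQ.get(w, 0))

print(correct("facotry"))  # -> factory (transposition fixed)
```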
But if you take something like Waze,
all the sophistication of the traveling salesman problem,
of trying to find dynamic routes, all of that, is again hidden.
All I know is I need to make a left turn here.
And by actually hiding AI, you actually drive greater usage.
And I would argue that what a good AI company does is hide it.
So I guess really I'd say build the tools, hide the AI, find the right people,
and focus on data would be my model.
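And a sketch of the kind of machinery hidden behind "make a left turn here": a shortest-path search over a road graph whose edge weights are current travel times. The graph and times below are hypothetical; real navigation systems re-run searches like this continuously as traffic data streams in.

```python
import heapq

def shortest_path(graph, start, goal):
    """Dijkstra over travel-time weights; returns (total_minutes, path)."""
    queue = [(0.0, start, [start])]
    seen = set()
    while queue:
        minutes, node, path = heapq.heappop(queue)
        if node == goal:
            return minutes, path
        if node in seen:
            continue
        seen.add(node)
        for neighbor, travel_time in graph.get(node, []):
            if neighbor not in seen:
                heapq.heappush(queue, (minutes + travel_time, neighbor, path + [neighbor]))
    return float("inf"), []

# Hypothetical road graph: node -> [(neighbor, minutes under current traffic)].
roads = {
    "home": [("main_st", 4), ("highway_onramp", 2)],
    "main_st": [("office", 12)],
    "highway_onramp": [("highway_exit", 9)],
    "highway_exit": [("office", 3)],
}
print(shortest_path(roads, "home", "office"))
# (14.0, ['home', 'highway_onramp', 'highway_exit', 'office'])
```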
Nothing has moved as fast across the organization as AI is moving right now.
No trend has grown as fast in terms of real spending, in terms of headcount focused on it;
by any measure you want to use, AI is the fastest-growing trend we've ever seen in terms of enterprise
impact. It's also the first one of all those trends that's remarkably diversified across
industries and across parts of the organization. There's five behaviors we talk about, and we
came up with this acronym called MELDS, M-E-L-D-S. So the M is mindset, the mindset around this
re-imagination of work, which is thinking about the work in a different way and giving your
organization the courage to rethink the way you do it: the digital twin models in manufacturing,
virtual agent-assisted models in the front office, new AI-enabled R&D, etc.
The E is experimentation.
There's no way you'll design this and get it right
by building a Big Bang system and making it all up at once.
Experimentation is critical.
Think about Amazon Go and what they do with retail
and how they introduce the capability.
First with their employees, they got it right, started rolling it out,
and then had a successful idea.
And sometimes things go wrong, and you have to be ready for failure,
which is a big change for many companies.
Experimentation we see as one of the critical success factors
in those who are successful.
L is the leadership element around this,
which gets into doing this right
and doing it with recognition of the consequences of AI,
the bias, the transparency and explainability,
the dealing with the workforce and people issues
in a straightforward and in a real way.
D is the data, which gets into measurement.
The number one thing that slows down
the companies we talk to and work with
is the lack of data and the measurement
in the right way to fuel the AI systems.
And so getting a data supply chain in place
to fuel AI is really critical.
And then S is skills.
And the skills is often overlooked.
It's not necessarily the investment in AI.
It's the investment in your talent to do the right things with AI.
That'll be a differentiator.
And the skills point is a really important thing to get right.
A new approach to skills, which isn't hire good people and then give them a little training now and then.
It's creating a learning environment so you get the best learners in your organization
and continually empower them to move as your business adapts and moves going forward.
So we've talked a lot about how AI will give humans superpowers on the factory floor, in marketing, and so on and so forth.
But there are real challenges to building these systems.
What are some of the challenges?
I think the central challenge in front of us is how do you actually get these models to generalize
across broader swaths of industry?
So when we solve a problem today, we look at it in a much narrower context.
We solve it and we look for how much of that can apply to the next.
As opposed to thinking about it and saying, can I come up with an architecture that will work across the board?
I think that's the single biggest challenge sitting between us and a broader deployment of AI
in society.
Yeah, it's what caused one of the earlier winters in AI, right?
Expert systems failed to generalize, right?
Yeah, every time you had a rules-based system, you added a new rule, guess what?
All your data went out of the window, the behavior of the system changed.
You really went back to the drawing board to figure out what to do.
Now, I'd say the other thing in terms of challenges that we all need to be looking at
is the human side of applying the technology.
I mean, technology is neutral.
It's how we choose to use it that dictates whether it's going to have a good or bad impact on things.
And that's why responsible AI is something we spend a lot of time focused on,
and accountability needs to be very clear.
Transparency is a big issue.
There's a common line that people say, well, AI can't be explained, it's a black box,
not acceptable for certain processes.
And there are certain things that, as a company, you need to be able to explain,
and you better use explainable AI in those situations
or you're going to get yourself into trouble.
And then finally, data is inherently imperfect.
Every data set has bias.
AI is a remarkably effective tool at scaling the data,
and it can scale the bias, unforeseen bias, in similar ways.
If we don't get this right, we run a risk of adverse consequences
that'll slow down the remarkable advances that we can innovate for society
because we'll just be getting in our own way.
Here's a fun Google search you can do for Director of Research and Ethics in AI,
and you'll see that there are hundreds of jobs that have sprung up,
especially at the big companies that are really grappling with these issues.
So there's a new job for you, which is ethics and fairness in AI.
So it's great to hear the catalog of here's what a machine learning native company will look like.
Why don't we talk about people and how they retool themselves to be the most effective workers that they can in this coming world?
Because look, how do they prepare themselves for this new world?
I think there's a number of different elements of that question.
The issue is how do we think about the system and the way it works, and then think about the task of a worker we're automating, which is separate from the job.
So take the long-haul element of trucking: okay, how do you take the skills those people have
and transform them to be the remote truck operators and pilots, which some startups are working on?
And how do you solve the problem where today we're short about 100,000 truck drivers in the U.S.?
And how do you make the job more attractive to them using AI and automation?
For example, there's companies that are working on AI for routing management and optimization
so the drivers can get home every night and see their family.
It makes the job more effective and more rewarding to them.
When you dive into the skills that you need to do that, generally it's human types of skills that you need to accentuate.
It's things like improvisation, things like intelligent interrogation.
It's extrapolation.
A.I. is great at interpolation. Humans are great at extrapolation across problem domains.
It's communication. It's emotive response and lots of things that while there's advances in AI in those domains,
they're going to be uniquely human strengths for a long time to come.
So how do you train somebody to know where to look and how to look for assistance in the job they're doing?
Or judgment integration: somebody that needs to make a decision,
how do they use a tool, an AI-enabled tool,
to make a decision more effectively,
for example, in the compliance process of a bank
as we're applying that kind of skill set more.
You need to train people differently
to get them comfortable with the technology
and that cascades its way all the way through back
into the educational system
in terms of how we start making some longer-term changes
to prepare the next generation for the work to come as well.
I think there are creators and there are consumers.
And the skills we want for the creators
and the consumers are slightly different.
On the consumer side, I think Paul's absolutely correct.
How do you interrogate the query system?
How do you make sure that the conclusions it's drawing or it's suggesting to you actually make sense?
How do you actually do what-if scenarios around the data, right?
That's how you actually build the judgment using the machine to do compute that you can never do yourself.
On the creation side, it's solving one of the hardest problems, I would say,
which is to clearly articulate a problem and map it to the AI technology that is best suited to it. We see this when we're trying to do data
analytics. We're most comfortable with the Gaussian distribution. It actually turns out that may not
be the distribution to use. And guess what? You're doing math using the wrong distribution for
the wrong behavior and you're getting garbage out. There are a few things happening very often
and many things happening very infrequently. I think it's called the long tail. And if you don't
quite understand what the behavior of your system is, your math doesn't work out. So that's why
I think the most important skill is mapping problems to techniques, and that's how you
really get value. So that's on the creator's side; that's what I look for. That's funny. That's
actually how I look for AI powered startups because, look, AI is a fundamental set of computer
science techniques. They're getting into all of the software that we build and therefore all
the startups that we see. And so what I'm looking for is have you chosen the right technique
to solve the right problem, right?
Because that's what will differentiate
sort of the sophistication
of the technology that people are building.
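Prasad's point about picking the wrong distribution is easy to demonstrate: summarize long-tailed data with a Gaussian-style mean and standard deviation and the summary badly misrepresents typical behavior. A minimal sketch with simulated data, where a lognormal stands in for any heavy-tailed quantity such as rework times or defect costs.

```python
import random
import statistics

random.seed(0)
# Heavy-tailed data: most values small, a few very large (e.g., rework minutes).
data = [random.lognormvariate(mu=1.0, sigma=1.2) for _ in range(10_000)]

mean = statistics.mean(data)
stdev = statistics.stdev(data)
median = statistics.median(data)
p95 = sorted(data)[int(0.95 * len(data))]

print(f"mean={mean:.1f}  stdev={stdev:.1f}  median={median:.1f}  95th pct={p95:.1f}")
# The mean sits well above the median and the "mean minus stdev" bound goes negative,
# so a Gaussian summary misleads here; quantiles describe this data far better.
```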
There's the people who do AI
and the people who use AI.
And when we talk about the workforce,
we often confuse those.
So we say STEM is the answer to everything
and get more people in computer science.
We need that for the people who do AI
and develop the AI systems we create.
The people who use AI are a much bigger part
of the workforce and something
we really need to focus on.
AI is going to evolve
over the next decade, two decades,
in the way it's applied
in companies. And so the challenge is how do you create the lifelong learning mechanisms so your
employees move along with you at the pace you're innovating and that they're ready to do the
jobs you're creating. And that's the mindset you need to have. And if you're not investing to
create that infrastructure, you're going to be left behind as a company. I think of two broad
buckets of things that we can do. So in the one bucket, I think of sort of very sophisticated
training that would allow you to understand what the AI systems are doing and then you can
sort of apply human judgment around that. So there's this great study that says if we
take human plus AI: machine learning algorithms that recognize a particular type of skin cancer,
and then we bring highly trained doctors alongside them. The combination of the humans and the
algorithms outperforms algorithms alone or humans alone. So that's pretty sophisticated. You take a
radiologist and you give them special training on here's what the AI systems are doing and
here's the biases they might have. On the other hand, I remember a book that was popular here in the
States in the 80s called All I Really Needed to Know I Learned in Kindergarten. The advice that came
out of the book was share everything. Play fair. Don't hit people. Clean up your own mess. So these
are things about empathy and working on teams and understanding each other. And I think there's a broad
class of things that we need to double down on in our education system because these are the things
that are going to be hard to automate, right? Imagination, curiosity, a sense of fair play. We're training
people to do machine things. And we shouldn't be doing that. We should be training people in
uniquely human capabilities and how to exploit those. Thanks so much for joining us, Paul and
Prasad. Thanks for joining us on the a16z podcast. Thanks, Frank.