3 Takeaways - The Future of Artificial Intelligence: Head of Columbia University’s Creative Machines Lab Hod Lipson (repost) (#78)
Episode Date: February 1, 2022
We are at an inflection point in artificial intelligence today. Find out what the next 3 waves of artificial intelligence will bring, including creativity and consciousness. Learn why AI is accelerating and what we can do to ensure tech is used for good.
Hod Lipson is the head of Columbia University's Creative Machines Lab, which pioneers new ways to make machines that create, and machines that are creative. His work focuses on evolutionary robotics, design automation, rapid prototyping, artificial life, and creativity.
This podcast is available on all major podcast streaming platforms. Did you enjoy this episode? Consider leaving a review on Apple Podcasts. Receive updates on upcoming guests and more in our weekly e-mail newsletter. Subscribe today at www.3takeaways.com.
Transcript
Welcome to the Three Takeaways podcast, which features short, memorable conversations with the world's best thinkers, business leaders, writers, politicians, scientists, and other newsmakers.
Each episode ends with the three key takeaways that person has learned over their lives and their careers.
And now your host and board member of schools at Harvard, Princeton, and Columbia, Lynn Thoman.
Hi, everybody. It's Lynn Thoman. Welcome to
another episode. Today, I'm here with Hod Lipson. Hod is one of the world's most innovative thinkers
in artificial intelligence, self-aware robots, and digital manufacturing. He designs and builds
robots that do what you'd least expect robots to do: self-replicate, self-reflect, and ask questions.
His robots are also creative. To quote Hod, "I want to meet something that is intelligent and
not human." But instead of waiting for such beings to arrive, he is building them himself
in the form of self-aware machines. Hod is a professor at Columbia and director of Columbia's Creative Machines Lab.
He's co-authored over 300 articles and co-founded four companies to date.
He believes that artificial intelligence, or AI, worms its way into everything, no matter
if you are a journalist, a politician, a medical doctor, or a business person.
He believes you need to understand what AI can and cannot do
and how it will shape our world. Welcome, Hod, and thank you so much for our conversation today.
Pleasure to be here.
You've talked about the three phases in AI. What are they and where are we now?
You know, AI is a very fast-moving technology, and one of the things that's been tricky is that it seems to come out of left field all the time.
Nothing happens for a long time and suddenly a new technology, a new capability appears and a lot of possibilities unfold.
So I've been trying to think about how to sort all this chaos in AI.
I've seen it sort of move forward in a couple of different phases. So yes, today we are at the third phase, I think, of AI. I can see three more coming, which we can talk about later, but we're now in the third phase, which is something we call cognitive computing.
It's the ability of machines to understand things like video, images, natural language, things that up until maybe a few years
ago, we thought impossible. So that's the third phase. But if you roll back the clock a couple of decades, I would say the first phase, which maybe was 40 years of research, was rule-based.
It was a period where we thought that AI was a machine based on executing rules very systematically: logic, reasoning, chess-like thinking, right?
And there's a lot of science fiction movies around computers executing rules and being very, very rigid.
That was the first 50 years.
It turned out that does not work. You cannot
build an intelligent system based on rules. It's a fantasy. It never did go anywhere.
The second phase, which started in the 90s, was data analytics. It's this idea that you give
machines columns and columns of data and spreadsheets and databases, and then it can
predict the next row and column.
And that's how a lot of stock market stuff is done, predictions, retail, all based on predicting the next row and column in lots of different systems.
That turns out to work very well for certain applications that are very quantitative and orderly, but it doesn't work well for video and audio.
And today we're in this third phase where finally machines can do
something trivial to us, like recognize a dog or a cat. And that is a breakthrough with incredible
repercussions. Can you talk about those repercussions? So, you know, we take it for
granted that we look at a pet and we can see if it's a dog or a cat and we can make decisions
on that. But it turns out that this ability is actually a lot harder than playing chess
or doing something like that. It turns out that if you can tell the difference between a cat and a dog, that's actually an incredible accomplishment. You should be very, very proud of it, because it's something that we take for granted.
It's so difficult to do.
It's just that we're so good at it, we don't even appreciate how difficult it is to do.
And only when we started programming this into computers, we realized how difficult
it is to do that.
So now the computers can understand what they're seeing.
You can do things like drive a car autonomously. The ability for autonomous vehicles to drive themselves is entirely a consequence of finally
figuring out a way to have the AI understand what it's seeing, understand the road.
What's drivable, what's not drivable?
What's an obstacle, what's not an obstacle?
What's another car?
What's a road sign?
All these different things were not possible before. Now they are. And autonomous vehicles by themselves have numerous
repercussions on the economy and so on. And that's just one example. Another example is
medical diagnostics. An AI can look at a camera image of a skin lesion, or an x-ray, or an MRI, and determine whether it's malignant or not, better than the average doctor. That's entirely a consequence of this ability to understand videos and images. The list goes on. Another example that I love
is Amazon Go. I don't know if you had a chance to shop at Amazon Go. For those of you who haven't done that, it's an incredible experience where you go,
you grab anything you want off the shelf, and when you walk out, you still pay. But it's because
there are these cameras that watch what you're doing in the store, and they can understand
what you're putting in your bag and what you're keeping and what you're putting back on the shelf
and so on. So it's an entirely different retail experience and it's entirely enabled by this new
AI. And I can see how this was a cool thing a couple of months ago, but now with COVID and
everything that's happened, you can see how this kind of technology will enable a lot of things
to remain open and remain accessible when we want to reduce in-person interactions and so on.
And this kind of technology, again, will worm its way into
transportation and retail in almost every aspect of our life. These are three examples, and there's
lots more. When you talk about rules-based versus understanding, can you elaborate a bit more with your example of dogs and cats? What is the difference between a rules-based AI with hundreds of millions of images of dogs and hundreds of millions of images of cats versus an understanding AI?
Let's say you wanted to make a system that would detect fraudulent transactions
in a bank, a fraud detection system.
So if you do it rule-based, you hire an expert, and the expert will tell you,
here are the rules for detecting a fraudulent transaction. If somebody spends in one day,
three times more than they spent in the previous month, that's probably a fraudulent transaction.
Flag that transaction, right? So that's a very simple rule. The computer can take that rule.
Somebody can code it in. The computer can run that rule, execute it on a gazillion
transactions a second, and flag all the suspicious ones.
Bam, it works, it's fast, it's efficient, everybody likes it, the experts like it, the
banks like it, customers like it, regulators like it, everybody likes rules.
You like it, it's fast, you understand what it's doing, it's very good.
The only problem is that if you want to improve it,
you got to hire a new expert. You got to figure out a new rule. It's very difficult to improve
this with time. Now, the alternative, the data-driven approach is you don't tell the
computer what to do. You show the computer. You give it examples of a thousand fraudulent
transactions. And you don't say anything. You just show the computer a thousand examples.
The computer can look at these thousand examples and figure out automatically what's the
statistical signature, let's say, of a fraudulent transaction. Look at nuances that we can't even
articulate. And it gets this gut feeling for what is a fraudulent transaction just by looking at a
thousand examples. And after that, it takes it from there and can find more.
And the beauty of that approach is that you don't need an expert. You don't need a rule.
In order to improve it, you just need to give it more examples. And examples are usually easier
to find than experts and rules. And so as that system can keep on getting better and better,
the more examples it sees, the better it gets.
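The two approaches he contrasts can be sketched in a few lines of Python. This is a toy illustration, not a real fraud system: the 3x rule is the expert rule from the example above, and the "learned" version just picks the spending-ratio cutoff that best separates a handful of invented labeled examples.

```python
# Expert rule: flag a day that is 3x the recent daily average (hand-coded).
def rule_based_flag(amount, avg_daily_spend):
    return amount > 3 * avg_daily_spend

# Data-driven: instead of asking an expert, learn the cutoff from labeled examples.
def learn_threshold(examples):
    """examples: list of (spend_ratio, is_fraud). Returns the ratio cutoff
    that classifies the labeled examples most accurately."""
    best_cut, best_acc = 0.0, 0.0
    for cut in sorted({ratio for ratio, _ in examples}):
        correct = sum((ratio > cut) == is_fraud for ratio, is_fraud in examples)
        if correct / len(examples) > best_acc:
            best_cut, best_acc = cut, correct / len(examples)
    return best_cut

# Made-up history: (today's spend / recent daily average, was it fraud?)
history = [(0.8, False), (1.1, False), (1.9, False),
           (4.2, True), (5.0, True), (6.5, True)]
cutoff = learn_threshold(history)
```

To improve the rule-based version you have to hire a new expert and code a new rule; to improve the learned version you just append more rows to `history`.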
But there's a cost to this approach. And the cost is that you don't understand how it's making its
decision. It's opaque. I liken these two approaches to the difference between what we call
intelligent reasoning, logic reasoning, which is one approach to intelligence. And the other one is intuition,
right? So intuition, gut feeling is another form of intelligence. One that you cannot articulate
in rules. You cannot explain, but you got a feeling of what needs to be done. That's the
new AI. It's this gut feeling that the AI developed. It's incredibly powerful. Now,
back to your question about the dogs and the cats. If
I asked you to tell me how you tell the difference between a dog and a cat, I think you'd be hard
pressed to tell me. It's actually pretty hard to explain that. The way you teach it to a child
is that you show them a couple dozen dogs and cats. You don't tell them, "A cat is usually smaller; usually they have pointy ears, but not always." You don't say that. You show them examples.
We humans, we learn by examples.
And that's what a machine needs to do.
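A tiny sketch of what learning from examples means mechanically. The two features and all the numbers here are invented for illustration; real systems learn from millions of raw images, not two hand-picked features.

```python
# Labeled examples: ((weight_kg, ear_pointiness), label). No rules given.
examples = [
    ((4.0, 0.9), "cat"), ((3.5, 0.8), "cat"), ((5.0, 0.95), "cat"),
    ((20.0, 0.3), "dog"), ((30.0, 0.2), "dog"), ((12.0, 0.5), "dog"),
]

def classify(animal):
    """Nearest-neighbor: answer with the label of the most similar example."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(examples, key=lambda ex: dist(ex[0], animal))[1]
```

`classify((4.5, 0.85))` answers "cat" without anyone ever writing a rule like "cats are usually smaller"; the rule is implicit in the examples.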
Are we at an inflection point?
And what does that mean?
So the transition to machines suddenly being able to understand from examples, in deep ways, happened around 2012. It was a very crisp inflection point, not a gradual improvement. And now the machines can understand things by looking at examples.
It is an inflection point in terms of new kinds of applications. Like I said earlier,
driverless cars and understanding video, understanding what people mean when they
say certain things, natural language processing, all these different things.
People just couldn't figure out the rules.
I mean, you should have seen how we taught AI five or six years ago.
It was almost like there's something magical about humans that we can't figure out.
The computers aren't even getting close to what humans can do.
But now, this is all falling apart one by one.
All these things are going away, and we are certainly at an inflection point.
Can you tell us about all these things, one by one, that are falling away?
You know, it's difficult to say because, you know, we play this game in the lab where we
take some arbitrary topic, and we figure out how it can be transformed using these
new technologies.
And it takes a very short time before you suddenly see how to transform whatever it is. Gardening, I don't know. The other day we were thinking about how you could change sock manufacturing. Whatever it is that you
care about, it's transformed by the ability of machines to learn. It's an incredible sort of journey to go industry by industry,
application by application, and see what new possibilities emerge.
It's really infinite.
When you look at startups in this area, the question usually is what not to do.
It's how to prioritize all the different things that you have to do.
You have this new technology, a machine that can understand things and learn what you apply it to.
And so the lowest-hanging fruit from a financial point of view tends to be medical right now: things where you have high-paying jobs that make decisions based on data, mostly images and unstructured data. That seems to be the lowest-hanging fruit for this technology from a financial point of view,
but applications are all over the place. What do you see as some of the most
interesting applications? So to me, the number one application is indeed medical. I think that medical applications are by far the most important in the sense that they
allow diagnostics, for example, to reach the entire planet.
It's not just about improving the accuracy of detecting cancer, for example.
It's about bringing that ability to billions of people who do not even have access to doctors at all.
And this is what AI does.
The same thing with driverless cars. For me that's another passionate area, because so many people lose their lives to transportation fatalities, and that's within fingertip reach to resolve with autonomous vehicles.
So to me, these are very important applications.
There's a whole area of research that we're doing in agriculture, which involves drones that we developed with National Science Foundation support. These drones fly over cornfields, and they can look down at every leaf of every plant
and spot minute signs of northern leaf blight that kills about 13% of crops if untreated,
and they can spray just that plant instead of the entire field.
So imagine what that would do for organic farming, for saving the planet, for increasing yield, for agriculture in general. And that kind of technology can be
deployed for any kind of crop, any part of the world. So these are just small examples. Security is a big thing of
understanding, recognizing, detecting fraud, for example, fraudulent transactions that we
talked about. So security, lots of applications there. The list goes on, really. Remember,
that's just the third wave. There's more to come. As far as that more to come, tell us about AI
teaching other AI. So AI teaching AI, before we talk about these other waves, is one of the drivers.
One of the questions that people frequently ask is, what is driving AI and making it move forward so fast?
After all, it's been around for a long time, so why is it accelerating?
And it turns out that it's accelerating at an exponential rate.
And when I say exponential, I don't mean that figuratively as in getting faster.
I mean that doubling at a very rapid rate.
And there are a couple of factors that are driving this acceleration. Some of them are trivial, like computers getting faster, cheaper, and better at an exponential rate, doubling every 20 months.
Data-driven AI is accelerating because data that's
fueling the AI is doubling every six months or so. The AI brains that learn are doubling every three
months or so in their size. That's an incredibly powerful thing. But I would say the thing that
is accelerating AI the most right now is the fact that AI can teach other AI.
And that is an incredibly powerful thing.
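The doubling times quoted above compound dramatically. A quick back-of-the-envelope calculation, using his figures of 20, 6, and 3 months, shows why the data and model-size curves dominate over, say, five years:

```python
def growth_factor(months, doubling_months):
    # A quantity that doubles every `doubling_months` multiplies by this factor.
    return 2 ** (months / doubling_months)

five_years = 60  # months
compute_growth = growth_factor(five_years, 20)  # 2**3  = 8x
data_growth    = growth_factor(five_years, 6)   # 2**10 = 1024x
model_growth   = growth_factor(five_years, 3)   # 2**20, roughly a million-fold
```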
And again, to give an example, if you have, again, a driverless car, okay, that drives around and it gains experience and it sees some peculiar situation, it can share that
experience with all other cars.
So in a way, they all have this shared experience.
So we humans can have one lifetime of experience of driving,
but a car can have many lifetimes
because it can share the experiences with all the cars.
So in a strange way, the more driverless cars there are on the road,
the better each one of them gets.
This is very different than the way humans learn.
So this hive mind allows them to teach each other. And this is a
very, very powerful paradigm. Your doctor doesn't get better because there are more doctors,
but an AI doctor gets better because other AI doctors experience more things, share that
experience, and very quickly, they learn from each other. So it's a very powerful paradigm.
And you can sort of smell the self-accelerating, the self-amplifying effect of machines teaching
machines.
They get better.
They teach each other.
They self-amplify.
They accelerate exponentially.
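The shared-experience idea can be sketched as a hypothetical "fleet memory" that every car reads from and writes to. The class names, situation strings, and responses are invented placeholders, not any real system's API.

```python
class FleetMemory:
    """Experience pooled across the whole fleet."""
    def __init__(self):
        self.responses = {}

class Car:
    def __init__(self, fleet):
        self.fleet = fleet

    def encounter(self, situation, learned_response=None):
        # First check whether any car in the fleet has seen this before.
        if situation in self.fleet.responses:
            return self.fleet.responses[situation]
        # Novel situation: whatever this car learns is shared with everyone.
        self.fleet.responses[situation] = learned_response
        return learned_response

fleet = FleetMemory()
car_a, car_b = Car(fleet), Car(fleet)
car_a.encounter("deer on highway at dusk", "slow down, do not swerve")
# car_b has never seen a deer, but inherits car_a's experience instantly:
reaction = car_b.encounter("deer on highway at dusk")
```

Adding a third car costs nothing: it immediately benefits from, and contributes to, the same pool, which is why more cars on the road makes each one better.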
Another driver that you call the mother of all designers is evolution.
Can you elaborate on that?
There's a couple of paradigms of artificial intelligence out there. Machine learning, these neural networks, these deep learning systems that I was talking about earlier, is one paradigm, something that learns from examples. It's a particular type of intelligence. That's maybe the way that we humans learn. Okay,
we see things, we learn, we experience,
but there's another kind of intelligence on this planet that is not like that, and that is evolution. Evolution in nature arguably designs amazing things, but it doesn't do that by learning in the same way that we humans do over our lifetime. It learns in a very different fashion, almost more random.
There's a lot of random exploration and survival of things. Increasingly, we see more and more
symbiotic relationships, so things challenge each other in a fruitful way and everybody rises.
Turns out these ecosystems of innovations happen all the time. So that evolutionary process is another kind of AI that turns out to be very fruitful.
And that kind of brings me to the fourth wave, which is the wave of creativity.
And this is where evolution really shines in its ability to create new ideas that are different than examples that it's seen before.
So when you look from examples, you tend to stay inside the box, maybe a little larger than what
you've seen, but it's hard to completely come out with new ideas if you've only seen certain
examples. Evolution is very good at generating thinking outside the box. Again, this artificial
evolution, as we call it, has been around for decades, but now with
computing power being so abundant, you can really let it loose.
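A minimal sketch of the artificial evolution he describes, here evolving a bitstring toward all ones (a standard toy problem, not anything from the lab): random variation plus survival of the fittest, with no examples shown at all.

```python
import random

random.seed(0)  # deterministic for the example
GENOME_LEN = 20

def fitness(genome):
    # Stand-in for "design quality": count of bits set to 1.
    return sum(genome)

def mutate(genome, rate=0.05):
    # Random exploration: occasionally flip a bit.
    return [bit ^ 1 if random.random() < rate else bit for bit in genome]

def evolve(pop_size=30, generations=100):
    pop = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]              # selection
        children = [mutate(random.choice(survivors))  # variation
                    for _ in survivors]
        pop = survivors + children                    # next generation
    return max(pop, key=fitness)

best = evolve()
```

Nothing told the system what a good genome looks like; the combination of random mutation and selection discovers it, which is what lets this paradigm wander outside the box of any training examples.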
It's being used to innovate in everything from engineering creativity, which is like
designing a new circuit or something like that, to even artistic creativity and new
music and art and so on.
When I think about the waves of AI,
that is certainly the next wave. It's the ability of machines to create new ideas that they haven't seen before. Again, one of these things that we think are uniquely human, creativity, but we're
on the cusp of handing it over to machines. And I think at that point, all bets are off because
once you have creative machines, there's no limit to what we can do.
We can start handing over some of the challenges that we can't solve and seeing what machines come up with.
So what happens when AI starts looking inward to itself as opposed to looking outward at the world?
Yeah, so now we're jumping to the sixth wave.
Oh, I'm sorry. Should I ask you for the fifth wave first?
Yeah, so I think the fifth wave, yeah, so let's do that.
Yeah, so the fifth wave is where AI gets a body.
Most of our explorations in AI up until now are sort of intelligence that's very abstract.
It's software that's running inside a computer somewhere. It has sensors, but it doesn't have a physical body.
But AI that has a body, which we frequently call robotics, is something that has lagged far behind.
If you think about robots today, they are super clumsy.
There's no robot that can just walk down the street.
Sometimes you see these crazy movies of a robot that does a backflip.
But the truth is it took them a thousand takes to make it work.
And it's not really something that you can buy or use in any real way yet.
So again, it's like telling the difference between a dog and a cat.
It's something that we take for granted.
We can move our hands in dexterous ways.
We can manipulate things.
We can walk.
But the ability to control physical bodies is an incredibly difficult thing.
When you walk down the street and you're moving 300 muscles in perfect synchrony to
make that step, that's an incredible feat.
It's a lot harder than playing
chess, but we just don't appreciate it because we're so good at it. We don't even think about it.
And again, it's one of those things that we've been playing chess for thousands of years,
but we've been walking around for millions of years. And so that's why we're so good at it.
We don't even understand how difficult it is. That's very hard for machines, and nobody knows how to do that.
So this is a future wave because we don't know how to make a machine that can move around
and manipulate things gracefully.
I hope we'll solve it in the next decade or two.
But right now, it means that it's a lot easier to make an AI that replaces a radiologist
than it is to make an AI that replaces a plumber or a hairdresser.
Anybody, any job that involves working with your hands in an unstructured environment,
an electrician, a nurse, AI is not even close to replacing because we have not figured out
how to make machines that control their bodies in a graceful way.
So that's the fifth wave.
Long ways to go.
After we solve that, and we will, then machines can start introspecting, looking at their own selves, because they'll have a self at that point.
And this is to me the end game, right?
This is this fascinating stage where we
have machines that begin not to model the external world, not to model fraudulent transactions and cats and dogs and how to drive a car and all these useful but external things, but instead start to
model themselves. And this is where the magic happens. When self-awareness, consciousness,
sentience, whatever you want to call it, all this magic that we have in our head is basically this,
we've taken all this ability to model the world outside, which we needed to survive to get to
this point in evolution, and we've turned it inside and we begin to model ourselves,
to model our internal thinking and to model how other people
are thinking. We do this all the time. And that is self-awareness. That's the magic.
And that's the point where I think machines will really gain consciousness. And again,
it's not going to be a black and white thing. It's not going to be a one day they wake up and
say, hello, I am your new master. It's none of that Hollywood stuff. It's going to be very gradual.
There are lots of practical reasons that we want to do this, but it's also a very powerful, risky technology, and there are lots of reasons that we should treat it with caution. But it is going to happen. And it's going to happen within the next century for sure. So is it going to be
20, 30 years from now? I don't know, but it's happening
around the corner, certainly in the lifetime of our children and our grandchildren.
What does it mean to you for computers or machines to have consciousness and be sentient?
It's the ultimate thing for several reasons. One is, what is sentience? Consciousness is almost at the level of what is the origin of life, what is the origin of the universe. It's up there with the seven grand questions. Philosophers, theologians, religious scholars, cognitive scientists, you name it, people have been grappling with this question of what is consciousness, what is self-awareness, for millennia.
And in my humble opinion, we have not made a lot of progress on this
question, in part because we try to think about this in terms of humans, and humans are very,
very complex in terms of sentience. And it's impossible for us to understand ourselves.
It's almost like a paradox. But when we apply it to machines, we can do it. So to me, it's a very
grand question. And to be able to be an explorer on
this frontier, it's like discovering a new world. It's like going to outer space. It's something
where no human has been before. There's many practical reasons also I want to do this
because I think that we are creating a lot of technology. We are relying on this technology. We're handing over our life
to this technology, to airplanes, to cars, to supply chains that feed us. But these systems
right now are very, very fragile. They cannot take care of themselves. It's as if we created
a lot of children, but we haven't educated them to the point where they can take care of themselves.
And we are creating more and more of this technology, but that technology needs to learn to take care of itself.
This is the only way we can have a resilient technological future.
You want your smart factories, your smart cities, you want these systems to be able to be minimally self-aware so they can take care of themselves, recover from damage, know when things are going wrong without being told explicitly, and figure out how to adapt to new situations without needing a human babysitter that reprograms everything when something small changes.
We need to move on to the next level of resiliency of these technological systems,
and that is through their self-awareness. This is why evolution has given us this gift. It's an evolutionary advantage. And we have to give
that gift to the technologies that we're developing. How do you think about machines and emotions?
Emotions, really, I think, are predictions. It's all about predictions about yourself and others that you care about.
And it can be long-term predictions, short-term predictions. It can be predictions that are
positive or negative, but that's what emotions are. Everything from fear to love, I think,
boils down to predictions about oneself and one's immediate environment.
And I think, you know, it's not a very romantic notion, but it's still, if you think about
it that way, you can see that machines can have emotions as well.
Once machines become self-aware, they can make predictions about themselves, about other machines (theory of mind, as we call it in psychology), and about the humans they're interacting with. And these will be emotions. They're not going to be the same kind
of emotions as we humans have. They're going to be different, but they're going to be emotions
nonetheless. And I think, again, there's very practical reasons why we have emotions. It's a
way for us to do this thing or not do that and sort of make value judgments about
possible actions.
That's what self-awareness is about: the ability to foresee the future and to explore
possible paths, weigh actions and determine the consequence of those actions, sort of
simulating them in our mind and weighing possibilities.
And that's what emotions are, I think, at the end of the day,
and machines are no doubt going to have the same type of thing.
We have five senses plus the ability to think and feel. How limited are we as compared to what AI
will be? That's a fantastic question. You know, sometimes I get these questions. I think it's
somewhat of a human-centric arrogance
that people say, well, can a machine ever appreciate the sunset or the taste of chocolate
or something like that? The answer is, I think that machines will be able to sense the world
in far deeper ways than we can. Like you say, we have five senses. Machines can sense the world, can see in colors we cannot see.
They can hear in frequencies we cannot hear.
They can go places we cannot go.
I mean, so they're going to have notions of the world that we don't have words for,
that we cannot imagine.
And so they're going to experience the world in different ways. Because of that, I look forward to seeing what poetry they write, what feelings they
have.
I mean, this is really going to be eventually like meeting an alien species, but it's going
to come from inside.
It's going to be one that we create.
How do we oversee this AI and ensure that tech is used for good?
Yeah, so that's the inevitable question, right?
So long before we get these self-aware machines,
which is sort of the end game,
we're going to go through many decades
where this technology becomes increasingly powerful
and we're going to use it mostly for good,
like we have other technologies,
but there's going to be some bad players
using it for bad things.
And no doubt that's going to happen.
And we have to be cognizant of this.
And this is one of the reasons I think it's very important to do just what we're doing now: everybody needs to understand how powerful this technology is getting and how fast it's moving forward, so that it doesn't remain sort of in secret labs being developed; everybody should realize it.
And we're already seeing how AI can be used to generate fake news, for example.
And if you're a journalist, you need to understand how that works and what it can do and cannot do.
How do we combat that?
You know, we can do some with regulation.
I think that's going to be an uphill battle for many reasons.
The other approach, being a technologist, is AI that watches over other AIs.
I think it's going to be a sort of arms race where you're going to have an app that tells you that
other thing that you're reading is probably fake. That image that you're looking at is fake
because you can't tell, but the other AI can tell. So we're going to have to use AI as tools to help us monitor each other. And it's
going to be an arms race. But again, to put things in perspective, people were very worried about
genetics when it was discovered in the 50s and the 70s. People were worried, when chemistry was discovered in the early 20th century, that people were going to make horrible things.
By and large, we've used genetics and chemistry to do good things, even though these are very powerful technologies. And the same thing will happen with AI. Some people will do bad things, but by and large, we will use this
technology to do amazing new things. And it's really up to us.
You have a very vivid imagination. Can you describe your thoughts on some amazing examples of what this technology will enable?
I'm really thinking about, again, in the short term, you have everything from driverless cars to medical diagnostics and better food, organics, and better yield. That already, by itself, is going to transform huge areas of humanitarian concern, from healthcare to the food supply chain to resilient economies. This by itself to me is a major thing. But if you go beyond that,
you know, when machines can sort of help take care of people, especially elderly people,
I think that's an important area where I think we will increasingly struggle and we need machines to help, again,
in meaningful ways.
I think that's another area where robotics is going to play an increasingly important
role, basically helping us take care of ourselves in lots of different ways.
Is there anything else you'd like to discuss that you haven't already touched upon?
I think we covered probably more than I thought we'd cover. The message that I'm really on a crusade to deliver is that
everybody should understand this technology. There is this myth, almost, that it's mathy, that it's technology, that you only need to understand AI if you're a computer scientist or an engineer. But the reality is that it doesn't
matter what you're doing and where you are and what you're interested in. If you're interested
in humanitarian activities, in finance, in both, whatever it is that you
want to do, this technology worms its way into it. And the big challenge, I think, is that when AI
works, it becomes transparent. That is sort of the difficulty of appreciating how much is entangled in our world already.
So you have also made revolutionary discoveries in multi-material 3D printing.
Can you tell us about what that is going to lead to?
So 3D printing is another passion of mine.
And you might wonder, what does it have to do with AI and consciousness and all that stuff?
And the reason why I'm interested in 3D printing is because it's a way for robots to make robots.
It's a way for all this creativity to manifest itself into physical reality.
And so to me, a robot has a body.
Living creatures have a body and a brain.
The mind is the AI and all that stuff.
And where does the body come from?
We need a way to make these physical things and we need to allow computers to make physical
things and that's where 3D printing comes in.
So we've been trying to develop not just conventional 3D printing, but 3D printing that can print
a robot.
Basically, a complete working robot can walk out of the 3D printer, batteries included.
What does it take to make a printer that could basically give birth, if you like, to a fully
operational robot? So that's the reason I'm interested in 3D printing. And on the journey
of making that kind of 3D printer, we have developed all kinds of technologies that have spun out into companies, or might spin out into future ones, like bioprinting, which is
printing with biology.
That area has now spun out into a company that's making therapeutics and implants.
We're now working on food printing, for example, a new way to create nutritious meals on the fly
from raw ingredients in a completely software-driven way. That hasn't spun out yet, but I think
it will.
I think there are lots of examples of how these 3D printers, again, will worm their way into
different areas.
And again, it's one of these exponential technologies that is growing rapidly.
But ultimately, it's a way for me to make robots that have all these integrated components in them.
What are the three key takeaways or insights you'd like to leave our audience with today?
So, I've been thinking a lot about what these three takeaways could be, and how I could narrow them
down from all this craziness that seems to be going on in the world
of AI. I would say, first of all, that AI is already governing our lives in a thousand different
ways, from predicting the stock market to managing pension plans to grading the essays of job applicants,
you name it. It's already everywhere, but you don't see it. It's transparent.
And once you understand that this is happening,
it's like putting on this lens, these glasses,
and suddenly you see AI everywhere,
and you can see where AI could go.
And I think that's the differentiator
between companies that lead
and those that follow when it comes to AI.
Those that can see all the places
that AI exists and where it can go. So that's the first takeaway, that it's transparent,
but it's already everywhere, you just don't see it. The second, I think, big takeaway
is that it moves forward exponentially, doubling in capacity every few months.
And that is happening because of data acceleration, because of computing power, because
the size of the AI models is growing at an exponential rate, and because of AI teaching AI.
These exponential drivers mean that it starts slow and it accelerates, and it shows no sign
of slowing down.
That acceleration is really, really important to understand.
Unlike other technologies, which arrive and stay put for a long time while you learn them, this is a moving target, and it's accelerating.
And the third takeaway is, I don't know how to put it,
but we haven't seen anything yet.
We're just at the beginning.
There are still many steps to come, because AI right
now cannot do certain things. It cannot manipulate bodies very well. We didn't talk about this,
but AI cannot have a conversation like we're having right now. There's no AI that can understand
natural language like that. So understanding all these things that AI cannot do is really
important.
These are two things it cannot do today, but they are subject to future waves that I think are around the corner.
Thank you so much, Hod, for our conversation today.
This has been fascinating.
Thank you.
It's been my pleasure.
If you enjoyed today's episode, you can listen or subscribe for free on Apple Podcasts or
wherever you listen.
If you would like to receive information on upcoming episodes, be sure to sign up for our newsletter at 3takeaways.com.
Or follow us on Twitter, Instagram, Facebook, and LinkedIn.
Note that 3takeaways.com is with the number 3; 3 is not spelled out.
For all social media and podcast links, go to 3takeaways.com.