Utilizing Tech - Season 7: AI Data Infrastructure Presented by Solidigm - 2x31: Taking Artificial Intelligence on the Road with Christophe Couvreur of Cerence
Episode Date: August 3, 2021. Most people think AI in vehicles means autonomous driving, but there are a lot of other applications for the technology. Ever since Mercedes-Benz introduced Linguatronic voice response in the 1990s, vehicles have included verbal control and feedback mechanisms. In this episode, Christophe Couvreur discusses the lessons of bringing AI to vehicles based on his experience at Nuance spin-off, Cerence. As these systems have improved, they have reached the so-called uncanny valley, where people become frustrated by their limitations despite tremendous advancement over the last decade or so. Looking beyond voice response, we can see many driver assistance technologies added to vehicles in the future, and many of these will be ML powered as well. Three Questions: When will we see a full self-driving car that can drive anywhere, any time? How long will it take for a conversational AI to pass the Turing test and fool an average person? How small can ML get? Will we have ML-powered household appliances? Toys? Disposable devices? Guests and Hosts: Christophe Couvreur, VP of Product at Cerence Inc. Connect with Christophe on LinkedIn here and the company page Cerence.com. Frederic Van Haren, Founder at HighFens Inc., Consultancy & Services. Connect with Frederic on Highfens.com or on Twitter at @FredericVHaren. Stephen Foskett, Publisher of Gestalt IT and Organizer of Tech Field Day. Find Stephen's writing at GestaltIT.com and on Twitter at @SFoskett. Date: 8/3/2021 Tags: @CerenceInc, @SFoskett, @FredericVHaren
Transcript
Welcome to Utilizing AI, the podcast about enterprise applications for machine learning,
deep learning, and other artificial intelligence topics. Each episode brings experts in enterprise
technology together to discuss applications of AI inside and outside today's data center.
Today, we're discussing taking AI on the road, but not in the way that maybe you think.
First, let's meet our guest, Christophe Couvreur.
Hello, I'm Vice President of Product at Cerence. Cerence recently spun out from Nuance Communications,
and we've been providing voice assistants, virtual assistants, to the automotive industry for the last 20 years.
Hi, my name is Frederic Van Haren.
I am the founder of HighFens, which is a consulting and services company
active in the HPC and AI markets.
You can find me on LinkedIn, and my Twitter ID is @FredericVHaren.
And I'm Stephen Foskett, organizer of the Tech Field Day event series,
including the AI Field Day event, and also a host of Utilizing AI. On this show we regularly discuss machine learning in things like industrial IoT and
automation. But of course, occasionally, we've also talked about autonomous driving because
autonomous vehicles are one of the primary, well, kind of high profile rock star applications for
machine learning. Ask almost anyone what AI is used for these days, and they'll say, yeah,
it's those self-driving cars. But of course, they haven't seen a lot of self-driving cars, and they
are kind of skeptical about them. But that being said, artificial intelligence generally, and
machine learning in particular, are used in a lot of other applications in the vehicle. And in fact,
these applications are used all the time, whether it's a voice response system or an intelligent navigation system or something else.
You probably use this more often and, well, more practically than you might autonomous driving.
So, Christoph, it's great to have you here to talk about applications of artificial intelligence in vehicles that aren't
self-driving. Yeah, in fact, people don't realize it, but machine learning has been used for
virtual assistants, voice assistants, speech recognition. It went by many names since the
late 90s. It was always based on machine learning techniques, not the latest, not using
deep learning and neural networks as is commonly done today, but using more statistically based
techniques at the time, but it was still the same principle. Machine learning: collect data,
use that to teach the system, to train the system, to recognize some patterns and use those recognition results
to drive some actions.
And it started in 1998 when Mercedes offered something called Linguatronic in their S-Class
flagship car, where you could simply place a phone call by dictating
a phone number into the system. You could say, call 91278 and
so on. Since then the technology has evolved. We can now do multiple things: entering addresses,
requesting some music to be played and all of this has been done with machine learning and is getting
better and better at it
and doing more and more things in the car with it.
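As a concrete illustration of the pattern described above (collect data, train the system to recognize patterns, and use the recognition results to drive actions), here is a minimal sketch of a voice-command intent classifier. The utterances, intent labels, and actions are invented for illustration and are not Cerence's models or APIs.

```python
# Minimal sketch: train a tiny intent classifier on collected command
# transcripts, then map the recognized intent to an in-car action.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# "Collect data": example utterances labeled with the intent they express.
training_data = [
    ("call my mom", "place_call"),
    ("dial nine one two seven eight", "place_call"),
    ("get me to one wayside road burlington", "set_destination"),
    ("find the nearest pizza place", "set_destination"),
    ("play some jazz", "play_music"),
    ("put on the latest album", "play_music"),
]
texts, intents = zip(*training_data)

# "Train the system" to recognize patterns in the commands.
model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
model.fit(texts, intents)

# "Use those recognition results to drive some actions."
actions = {
    "place_call": lambda utt: f"dialing the contact mentioned in: {utt!r}",
    "set_destination": lambda utt: f"routing to the destination in: {utt!r}",
    "play_music": lambda utt: f"queuing music for: {utt!r}",
}

utterance = "call the office"
intent = model.predict([utterance])[0]
print(intent, "->", actions[intent](utterance))
```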
Yeah, in fact, like every car that I
have that was built in the last 10, 15 years
has an interactive voice response system built into it
that can do things like you mentioned, controlling
entertainment, navigation systems, climate controls, all that sort of thing.
And this sort of application is a lot more practical than sort of the pie-in-the-sky,
"car, take me to the pizza restaurant" or whatever.
I mean, it's much more practical, isn't it?
Yeah, I mean, the most common use case, what we see in our data that people are doing with the system, is placing phone calls.
I mean, we all place phone calls when we're in the car or entering destinations.
I mean, typing an address on a touchscreen or searching for a point of interest is pretty painful.
Being able to do it by voice, like "find the nearest pizza place" or "get me to One Wayside Road,
Burlington, Massachusetts" or "call my mom," is extremely convenient. And that's something that
has been around for some time and now allows you to do more and more sophisticated types of
interactions with your car by voice, in a safe way, eyes free, hands free.
So it's not about something fancy or showing off.
It's really about allowing people to interact naturally with their vehicle by
voice and in a safe way, eventually getting to some kind of a virtual co-pilot
that can sit next to you and assist you during your journey.
Yeah. So you did mention that AI has evolved really since the late 90s.
Certainly in voice, I mean, we have seen great advances.
Now, when you look at what you're doing today,
what do you consider your challenges today in bringing AI inside the car,
which is significantly different
from bringing AI outside the car?
I think there are a few specific challenges to operating in the car.
One of them is to deal with the pretty harsh environment.
It can be extremely noisy.
You can be in a convertible.
You can be in a traffic jam in Mumbai, surrounded by a lot of different noises.
You can be on a moped in Roma.
And yet you want the system to work seamlessly for you.
So despite the big advances, the conditions remain extremely challenging compared to a quiet living room, for instance.
The other thing that we need to accomplish is getting the system better integrated with the vehicle. Because if the only thing you want to do is place a phone call, it's not that
difficult. But a real system should be smart enough to integrate with the driving conditions,
with the sensors of the cars.
It should know where the car is, what you're doing at that point in time, and adjust like a human
would do to your request in that context. For instance, why do we need to press a button
or explicitly request the car by some special command to listen to us? Why can't we simply, like we would with a human,
say, hey, can you get that done for me? If I have a passenger in my car
and at one point I ask somebody on the back seat to do something, it's clear to people who is being addressed. Can the
system have that level of naturalness, distinguishing what is meant for it
from what is a natural conversation between people in the car?
So all of those aspects, I think, come down to getting through that last mile. The systems are getting
pretty good at purely recognizing what you say, despite the noisy conditions I was
mentioning, but getting the full interaction to human levels is where the challenge is today.
So those challenges, are they data related?
Does that mean you would like to get more data so you can identify those?
Or is it more of a hardware challenge, getting GPU, so to speak, in the car to do more real time?
What do you think the issues are?
It's a combination of both.
Because the more data you have, the better you can train your system.
Better-trained systems often tend to be bigger.
If you have more data, you can train bigger systems.
But if the systems are bigger, they get more accurate, they get better.
But they're also more demanding in terms of the
computational power you need to run them. And even in the cloud, you sometimes hit the limits of what
a system can do, but a car has more limited computational capabilities today. It's powerful
by all standards. I mean, a car is more powerful than a smartphone, but it's not running 20,000 GPUs just
to respond to your voice query.
Yeah, exactly. That's exactly what I was going to talk about:
if you look at the self-driving car manufacturers, you know,
they're going to fill your trunk with hardware, GPUs and storage.
Is that something you want? I mean,
I guess that's not the way you want to go, right? You
want to have probably something simpler in the car that might communicate over 5G or so.
The way we tend to think about it is in terms of locality of the data. If your data has
to be in the car, then you want to do the processing on that data
in the car. For instance, your navigation system is in the car, and you don't want your
navigation system to stop working when you lose your connectivity: you go off the grid,
you're on a country trip, you lose your 5G connectivity, and then your system stops working?
No.
If your GPS data, your maps data is in the car, then you need the voice interaction for that to happen in the car.
If you want to control by voice, say, your air conditioning system, why would it stop working
because you lose your network connectivity?
The airco is right there in front of you. But at the same time,
if you need to request weather information and get a nicely generated weather report back to you,
it's perfectly okay not to get that info while you're not connected. I mean, people will
associate the information and the ability of the system to respond to the query to the connectivity
status. So what we believe is the future is those hybrid systems
that will do part of the processing locally,
especially when the data is local,
keep the processing, keep the AI where the data is,
and that can be on the edge or in the cloud.
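To make that hybrid, data-locality idea concrete, here is a toy routing rule (a sketch, not Cerence's actual arbitration logic): domains whose data lives in the car are always handled by the embedded engine, while cloud-only domains are used only when the car is connected. The domain names are assumptions for illustration.

```python
# Toy sketch of hybrid edge/cloud arbitration based on where the data lives.
EMBEDDED_DOMAINS = {"navigation", "climate", "phone"}   # data is in the car
CLOUD_DOMAINS = {"weather", "web_search", "streaming"}  # data is in the cloud

def route_request(domain: str, connected: bool) -> str:
    """Decide where a voice request should be processed."""
    if domain in EMBEDDED_DOMAINS:
        # Local data, local processing: keeps working off the grid.
        return "embedded"
    if domain in CLOUD_DOMAINS:
        # Cloud data: only answerable while connected.
        return "cloud" if connected else "unavailable_offline"
    # Unknown domain: prefer the richer cloud models, fall back to embedded.
    return "cloud" if connected else "embedded"

print(route_request("climate", connected=False))   # embedded
print(route_request("weather", connected=False))   # unavailable_offline
print(route_request("weather", connected=True))    # cloud
```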
And there have been a great deal of technology advances
as well in terms of processing in the
vehicle, so as you said, we may not need a trunk full of GPUs soon, because we can have much
smaller and more power-efficient processors. But you know, one of the things that occurs to me
is that there is a great risk here of, I don't know, over-gadget-ifying, I don't know if
that's a word, but making the car
too much of a voice control gadget. And I think that there has been some feedback or, I don't
know, backlash against sort of the "hey, assistant, tell me such and such" kind of culture that we live
in now, where everyone has a voice assistant listening to them. But frankly, some people
aren't all that satisfied with the response they're getting from this voice control system. I would think that that would be a problem
in vehicles because, of course, you don't want people to get really frustrated and upset with
your car. No, I mean, you want the system to be as natural as possible. So it should be seamless.
You should be able to interact with it the way you would interact with a human. It shouldn't be doing everything that a human can do. I mean, you should not ask your car for
sentimental advice, for relationship advice. That would probably be a bit too optimistic,
even if in some cases that may actually not be a bad idea. But here it's about setting the right expectations and avoiding something known
as the uncanny valley in AI. The uncanny valley is something that was discovered in the 70s
by a Japanese researcher in artificial intelligence. And what he found is that as the systems would
get closer to human behavior, the affinity of people for it would increase and at one point
drop before going back up.
So you would expect that people would relate better to a system as it gets
smarter.
But that's not quite what you see.
You see that as the system gets smarter, people start to like it more.
But then you have that gap, that uncanny valley, where it's almost human, but not yet.
And instead of perceiving the system as a smart computer, people start perceiving it as a poorly performing human. And that's a bit, I think, what we have been hitting with voice assistants.
They are getting uncannily good at a few things,
but they are not human level.
And that leads to these perceptions
that they are somehow off, somehow wrong.
And that's really what we hope
the next generation of technologies
and computational systems will do: actually allow the system
to jump that last limit.
And a good example of that is the voice of the system,
the text-to-speech voice of the system,
which for a long time was very robotic, very synthetic,
then became almost human, but still with those little distortions,
glitches that gave it away, and that, at least to me, sounded more annoying
than the purely robotic
version of the past. And the latest generation of speech synthesis is actually almost indistinguishable
from human speech, so it has passed, in my view, that uncanny valley, and it's now close enough that it
can pretend to be human comfortably. And the rest of the system has to go through the same evolution,
to get the full interaction to the same human level
of satisfaction, to pass that uncanny valley
that we often end up in today.
Yeah, I agree with that.
I think, well, first of all,
I think people accepting voice as a way to communicate
is, of course, defined by the quality, and speech synthesis has made very good progress
and sounds very natural.
Even every time I think it can be better, it's actually improving.
But I think it also depends on the generation, right?
Younger people are much, much more willing to try things than people that didn't necessarily grow up with voice.
And I think voice, of course, is a risk in the car.
But I think the dashboard by itself, you know, using AI in the dashboard, is a bigger concern to me. I mean, if you think about it, there was a, what was it?
A video a couple of years ago where a Jeep was being hacked
and they turned up the radio and disabled the brakes and so on.
I mean, I think AI in the car, definitely there are some serious implications.
But I think overall, it's part of AI.
There's always concerns about AI. The question
is, how do we respond to it? And what do we do about it? Yeah, I think AI will never be perfect.
So AI is just like every human artifact. It has defects, it has pros and cons. And it's a matter of using it for what it is good at and not setting unrealistic
expectations. We see that also with autonomous driving today: with level three autonomy, where
the car would drive itself under the supervision of a human, the main issue is not the self-driving
part, the main issue is the handover back to the human. When the car has to pass
on the control back to the human, that's where the biggest problems occur today. And this
is really this interaction bit between the humans and the system that has to be designed
to be robust to common human imperfections also. And we tend as engineers to design beautiful systems,
assuming that everybody will be disciplined and will know how to operate them. But that's not
the case. You need to design the system to be robust. And speech systems, like any other
systems, have to deal with the way people naturally interact, even if that's not the
logical way of interacting.
It occurs to me that another challenge is, of course, as we've said, the vehicle is a very adverse environment, not just for microphones, but also for computing systems generally.
I had to replace the hard drive in my car's dashboard computer, which is pretty weird. But of course, also, they last a
long time as well, especially some of these vehicles that you're mentioning. Those Mercedes
cars from the 1990s are probably still on the road. And we have to think about the longevity
of the technology and how this technology can continue to integrate with modern systems going forward.
I think a lot of those old cars have all sorts of systems that are no longer compatible with electronics on the road today.
Yep. And that's going to be one of the big challenges we're going to face in the future.
Cars are getting connected. At least all newer cars
do have or will have over-the-air update capabilities,
and that will allow them to be continuously updated,
but only as long as their manufacturers decide to do so. If you have a five-year-old iPhone,
even if Apple is great at supporting all their hardware, you don't get a lot of updates
anymore. So the challenge is that not that many people have five-year-old phones, but many people
have five-year-old cars. So the expectations in terms of the level of maintenance that those
systems will require are tremendous. It's not just about storing replacement hardware that you can dispatch to the garage
when you have a problem with your hard drive. It's making sure the software gets continuously updated,
refreshed, and is kept up to date with the latest regulations and ways of operating.
And frankly, the history of that has not been very positive. You know, I have a 2008 car that shows the wrong time on the dashboard clock because it's synchronized with GPS and there's a GPS overflow bug and they're not fixing it.
So the clock is wrong and can't be fixed.
You know, I've got another car that, you know, frankly, there's a problem with the navigation system that can't be fixed and won't be fixed
because the manufacturer threw up their hands and has given up on it. How do you reassure someone
when you're talking about bringing artificial intelligence into the dashboard that this isn't
going to happen to them? I think there are several things. One is that AI remains very segmented. So there are fail-safe mechanisms.
There are pieces of the system that are protected.
Let's take an example.
You may ask, why do I need a different operating
system for the head unit, the screen on the central console,
and for the display in the instrument panel,
which is also an LCD screen today?
Are the two screens connected to the same unit?
Often not, because the requirements on the part that will be displayed in front of you,
the part that will display the speed,
are much more stringent than what you have
on the media player on the central console.
So what you may actually have is one single system with a hypervisor and virtual machines,
one running a very hard real-time operating system, very protected, that will take care
of the mission-critical functions like displaying the speed, and another one that will be, well,
a "let's reboot it when things go wrong" type of operating system like Android or a more classical Linux.
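A hypothetical sketch of that split: one hard real-time, protected partition for safety-critical functions such as the instrument cluster, and a rebootable partition for infotainment. The field and partition names are illustrative and do not correspond to any particular hypervisor's configuration format.

```python
# Illustrative description of two guest partitions on one in-car hypervisor.
from dataclasses import dataclass, field

@dataclass
class Partition:
    name: str
    os: str
    safety_critical: bool
    restart_policy: str                 # how the hypervisor reacts to failures
    functions: list = field(default_factory=list)

cluster = Partition(
    name="instrument_cluster",
    os="hard_realtime_rtos",
    safety_critical=True,
    restart_policy="never_lose_display",   # the speed must stay visible
    functions=["speedometer", "warning_lamps", "adas_alerts"],
)

head_unit = Partition(
    name="head_unit",
    os="android_or_linux",
    safety_critical=False,
    restart_policy="reboot_on_failure",    # "let's reboot it when things go wrong"
    functions=["media_player", "navigation_ui", "voice_assistant_ui"],
)

def host_for(function: str) -> str:
    """Pick the partition that runs a function; safety-critical wins."""
    for partition in (cluster, head_unit):
        if function in partition.functions:
            return partition.name
    return head_unit.name   # anything new lands on the rebootable side

print(host_for("speedometer"))    # instrument_cluster
print(host_for("media_player"))   # head_unit
```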
It seems like the new model is to lease a car, not to buy anymore, where leasing makes more sense than anything else.
I mean, there is in the news this new movement, you know, the right to fix, or the
right to repair, right? Where there's this concept where, like, you know, you brought up the
Apple iPhones, right? After a while they stop supporting the model, so what do you do with it?
It's usable, but it's not desired. Or you have a trade-in model, which
is kind of crazy for a car. But I mean, do you see the car industry then moving towards this direction
where they, quote unquote, could replace the dashboard on an older model,
saying, here's a new dashboard with the upgraded technology?
Or does that not make sense financially?
Not that much in the sense of replacing the hardware.
It's getting more
feasible technically, but it is not yet the direction that industry is taking. What the industry is definitely looking at is replacing the software, upgrading the software. And if you have
purchased the car recently, you may have seen at one point that your dashboard looks different than it was a few days before after you were
asked if you did not mind downloading the latest version.
And that's something that you will
see happening more and more.
So it's less a hardware update than it's going
to be about updating the software.
Right.
But that means also that the OEMs will sometimes tend,
a bit like what you see today with cell phone makers,
to over-dimension.
That is, you may get a car that has more capabilities
than the software supports.
And over time, those capabilities will become enabled.
Right, this feature creep, right?
They keep on adding more features because people want that
and it works fine in the new cars with the new hardware
and the updated hardware.
But those added features, you might not be interested in them,
but they're still consuming the disk space and CPUs.
I mean, it's a whole new view on the technology, right?
If I look at the Tesla,
Tesla doesn't have a dashboard per se,
you know, it has a screen, right?
But it's still,
the Tesla still has this architecture
with a mission critical part running on one virtual machine
and the other part that is more flexible.
And Tesla was a pioneer, but other OEMs are also looking at the same way of enabling
more features over time.
Teslas have had cameras for a long time, but they still don't have a real self-driving
capability.
They have Autopilot, which is partial self-driving, but no full self-driving capability.
But all those cars have been equipped with cameras,
and over time, that hardware gets used to some effect.
But, yeah, go ahead.
I was going to ask, do you feel that with your technology,
you're pushing the car manufacturers to innovate,
or is it the other way around? Is it the car manufacturers wanting more and pushing companies like yourself to kind of innovate and
bring new products to the market? Yeah, the car manufacturers want to differentiate, so they
all try to innovate, especially the premium brands that want to offer something that the others don't.
And then the mainstream brands want to replicate what the premium brands are offering
and to offer it also to their customers. If you think of ABS brakes, which have been a huge
improvement in safety, they were launched by Audi in the 70s as a premium feature on their high-end models.
Now they're mandatory on all cars.
So they have propagated through the model lineups over the years.
And we are going to see the same thing with AI features.
Now, some AI features will be gadgets.
They will go away.
Some AI features will be found to be valuable in terms of increasing safety, of increasing the enjoyment that people
have in their car, bringing something helpful to them. And those will spread and become
widespread. And as they do, they'll become cheaper and also more reliable.
Yeah, I'd like to focus on that for a few minutes if we can and talk about, so we've spent a lot of time talking about voice response as an example of taking AI on the road. And of course, we've
talked about autonomous driving, but there are a lot of other things that machine learning could do
in a vehicle. And when we were talking previously before recording, a lot of these other ideas came up.
So, for example, you mentioned situational awareness in terms of voice response.
But that's also important for other controls of the car.
You know, I was saying like autonomous windshield wipers and things like that.
All of that could be ML powered as well.
Yeah. So machine learning can do simple things for you. It can do more complex things for you.
One area that sees a lot of developments lately is advanced driver assistance,
like lane departure warnings if you're getting out of your lane, reminders about the speed signs, detection of various road signs,
and maybe sending you a note:
I noticed that you have not marked a full stop at the last three stop signs.
Maybe you want to pay attention to that in case there is a police car
at the next one.
So these could be the kinds of things that AI and machine learning will make possible.
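As a toy version of the stop-sign reminder mentioned above, here is a sketch that looks at logged speed samples from recent stop-sign approaches and decides whether to surface a gentle note. The thresholds and log format are assumptions, not a description of a real ADAS pipeline.

```python
# Toy check: did the car come to a full stop at its last few stop signs?
FULL_STOP_KPH = 2.0     # below this, treat the car as stopped (assumed threshold)
SIGNS_TO_REVIEW = 3     # how many recent stop signs to look at

def missed_full_stops(speed_traces_kph):
    """speed_traces_kph: one list of speed samples per recent stop-sign approach."""
    recent = speed_traces_kph[-SIGNS_TO_REVIEW:]
    return sum(1 for trace in recent if min(trace) > FULL_STOP_KPH)

def reminder(speed_traces_kph):
    missed = missed_full_stops(speed_traces_kph)
    if missed == 0:
        return None   # nothing worth mentioning
    return (f"I noticed you did not mark a full stop at {missed} of your last "
            f"{SIGNS_TO_REVIEW} stop signs. You may want to pay attention to that.")

# Three recent approaches: rolled through the first two, stopped at the third.
traces = [[32, 18, 9, 6], [28, 15, 8, 5], [30, 12, 4, 0]]
print(reminder(traces))
```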
Inside the car, we have cameras.
So you have more and more cameras inside the car.
And they will become mandatory pretty soon because of regulations that promote safety.
And those cameras are there to monitor the driver to detect drowsiness.
And with those cameras detecting the drowsiness,
you can do a lot of extra things.
You can detect that the driver is holding a phone to their ear
instead of keeping their hands on the wheel.
You can detect a driver that may be smoking,
even if they have a rental car
that is marked as non-smoking.
And a few other things like that.
So those systems will enable a lot more.
They can also be used, for instance, to identify that your son is in the car with your keys. So
maybe you should dial the horsepower of your Porsche back to 90. Now, how does that work? I mean, if I'm a rental car company and I rent the car to somebody and they're smoking, is this like a post-mortem, after the fact, when they bring the car back? That's the engineer's question for us: how do
you do it? With the camera, in real time, you can detect what passengers are doing. It's called cabin
monitoring. You can detect if they are smoking, you can detect if they are holding a phone,
you can detect if they are watching in the back instead of watching in front of them.
You can detect a dog that may have been forgotten on the back seat.
So all of those things you can detect in real time
with the camera, then what do you do with it?
Do you simply have a little gentle reminder
to the driver about it?
Do you flag that to the agency?
And do they call you in to ask you to pull over
and leave the car there?
I mean, all of that is something that can be configured.
I mean, the AI capabilities will provide the information.
Whether we use or abuse the information
is more a human decision then.
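To illustrate that last point (the AI provides the information, and what is done with it is a human decision), here is a sketch in which cabin-monitoring detections are just events, and a configurable policy chosen by the owner or the fleet operator decides the response. The event names and policies are hypothetical.

```python
# Sketch: detections are events; a configurable policy decides the response.
DEFAULT_POLICY = {
    "driver_drowsy": "alert_driver",
    "phone_at_ear": "gentle_reminder",
    "smoking_detected": "log_event",
    "dog_left_on_back_seat": "notify_owner",
}

def handle_event(event: str, policy: dict = DEFAULT_POLICY) -> str:
    action = policy.get(event, "ignore")
    # In a real system these actions would trigger HMI prompts, logs,
    # or back-end notifications; here we just report the mapping.
    return f"{event} -> {action}"

# A rental fleet might choose a stricter response than a private owner.
rental_policy = {**DEFAULT_POLICY, "smoking_detected": "notify_agency"}
print(handle_event("smoking_detected"))                  # owner car: log it
print(handle_event("smoking_detected", rental_policy))   # rental car: flag it
```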
I was going to ask about that. We talked about AI in the car.
Are you working on AI in a network of cars? I mean, is there any interest in communicating with other cars regarding traffic or anything like that? Do you see it more like a network, or is it just within a car and it stays within the car? I think there are multiple things there. You can use multiple cars to aggregate information.
So, in terms of machine learning, you may then be able to do some learning from a collection of cars.
A great example of that is traffic. I mean, traffic data comes from aggregating the
trajectories of many different cars. And that's what allows you to make traffic predictions,
tell you that there's an expected 10 minutes delay
on that road versus your original travel plans.
And this data can be aggregated.
But what you can also do, instead of aggregating everything together, is connect cars directly,
one by one.
Then you can start to have vehicle-to-vehicle connectivity. You can have
vehicle-to-infrastructure connections. And a lot of those things are getting more attention these
days. It's as much about the infrastructure to enable those connections as about the AI to exploit the
data that comes out of it. So an example of that is that there are some new standards being proposed that allow
vehicles to communicate with infrastructure, so that when you approach traffic lights,
the car would be informed of the red and green patterns of the traffic lights and then could
adjust its speed so as not to have to brake at the following lights. I mean, if you've been traveling to Germany,
you know that there is something like that,
assuming that you drive a perfect 50 kilometers per hour.
But why does it have to be 50?
If your car is aware of how many cars
there are on the street in front of it,
at what speed they move,
when the red will turn green and vice versa,
then the car can
predict what is the best motion pattern to hit that sweet spot, not having to brake,
not having to accelerate, and reducing the fuel consumption or electricity consumption
as a consequence.
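A back-of-the-envelope sketch of that traffic-light idea: given the distance to the light and the window during which it will be green, pick a legal speed that lets the car arrive during green without braking. The inputs here are simplified assumptions; real vehicle-to-infrastructure deployments use standardized signal phase and timing messages, which are not modeled.

```python
# Toy green-light speed advisory: pick a speed so the car reaches the light
# while it is green, within legal limits, instead of braking and re-accelerating.
def advised_speed_kph(distance_m, green_start_s, green_end_s,
                      min_kph=30.0, max_kph=50.0):
    """Return a speed in [min_kph, max_kph] that hits the green window, else None."""
    # Arriving exactly when the light turns green bounds the speed from above;
    # arriving just before it turns red again bounds it from below.
    fastest = max_kph if green_start_s <= 0 else (distance_m / green_start_s) * 3.6
    slowest = (distance_m / green_end_s) * 3.6
    low, high = max(slowest, min_kph), min(fastest, max_kph)
    if low > high:
        return None      # no legal constant speed reaches this green phase
    return high          # prefer the fastest feasible speed

# Light 300 m ahead turns green in 20 s and back to red in 45 s.
print(advised_speed_kph(300, 20, 45))   # 50.0: arrives during green at the limit
print(advised_speed_kph(300, 40, 45))   # None: cannot legally make this green phase
```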
Yeah, and that's actually widespread.
They're implementing it here in Northeast Ohio, where we're sitting as well, so that if you drive the speed limit all the way through town, you'll be able to hit all the lights green.
And actually, that's a wonderful feature because it encourages people to obey traffic laws and speed limits because they know that they will have a less congested and a less aggravated drive.
And I think that that actually may be a brighter spot
for a lot of these machine learning features. You know, I hate to say it, but I kind of don't want
my car to yell at me for looking in the wrong direction or, you know, doing something it wishes
I wouldn't do. But I really do want it to do things like remember to turn the headlights on at night, even if I don't remember
it myself, or to help me to, you know, get better fuel economy or, you know, drive in a safer way
or alert me to some problem up ahead. Those are things that I do want it to do.
Yep. And that's definitely one of the directions we see. I mean, the driver or the user of the car should be the final decision makers.
If you don't want those warnings, if you want to disable them, well, the system should be smart enough to figure that out.
And even if it's not smart enough, you should be empowered to stop it.
Because the driver is always the final decision maker in those situations.
Right. I mean, I'm really a fan of it, for the simple reason that, you know, I'm trying to be cautious in traffic, but most
of the traffic accidents I know of are, you know, people that are intoxicated or really
weren't paying attention. And I would feel a lot more secure if cars would help those
people, right? When they get behind the wheel and they think they're really driving straight,
it's really not the case at all. There are pros and cons. I think it
just evolves, and just like with anything else, it's technology and innovation, and it's interesting
to see how it goes, right?
And to Stephen's point, it's not just about voice.
There's a lot of other things going on within the car that can be used with AI that helps.
And it's definitely also a political choice, a societal choice.
How much freedom, I mean, AI is just a tool.
At one point you can apply it in a limited way or you can abuse it.
Different countries may make different choices about what they will let the
AIs do or not do.
What if the AI can detect that you're intoxicated?
Should it still let you drive under your own responsibility, warn you
but let you drive if you decide to do it anyway, or should
it forbid you from doing it? That's not a technical question at that point; that's
a political or societal question. Yeah, exactly. And it's not even a question for the
manufacturers, and I think that's another thing that we have to consider as well. So, this,
honestly, we could talk about this for a long time, but we're actually running up against the end of our window
here.
So before we finish, I do want to do our traditional thing
here on Utilizing AI, which is we've
got three questions for you that we have not
warned Christoph about.
He doesn't know what questions we're going to ask,
but of course, we kind of pick them
to match the topics that come up.
And I got to tell you, a couple of these questions came up in our conversation. So I'm
going to go ahead and ask. Christophe, when will we see a full self-driving car that can drive
anywhere at any time? If I'm honest, my response is never. I think anywhere at any time with the full flexibility of a human,
not in my lifetime and probably not in the lifetime of my kids.
So I think self-driving cars will be able to drive
under a broad range of conditions, in a broad range of places, but anywhere, anytime, like in a flooded subway
section, no. I think this is not going to happen because those systems will not be really smart.
They will react to what they've been trained on. And that's where humans still outperform any machine today:
dealing with the really unexpected.
Yeah, I can think of some snowy evenings that there's no way an autonomous vehicle
would have been able to drive home.
So, all right.
Number two, another thing that came up in our conversation
was convincing people that they're talking to a person
instead of an AI.
So when will we have conversational AI that, you know, voice conversational AI that can pass the Turing test
and fool the average person into thinking they're talking to another person?
I think that there are such AIs today that achieve that, but they cheat. So there was a famous example of a machine
that passed the Turing test
by pretending to be a 12-year-old Russian boy
with poor English.
And people couldn't actually tell
it was actually a pretty dumb computer
because of that little trick.
Having a machine that can pretend to be
convincingly human to most people
on a broad range of topic, I would say probably five to ten years from now you can be in a chat
room and have someone you cannot tell from a chat bot on the topics of the chat room.
Now, having a computer you could take on a date
and talk about anything, that's still much further out.
I don't want to go on that date.
Definitely not.
All right, finally, we've been talking
about taking AI on the road.
How about taking it in other places?
How small can we get ML?
Can you imagine disposable machine learning
or machine learning powered toys?
Or we talked about like condiment jars
or shelves in a grocery store.
How small and cheap will AI get?
I mean, in time, Moore's Law continues.
So the processors keep getting cheaper.
Either you can get more power for the same price
or you can get the same power for a lot less money.
So if I look at it today,
what would have been considered
to be extremely high-end speech recognition 20 years ago,
in my field, the one I know best,
you can get for a few cents on a chip today
that you can embed in a remote control or in a toy.
So this is already there today.
There is some level of machine learning
available in chips that cost cents or even
a fraction of a cent today.
So this is there.
Now, the most advanced stuff, the stuff
you see at Google I.O. conferences or at the GTC,
yeah, those still require extremely powerful computers to run on. And it will take a while
before you can get that at a much cheaper operating point. So I think that everything
will become cheap and disposable, and you'll be able to have a conversation with your salt shaker
about how much salt you should put on your fries.
Yeah.
Cool.
I didn't say when, but I'm sure it will happen.
All right.
Well, thank you so much
for that strangely dystopian salt shaker vision.
I appreciate it.
And thank you so much for the conversation today.
Christophe, where can people connect with you
and continue the discussion on AI and other topics?
Yeah, so they can reach me on LinkedIn
under Christophe Couvreur, my regular name.
And they can also find some information
on the cerence.com blog,
where I sometimes post,
and other people at the company
also will post articles that could be interesting
for people interested in AI.
How about you Fred, what's new with you?
Yeah, so consulting in AI and HPC,
heavily focusing on data management
and more specifically, you know,
helping enterprises understand the data they have and organizing and managing their data in a scalable fashion, you know, think
many petabytes.
And I can be found on LinkedIn under Frederic Van Haren and on Twitter as @FredericVHaren.
And you can find me on most social media sites at S Foskett.
I do the Utilizing AI podcast weekly, and we will also be taking the podcast
on a little break here for a little while, and we will return in September of 2021 with
season three of Utilizing AI after our summer vacation. So thank you very much, Frederic.
Christoph, thank you for joining us. Also, I'm going to announce my brand
new company, AI-powered salt shakers and ketchup packets. It's going to take the world by storm.
I guarantee it. That's what I'm going to be working on on the break. But in the meantime,
there are plenty of episodes of Utilizing AI if you missed them. Please do give us a rating on
iTunes, a little salt and pepper never hurt, and share the show with your friends.
This podcast is brought to you by gestaltit.com, your home for IT coverage from across the
enterprise. But for show notes and more episodes, you can go to utilizing-ai.com. Find us on Twitter
at utilizing underscore AI. And we'll see you for season three weekly coming in September. Take care.