SemiWiki.com - Podcast EP286: The Significant Impact of Ambient Scientific Technology on AI Deployment with GP Singh
Episode Date: May 9, 2025. Dan is joined by GP Singh, CEO of Ambient Scientific. With over 20 years of experience, GP has played a pivotal role in shaping the industry, driving 50+ chip tapeouts, including game-changing advancements at FinFET technology nodes. Now, as the CEO of Ambient Scientific, GP brings together hands-on engineering expertise and visionary leadership, with more than 50 patents and five publications to his name.
Transcript
Hello, my name is Daniel Nenni, founder of SemiWiki, the open forum for semiconductor
professionals. Welcome to the Semiconductor Insiders podcast series.
My guest today is GP Singh, CEO of Ambient Scientific. With over 20 years of experience,
GP has played a pivotal role in shaping the industry,
driving 50 plus chip tapeouts, including game-changing advancements at FinFET technology nodes.
Now as the CEO of Ambient Scientific, GP brings together hands-on engineering expertise
and visionary leadership with more than 50 patents and five publications to his name.
Welcome to the podcast, GP.
Thank you very much, Daniel. Thank you for inviting me. It's an honor.
Yeah, it's great to meet you. First, can you tell us what brought you to semiconductors?
So, Daniel, right from the beginning, from my engineering college days, I was extremely interested in hardware in general. And that led me to computer design, computer system design, in my first job, which I did for two years. And then that led to the core of the computer, which is the chip, and in particular microprocessors. So a couple of jobs down the road, I ended up working for a company called Sun Microsystems, a great company, where I got the chance to work on UltraSPARC microprocessors. And since then, I've been hooked. So that's my story of getting into semiconductors.
Yeah, that's very interesting. Why did you found Ambient Scientific? What's the value proposition here?
So, Daniel, the Ambient Scientific story is actually far more personal to me. After Sun Microsystems, I worked for a company called Wave Computing. Wave Computing was working on an AI chip, and the competition was Nvidia, and we were able to develop a pretty competitive chip. While working there, I had a personal tragedy. One of my dear family members, actually my own father, slipped at his home, fell, and most probably hit his head, and there was nobody around to attend to him. He couldn't survive; by the time anybody reached him, he was already taking his last breath. That was a personal shock to me, and I went around looking for a device. At that time, what I wanted was that whatever accident had happened to my father should not happen to anybody else in my family. That was my thinking: it should not happen to my mother, my uncles, anybody in my family for that matter. So I wanted to build an intelligent device, Daniel, that could detect and report if somebody falls or has a more severe problem, like getting attacked or having an asthma attack, things like that.
So I went around looking for a device that could do so. All the devices available at that time, this was the end of 2017, needed to be pushed. I mean, they were all big devices that needed to be put into your purse or in your pocket, and if you got into some kind of trouble, you pushed a button.
But when I started thinking about it, if I were to give it to my mother, for example, and she had fallen, the probability of that person pushing a device or pushing a button could be very low. So this device had to be triggered automatically. Also, there were expensive devices coming onto the market at that time, but they needed charging every day or every two days to keep functioning. I imagined that in my case that would not work, because with any device that needs to be charged, especially in Delhi, people don't get the time, or they don't have the enthusiasm, to keep charging it. So there had to be a device that could detect these problems, that could be worn by the person, always available with the person, and that could run for a very long time on a very small battery. So a very portable device with a very long-lasting battery that is intelligent.
Now, because I was working for a company that was doing an AI chip, I knew how AI works, and I knew that AI could solve this problem. So, as I said, I went around looking for such a device, but such a device was not available, so we started this company to design a very low power intelligent chip, a very low power AI chip, that could be worn as a pendant. Daniel, I'm happy to say that after six years of very hard work we have that pendant today, and very soon I should be gifting it to my family. It will be capable of doing these types of functions: detecting problems, detecting a fall, detecting an asthma attack, detecting somebody getting attacked, automatically, without any prompting.
So that is how we started. Now, designing a chip, or building a chip, Daniel, takes a lot of money. That meant we had to build a company that is viable business-wise. So we started to build a microprocessor that would be very similar to a GPU in capability but would consume far less power, so much so that I could build an AI chip that can be worn as a pendant. We ended up developing this technology, and now I'm glad to say that we do have a technology that can scale down to a very small device and also scale up to a very large device. We have a very viable next generation of what you would call AI computing, or next-generation GPU technology, and it will not only solve my personal problem but hopefully also solve the bigger problem of power consumption in intelligent computing, or AI computing in general.
Yeah, very interesting story. So let's talk about your recent press release. Can you tell us more about it?
It's called the Sparsh AI module. The name is actually a Sanskrit or Hindi word which means touch, Daniel, and this is the module I'm talking about, the core compute module within the pendant. This module uses multiple sensors to detect a fall, and we are already working on the software, the algorithms, that will also detect other human activities, like walking, sitting, being idle, running, falling. Again, we talked about the asthma attack, we talked about fall detection. This module has all of this capability.
And Daniel, actually, we are finding very interesting applications of this pendant that we ourselves had not imagined, and our customers are able to build their own software using this module for their own applications. So think of this as an intelligent personal assistant that can help you in many emergency conditions. It can help human beings, but it can also help with machines, detecting the need for predictive maintenance, or doing some other advanced AI computing work.
Interesting.
So how is it so low power?
What applications can be developed using this?
So, Daniel, as you may already know, making AI low power is almost a paradox; AI computing takes more power than non-AI computing, or general computing. So we had to develop the technology at very deep layers. We went all the way down and ended up inventing our own circuits that are able to do the core mathematical functions that AI computing requires, what is called matrix computing. Along with that, we solved two other problems, data transfer and operand transfer. And we solved these, Daniel, at the core circuit level, meaning the circuit blocks themselves are able to perform almost 95 to 99 percent of these functions without needing help from the software.
Now we take this technology that can do lots of this matrix computing, put our own instruction set on top of it, and build some generic computing, what you would call a generic SIMD computer, a single instruction, multiple data computer. We put that together and end up with a core. This core is very much like a GPU core, and it can run instructions. What that means is that this one core can be an independent AI computer all by itself, and it can implement any algorithm that an application designer wants to build. Then we can replicate these cores many times to build larger and larger devices. For now, we have a 10-core device we call GPX 10, which is basically 10 AI cores bolted together with some other generic CPU cores and other sensor fusion structures to make it an SoC for what you would call intelligent IoT applications.
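To make the idea of matrix computing and SIMD concrete, here is a minimal Python sketch. It is generic NumPy, not Ambient Scientific's instruction set, and simply shows that a dense AI layer reduces to multiply-accumulate operations, with one SIMD step applying the same multiply-accumulate to a whole vector of lanes at once.

import numpy as np

# Illustrative sketch only: generic NumPy, not Ambient Scientific's ISA.
# A dense AI layer is a matrix-vector multiply-accumulate ("matrix computing").
def dense_layer(weights, inputs, bias):
    # weights: (out_features, in_features); inputs: (in_features,)
    return weights @ inputs + bias

# One SIMD step: a single "instruction" applies the same multiply-accumulate
# to every output lane in parallel.
def simd_mac(acc, lane_weights, lane_inputs):
    return acc + lane_weights * lane_inputs

# Example: a 4-output, 8-input layer built from SIMD MAC steps.
rng = np.random.default_rng(0)
W, x, b = rng.normal(size=(4, 8)), rng.normal(size=8), rng.normal(size=4)
acc = np.zeros(4)
for j in range(8):                      # one MAC instruction per input element
    acc = simd_mac(acc, W[:, j], x[j])
assert np.allclose(acc + b, dense_layer(W, x, b))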
We have some example applications already. We already talked about human activity recognition. We have voice detection, keyword spotting applications, phrase detection, and natural language processing. We also have what you would call predictive maintenance for machines. There is a customer that is building production assembly line equipment which will detect if there is any fault in the production line, or a fault in the item, the mechanical item, that is being built, in this case, for example, wire bonding of electronic chips. They're able to do so using this chip. The point I'm trying to make is that it's a programmable chip, and many applications can be built on it. Some of them are provided to customers by Ambient, by our team, but customers can also build their own applications. We have a complete toolchain which allows customers to build their own applications. One of the features of our first chip is that you can connect many sensors, up to 20 different sensors, so we can do very intelligent sensing using our chip, and it can be used for monitoring health, monitoring or detecting activities, monitoring machines, and monitoring the environment. Actually, one of our customers just purchased a development kit and they're monitoring birds using our chip.
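As a rough illustration of the kind of sensor-driven application GP describes customers building with the toolchain, here is a hypothetical Python sketch of a threshold-based fall detector on accelerometer samples. The interview does not describe Ambient Scientific's actual SDK, so the function names and thresholds below are invented purely for illustration.

import math

# Hypothetical example only: these names and thresholds are not from Ambient
# Scientific's toolchain; they just show the shape of a fall-detection rule.
FREE_FALL_G = 0.4   # assumed threshold: near-weightlessness while falling
IMPACT_G = 2.5      # assumed threshold: spike when hitting the ground

def magnitude(ax, ay, az):
    # Combined acceleration (in g) from one 3-axis accelerometer sample.
    return math.sqrt(ax * ax + ay * ay + az * az)

def detect_fall(samples):
    # samples: time-ordered (ax, ay, az) readings in g.
    # Flag a fall as a free-fall dip followed by an impact spike.
    saw_free_fall = False
    for ax, ay, az in samples:
        g = magnitude(ax, ay, az)
        if g < FREE_FALL_G:
            saw_free_fall = True
        elif saw_free_fall and g > IMPACT_G:
            return True
    return False

# Example: standing still (about 1 g), then a free-fall dip, then an impact.
print(detect_fall([(0, 0, 1.0), (0, 0, 0.2), (0.5, 0.3, 3.0)]))  # True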
Very interesting. So can we just take this up a level?
Can you tell me how it is different
from the various Edge AI and MPU chipsets
that we see on the market today?
That's a very good question, Daniel, because there are a lot of people who claim they are building edge AI solutions, either in software or in chips. Now, in most cases, you will see an edge AI chip that has been designed for a particular application. So let's say there is an edge AI chip that can be used for detecting voice, detecting a phrase, or detecting a wake word. It can work for sound, but it cannot work with any other type of sensor. Some chips can work with two or three different types of sensors, but they don't have as much programmability; they are only designed for one application. Then there are other types of chips, Daniel, that are programmable, where the user can design their own application using these programmable chips, but those chips consume a lot more power. So in the market, you will see a lot of edge AI solutions that either have limited capability or consume a lot of power. To the best of my knowledge, we are probably the only chip, the only solution available, Daniel, that has the programmability yet with power consumption so low that it can enable battery-operated devices. When you take our core and multiply it many times, and we are actually working on such a project right now, in the future you will be able to do as much computing in a laptop or just a tablet form factor, Daniel, as today requires a complete big server, or even a chassis, to perform. So we have built a very efficient and yet very flexible programmable chip, a programmable technology. That's how we are very different from most of our competitors.
Okay, so what is the core technology or the architecture invented by Ambient? How were you able to crack analog computing when so many others have failed?
It's a very good question, and first of all rather difficult for me to answer, because when we try to answer this question we start sounding arrogant, Daniel, so forgive me in advance. But yes, there have been other companies who have tried working on this technology, this so-called analog AI computing technology. In our case, Daniel, okay, let's first understand the problem with analog AI technology that most companies, most of our competitors who tried such things, faced. Analog can be very difficult to design, number one. And even after being designed, once the chips are manufactured, they can give unreliable results. Our team and I, through the experience we have had in the past, understood this boundary of analog, the limitations of analog, very well. So we have modified some very sensitive parts of the analog, Daniel, to make them digital. In other words, we have developed a hybrid technology that sits between analog and digital. It's not entirely analog, not entirely digital; it takes a middle ground. But in doing so, we have been able to preserve about 80 percent of the advantage of analog, that is, the efficiency of analog, while not suffering from the reliability or variation problems that analog design does. So taking this hybrid approach, and knowing exactly where the boundary lies between what works and what does not work, has given us the edge to succeed where some of our competitors or colleagues may not have.
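As a minimal numerical sketch of the variation problem GP describes, the following Python compares an exact digital multiply-accumulate with an analog one perturbed by random device variation. The 2 percent per-multiply error is an assumption chosen for illustration, not a figure measured on Ambient Scientific's silicon.

import random

# Illustration only: the 2% per-multiply error is an assumed figure that just
# shows why unmitigated analog MACs drift away from the exact result.
def digital_mac(weights, inputs):
    return sum(w * x for w, x in zip(weights, inputs))

def analog_mac(weights, inputs, sigma=0.02):
    # Each analog multiply is perturbed by random device variation.
    return sum(w * x * random.gauss(1.0, sigma) for w, x in zip(weights, inputs))

random.seed(0)
w = [random.uniform(-1, 1) for _ in range(256)]
x = [random.uniform(-1, 1) for _ in range(256)]
print("digital:", round(digital_mac(w, x), 4), "analog with variation:", round(analog_mac(w, x), 4))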
Okay, and what kind of products can be created using your technology, and does it scale all the way to larger chips?
Yes, Daniel, it does scale all the way to larger chips. We are already working on products in 12 nanometer and also in 5 nanometer, and we have already signed up projects going all the way to 2,000 cores, or even bigger.
Daniel, if you allow me, I would like to put forth that we think we have the solution that will provide the roadmap for AI computing for the next one and a half to two decades. This is my thinking, because as we go, you can see that we are already facing power consumption problems so severe that there is actually a heat wall, or a cooling wall, so to say. We are not able to cool the GPU-based chips that are in the market today. So our technology would be able to provide the roadmap for the next, as I said, one and a half to two decades. There may be some other architectures out there that might be as efficient, but I don't know about them so far. So we think we are pretty much the most viable solution to provide this roadmap until quantum computing, or some other computing, becomes viable.
So how does it solve the current problems of data center AI compute, and what are the benefits? Is your team working on a data center chip?
Currently we are not working on a data center chip, but we are working on a chip that will go into edge servers, or edge compute devices, and it will provide more performance than a data center chip does today. But then again, we are talking about two to three years from now, when a big portion of data center AI computing, Daniel, would have moved to the edge anyway. So this is an edge device capable of data center work, but we think that by the time we get there, in the next two to three years, a big part of data center computing would have come out of the data center into the edge anyway. So what I'm saying is that we are working on a chip that will be deployed at the edge in the next two to three years, and while it is capable of working in a data center, we will find much better applications and much better usability for the chip at the edge itself. We'll have larger chips to go into data centers.
Okay, last question, GP. Any predictions for the future of the semiconductor and compute industry? How do you see this unfolding over the next five or ten years?
So I personally think, Daniel, that in semiconductors, the fundamental innovation that we as an industry have not yet made is in material science, mostly in material science. That means only a new material will make a very dramatic change in the computing industry. For the next two decades, or even three, my personal prediction is that we will still be relying pretty much on CMOS. As far back, I think, as four or five years ago, I wrote an article on LinkedIn, Daniel, saying that we would keep scaling CMOS technology all the way down to around 0.1 nanometer. I still believe that will happen before quantum computing really arrives, if it comes.
Well, it was a pleasure meeting you, and thank you for your time.
Thank you very much, Daniel. I very much enjoyed talking to you and it's an honor to be here.
That concludes our podcast. Thank you all for listening and have a great day.