Utilizing AI 3x18: AI Everywhere, Even in Surprising Places with BrainChip
Episode Date: January 18, 2022

BrainChip's neuromorphic AI technology has long been the talk of the industry, and now the Akida processor is available for purchase. We invited Rob Telson, VP of Worldwide Sales for BrainChip, to return to the Utilizing AI podcast to give Chris Grundemann and Stephen Foskett an update on the Akida processor. As of today, Akida is available for use by developers and hobbyists to explore neuromorphic compute at the edge. BrainChip enables five sensor modalities: vision, hearing, touch, olfactory, and taste. BrainChip's architecture allows incremental on-chip learning at extremely low power, potentially bringing this capability to some surprising places, from home appliances to the factory floor. Another differentiator of the BrainChip solution is its event-based architecture, which can trigger on events rather than sending a continual stream of data. As of today, the BrainChip Akida AKD1000 PCIe development board is available for purchase so everyone can try out the technology.

Three Questions:
Chris: When will we see a full self-driving car that can drive anywhere, any time?
Stephen: Are there any jobs that will be completely eliminated by AI in the next five years?
Girard Kavelines: What is it that scares you about AI in today's industry?

Links: shop.brainchipinc.com

Guests and Hosts:
Rob Telson, Vice President, Worldwide Sales, BrainChip. You can find out more information at shop.brainchipinc.com. For questions you can email sales@brainchip.com.
Chris Grundemann, Gigaom Analyst and Managing Director at Grundemann Technology Solutions. Connect with Chris on ChrisGrundemann.com or on Twitter at @ChrisGrundemann.
Stephen Foskett, Publisher of Gestalt IT and Organizer of Tech Field Day. Find Stephen's writing at GestaltIT.com and on Twitter at @SFoskett.

Date: 1/18/2022 Tags: @BrainChip_inc, @SFoskett, @ChrisGrundemann
Transcript
I'm Stephen Foskett.
I'm Chris Grundemann.
And this is the Utilizing AI podcast.
Welcome to another episode of Utilizing AI,
a podcast about enterprise applications for machine learning,
deep learning, data science, and other artificial intelligence topics.
Chris, a few months ago, we had a company on the podcast
that got a lot of attention, and that company is BrainChip. Do you remember that episode?
I do. I also remember a couple of AI field days where BrainChip really stole the show as well.
Really, really interesting stuff here, basically building chips that mimic neural nets.
So doing that in hardware, which is really interesting.
Yeah. And the other thing that was interesting is that they're doing it on a very, very small scale, so that literally these chips could be almost anywhere.
We talked about all sorts of crazy applications of them, and we saw them doing learning in real time.
But the big question that people had at the time is, when can I get my hands on this thing?
Is this just vaporware? Is this just a promise? When can I get a real one?
And so what we wanted to do was invite on somebody from BrainChip to answer that question definitively.
And so that's why we've invited
Rob Telson as a guest. Rob, nice to have you. Well, guys, thank you for having me. I'm really
looking forward to the conversation today. So Rob, give us a little background. Who are you
and where do you stand with BrainChip? Good question. So I'm responsible for sales and
marketing. And I've been with BrainChip for a little over a year and a half, and I've seen this company just flourish as we've taken our technology, which is a neuromorphic technology.
It's a bit different than what the traditional deep learning accelerator looks like or GPUs and the traditional AI architecture of today. And what we've done is we've been able to take this technology and really
apply it to the edge AI inference market,
which, you know, has $46 billion of potential as of today.
And we see that actually growing.
So my job has been to get the sales machine moving in the right direction,
establishing some marketing momentum and see BrainChip move to this next level from
product development to commercialization. Yeah. And that was really important because like I said,
a lot of people, when they heard about BrainChip said, this thing sounds phenomenal.
And then you guys do your classic demos where you're training the chip right there, right in
front of us and showing us that it works. But there wasn't yet a chip; it was all simulated, I think in FPGAs, or just simulated generally. There's a chip now, right? Tell us about the announcement.
Oh, well, I'll tell you this. Since we've last talked to you guys or participated in the AI field day, we have been on just a rocket ship of excitement. And a lot has changed within
BrainChip. We've started by licensing our IPs to a couple of key major companies in the industry,
one being Renesas and one being MegaChips in Japan. Beyond that, we had our chip validated,
and it's now a full production chip, which we call the AKD1000. The chip itself is called Akida, so AKD1000.
And we also introduced development systems.
There are two development systems: one is a Shuttle PC-based development system, and one is a Raspberry Pi-based development system.
And these were meant for plug-and-play technology.
You can purchase them on our website. These are exciting times, and I think we can talk a little later in the show about what we've got going on, but we've got a lot happening from a business standpoint as we move to commercialization. It's good times for BrainChip.
That's really exciting. It's going to be cool to see some folks getting these in their hands and having developers play with them, for sure. Just to take a quick step back: I know we've mentioned this in passing a couple of times, and maybe folks did see previous presentations,
but I just want to point out that neuromorphic means, by the definition I'm looking at anyway, mimics neurobiological
architectures present in the nervous system. So we're really talking about an artificial brain
here. Why go that route versus, you know, just looking at the existing supply of GPUs, TPUs,
and the other kind of more standard processes that are out there for artificial intelligence? Chris, that's a great question. And I think, you know, I like to explain
this a couple of different ways. Let's start with the way a traditional DLA or GPU or TPU functions today: whatever it's doing to process the information and get some type of answer with a level of accuracy, it's got to go through tons and tons of calculations. It's all mathematical. Okay. So let's just talk zeros and ones for a second. It has to go through all this computational analysis to come up with some type of level of accuracy for what it's looking for. In doing so, it's going to burn a ton of power. It's going to burn a ton of energy. And it's going to take some time. To a human eye, we might not even notice that time, but from a computational standpoint, it matters.
With neuromorphic, things function very similarly to a brain, and the way we humans process with our brains is very unique.
For example, I'm talking to you.
You're looking at me.
You just picked up your cup of coffee, which you're grabbing with your hand. That's your senses. You tasted it, right? So you're using your taste. But what you're really doing right now is listening to every word
I'm saying to see if I'm going to say something that's this little nugget of information. And so
your brain's just fundamentally saying, okay, I can do all these multiple operations all at once,
but right now I'm going to consume most of my energy on the listening portion.
And that's how the neuromorphic architecture functions. So what that means is as we move forward with introducing AI into applications, especially on edge-based devices, where we have
a lot going on, we can't stack five or six chips, each one having some type of functionality.
So with the neuromorphic architecture, especially with Akida that we have at BrainChip, we can have three, four,
five different functionalities going on at the same time on a device. And we pride ourselves
on focusing on five sensor modalities, two of which are the traditional ones, vision and hearing. That's where most AI is applied today. But we are very excited about tactile sensing or vibration, so touch, and also smell, or olfactory, and then gustatory, or taste. So once you start applying all these functionalities, very similar to how the brain functions, that's where it gets really exciting.
And with traditional AI, you know, I don't want to say you can't do it.
It's just restricted.
And so with the neuromorphic architecture, and specifically with what we have with Akida, being able to do multiple functionalities on a device consuming extremely low power,
we're talking microwatts
to milliwatts. That's where it gets really exciting. And when we think of the edge,
when we think of applications on the edge, we're talking about the traditional watches, phones,
tablets, wearables. But what about all the new devices that will be introduced to the home?
And then what about electric vehicles? That's an
edge-based device. Who's the first guy that's going to get to a thousand miles on a charge?
So they're continuing to introduce compute, but they're continuing to demand that compute
is lower power, faster, and so on. And that's where we see BrainChip really impacting
this $46 billion market as it continues to evolve. I think that was the thing that really got us.
Not to spoil anything here, but one of the things we do in our podcast is we ask all of our guests
three unexpected questions. And one of our three questions that we ask guests, we won't ask you
this one, Rob, is how small can machine learning get? And will we have machine learning in toys,
in appliances, even in disposables? And many, many of the guests have said that absolutely,
yes, we will. But that's quite a ways off because we're still working on it. But it seems to me, after hearing from BrainChip previously and hearing what you just said,
you almost specifically answered that and said that we absolutely will have machine
learning, neuromorphic processing in everything, even low-powered devices, even inexpensive
devices, even perhaps in disposables?
I would say yes. I feel strongly about that. There are device manufacturers out there that
maybe the whole device isn't disposable, but components of it will be disposable,
and you're going to have compute in that. What we highlighted at the two Tech Field Days we were a part of is still evolving: if you go to our YouTube channel, where all of our content is, you can see some of the latest demos that we're doing, and it's going to blow your mind. That's at BrainChip Inc on YouTube; that's our channel, and we have all of our content there. But the fact of the matter is that you eventually will. We're demoing wine tasting, for example, where Akida can tell you whether it's a Chardonnay or a Syrah. Okay, not the most practical application,
but it's showing you basically the depth of what you can do with AI
and putting it into applications that we as consumers will use on a daily basis.
And what makes Akida so unique is its ability to do learning on-chip
without having to go back to the cloud or depend on any external communication.
So that provides you with a level of privacy, a level of security.
Also, what I think about is a level of customization.
What do I want as a consumer?
So as consumer devices continue to evolve and companies that are supplying technology
to these consumer applications, the ability to do some customization is going to be key.
So for example, I'm a coffee guy.
And so when I wake up in the morning,
eventually there will be a coffee machine that recognizes that I've entered the room
and turns on. But it will know that I want, I don't know, a double espresso, and it will make
me a double espresso. Well, when my wife walks in the room, she might want a latte and it will have the ability to recognize what her specific
desires are for a coffee or whatever. Or there's been demonstrations of smart refrigerators where
refrigerators can smell and determine whether produce is actually rotten or how much life is left in that produce. Now, all of this is, you know,
the very early stage and it's not, you know, consumers aren't getting access to this technology
yet, but, you know, we've all been around the block. So when this turns the corner,
it's going to turn in a tidal wave, not a tide pool. So what is it about the Akida processor that enables this? Because
what you're describing, frankly, it all sounds great. I think we're all like on board. Yeah,
I want my refrigerator to tell me if my eggs are rotten. I want my coffee machine to know that I
like a double or something. But what is it that makes BrainChip special, as opposed to AI everywhere?
Yeah. So going back to what I said earlier, it is the architecture. Because it's based on the neuromorphic architecture, it enables you to have functionality that other traditional architectures aren't capable of.
And when I say that, I mean that more from a power consumption standpoint.
And the other thing is, when you start thinking about the decision-making that you want these devices to make, going back to what I said about on-chip learning: you can be a guest at my house and I could say, hey, Stephen, do you like coffee in the
morning? I do. Okay, well, let's hit this little learn button on this device. And now it knows what
your preferences are when you enter the room. That's a really neat feature. Now, take that one step further and let's sit in the cabin of a vehicle. And I do have
teenagers. And when I get a really cool car one day, I would love to have the capability of saying, you know, only Rob drives this vehicle. So it knows who I am by voice or by face.
And if one of my kids tries to get in the
car and take it for a ride when I'm not around, the vehicle is going to say, I'm sorry, you're
not Rob. But one day, you know, when one of my kids proves to me that they're worthy of driving the vehicle, then, you know, we'll allow them that opportunity. Doing all this without
having to go to the cloud or retraining a network is where you save a ton of time, a ton of energy,
and a ton of cost. Because when it comes to machine learning, it's not an inexpensive process
to develop networks or data sets to do this. So having the capability of doing
it on the device becomes a tremendous advantage. Absolutely. And I think, you know, we've seen that
definitely as we talk more and more about the edge in general, but also specifically about AI at the
edge or edge AI. And we've seen this, I think, from a lot of other use cases and scenarios that
we've talked about where, you know, just moving a data center into your home is obviously impractical.
But also moving a data center's worth of data, you know, from your home or wherever your location is back to the cloud can be just as expensive and time consuming.
And so this, you know, really enabling this kind of low power envelope, low space envelope, low heat envelope, I'm guessing as well.
You know, AI actually happening right there, you know, in real time.
And then you only have to send results back instead of sending the data back.
I mean, it makes a ton of sense to me.
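To make that pattern concrete, here is a minimal Python sketch of the "send results, not data" loop Chris describes. Every name in it (read_sensor, classify, publish, the threshold value) is an illustrative stand-in rather than anything from BrainChip's SDK; the classify call is where an on-device accelerator would do its work, and the uplink only ever carries a few bytes per detected change.

```python
import json
import random
import time

CONFIDENCE_THRESHOLD = 0.9          # report only confident detections (assumed value)

def read_sensor():
    """Stand-in for a camera/microphone/vibration read."""
    return [random.random() for _ in range(16)]

def classify(sample):
    """Stand-in for on-device inference; an accelerator would run here.
    Returns a (label, confidence) pair."""
    label = "banana" if sum(sample) > 8 else "orange"
    return label, random.uniform(0.5, 1.0)

def publish(event):
    """Stand-in for the uplink (MQTT, HTTP, LoRa, whatever the site uses)."""
    print(json.dumps(event))

last_label = None
for _ in range(100):                # bounded loop for the sketch
    label, conf = classify(read_sensor())
    # Transmit a few bytes only on a confident *change*, instead of
    # streaming every raw sample back to a data center.
    if conf >= CONFIDENCE_THRESHOLD and label != last_label:
        publish({"ts": time.time(), "label": label, "confidence": round(conf, 3)})
        last_label = label
```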
We've talked a lot about some consumer applications here, right?
So, you know, the smart toaster, the smart blender, the smart coffee maker, the refrigerator.
Are there also use cases, you know, in the enterprise? I'm thinking of things like digital signage
and maybe things beyond that.
Maybe the things on the factory floor
or in other plants that can use something like this.
I strongly believe so.
I think that's the one area where
when all of the introduction of AI comes into play,
we're gonna be extremely impactful.
Again, we're at the very early stage of this whole process. I've used the phrase, we're at the tip of the iceberg, and I strongly believe
that. But yeah, I think the factory floor is one. For example, and we've shown a demo of this again in some of our content: with one-shot learning,
you would have a camera on the device,
wherever that is, let's just call it a system.
You take a couple of shots of an orange and you start rolling oranges down the conveyor belt,
it's gonna recognize they're all oranges.
And so then you throw some bananas in there
and it's gonna be able to pick bananas,
go left, oranges go right.
It just really starts to simplify
the whole industrial application
environment. Take it one step further. Let's talk about vibrational analysis. Some of these machines
that are out there represent millions, tens of millions, even hundreds of millions of dollars of investment.
Being able to recognize a sound or being able to recognize a vibration, which is unique, which can highlight maintenance
issues that potentially will happen, is critical.
And these are things that don't exist today, but will be easily implemented with Akida
and technology as it continues to evolve.
So we see the industrial environment taking off.
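Rob doesn't describe BrainChip's implementation here, but the idea of learning a machine's healthy vibration signature and flagging deviations can be sketched with conventional signal processing. The Python sketch below uses an FFT baseline and a simple distance score; the signals, sample rate, and fault frequency are all invented for illustration.

```python
import numpy as np

RATE = 8000                          # samples per second (assumed)
N = 1024                             # window length
t = np.arange(N) / RATE

def spectrum(window):
    """Magnitude spectrum of one vibration window (e.g., accelerometer data)."""
    return np.abs(np.fft.rfft(window * np.hanning(N)))

# 1. Learn a baseline from windows recorded while the machine is healthy.
rng = np.random.default_rng(0)
healthy = [np.sin(2 * np.pi * 50 * t) + 0.05 * rng.standard_normal(N)
           for _ in range(50)]
baseline = np.mean([spectrum(w) for w in healthy], axis=0)

def anomaly_score(window):
    """Relative distance between this window's spectrum and the baseline."""
    return np.linalg.norm(spectrum(window) - baseline) / np.linalg.norm(baseline)

# 2. A developing fault often shows up as energy at unexpected frequencies.
faulty = np.sin(2 * np.pi * 50 * t) + 0.5 * np.sin(2 * np.pi * 347 * t)

print(anomaly_score(healthy[0]))     # small: machine sounds normal
print(anomaly_score(faulty))         # large: flag for maintenance
```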
We see medical devices. A while ago, we were involved in some COVID detection activity that got accuracy levels of over 95%. And this goes back to where we have disposables. Yeah, there are going to be devices out there at some point which you'll be able to use to recognize disease or recognize different toxic gases and so on.
And you'll need to be able to learn on the edge without retraining your networks and
getting machine learning and data scientists involved.
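The details of Akida's on-chip learning aren't spelled out in this conversation, so the sketch below illustrates the general idea with a deliberately simple stand-in: a nearest-prototype classifier over fixed embeddings, where "learning" a new class is just storing a mean vector from one or two examples, with no retraining pass. The embed function and all the sample vectors are hypothetical.

```python
import numpy as np

class OneShotClassifier:
    """Nearest-prototype classifier: 'learning' a class means storing the
    mean embedding of a few examples; no gradient retraining is involved."""

    def __init__(self, embed):
        self.embed = embed           # fixed feature extractor (frozen network)
        self.prototypes = {}         # label -> prototype vector

    def learn(self, label, samples):
        """The 'learn button': show one or two examples of a new class."""
        vecs = np.stack([self.embed(s) for s in samples])
        self.prototypes[label] = vecs.mean(axis=0)

    def predict(self, sample):
        v = self.embed(sample)
        return min(self.prototypes,
                   key=lambda lbl: np.linalg.norm(v - self.prototypes[lbl]))

# Toy embedding: identity. In practice this would be the penultimate
# layer of a pre-trained network running on the device.
clf = OneShotClassifier(lambda x: np.asarray(x, dtype=float))

clf.learn("orange", [[1.0, 0.2], [0.9, 0.3]])   # a couple of shots of oranges
clf.learn("banana", [[0.1, 1.0]])               # one shot of a banana

print(clf.predict([0.95, 0.25]))    # -> orange: send it right on the belt
print(clf.predict([0.20, 0.90]))    # -> banana: send it left
```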
Yeah, I want to make sure that that's really clear to the people that are listening.
One of the things that was interesting about this, I mean, my smartphone has a neural processor in it now. It's not that it's just a low-power processor, but that it has this ability to do incremental learning.
And that to me was the really powerful demonstration, perhaps even more powerful than the low-power aspect of the chip: what Rob just described. And that is, I think, that the chip operates the way people wish AI operated.
In other words, like you said,
I'm gonna show the chip a banana right here, right now. I'm not going to have to do a big retraining. I'm not going to have to go
back to the factory and get a data scientist involved to describe bananas versus oranges.
I'm just going to say, hey, device, this is a banana. This is an orange. Now I want you to
differentiate them. And then the other thing
about it that's really interesting is that it has this event-based system where it, instead of just
continually churning and chunking and churning and chunking, it's basically going to say, oh,
there's been an event here. It's a banana-related event. It's an orange-related event. And from a machine learning practitioner's perspective,
that is actually a very different architecture than what we get from a lot of the conventional
AI out there. And the fact that it all, that it's doing this like in real time is very,
very different from what we're seeing from many other machine learning libraries that could theoretically be running on lower-powered devices. Yeah, the other example I use, and Chris, you asked about this earlier in regards to the neuromorphic architecture, is to think of it as zeros and ones. The traditional AI engines of today process all the zeros and all the ones. But when we talk about events,
zeros are not events. Zero times zero is zero. Zero times one is zero. Zero times a million is
zero. So those are not events. You don't have to process those in order to get to what you're
looking for. And that's the neuromorphic architecture. So it's going to focus on the events and the ones
and figure out how to get to an accurate assessment of what it's looking for.
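Rob's zeros-and-ones framing maps directly onto sparse, event-driven computation: if an activation is zero, nothing downstream of it needs to be computed. Here is a small NumPy sketch of that idea (not BrainChip's implementation, which does this in dedicated hardware; the layer sizes and sparsity level are invented):

```python
import numpy as np

def dense_layer(x, W):
    """Conventional compute: every multiply happens, zeros included."""
    return x @ W                                  # always len(x) * W.shape[1] MACs

def event_layer(x, W):
    """Event-based compute: only nonzero inputs ('events') propagate.
    Zero times anything is zero, so those rows of W are never touched."""
    out = np.zeros(W.shape[1])
    for i in np.flatnonzero(x):                   # iterate over events only
        out += x[i] * W[i]
    return out

rng = np.random.default_rng(1)
W = rng.standard_normal((1000, 64))
x = np.zeros(1000)
active = rng.choice(1000, size=20, replace=False) # 2% activity, 98% silence
x[active] = rng.random(20)

print(np.allclose(dense_layer(x, W), event_layer(x, W)))   # True: same answer
# The event path touched 20 rows instead of 1000. With activity this sparse,
# the work (and on dedicated silicon, the energy) scales with the number of
# events, not with the size of the layer.
```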
Super interesting. And definitely, I do a lot of work with IoT and these industrial applications,
as maybe you can tell here. And I just see the potential of what if you could make every sensor
a smart sensor, where it's not just relaying information back through a gateway, through a network, through this whole stack, and then pushing information back out.
But you're actually making decisions right there, you know, at the sensor.
So you've got maybe, you know, sensor actuator pairs, this kind of thing where you can be doing real work, you know, and this is deep, deep edge, right?
I mean, these are, you know, sensors you could attach to just about anything out there.
That's really, really exciting stuff, for sure. You know, and I don't want to put words
in your mouth or get too far out in front of us here. But as we're talking about this, right,
and we're talking especially about the kind of taste sensors and smell sensors, right, when we're looking into that stuff, and being able to attach that to
a neuromorphic chip, you know, is there something down the line here of kind of the lab on a chip
medical testing, like out in the field out in the wild, that could come from this?
I hope so. I truly hope so. One of the things we really want to focus on, and something I bring up on the podcast I do and talk about with all of our guests as well, is what we call beneficial AI. Let's call it like it is: we want to see good things come from this.
And there's such a depth and a breadth of what you can do.
But one of the things that gets me excited
is being able to do remote analysis
or remote medicine per se.
So having labs in areas where we don't have them, or just having access to technology in remote areas,
I think is very important. If people in the medical profession in remote areas, or in countries that don't have all of this technology today, had the capability to make some decisions or do some analysis with a level of accuracy that could help them save someone's life, or something to that extent, I think that is very important.
And so we are very excited about those uses of AI out in the global South or, you know, just something that's outside of, you know, North America and Europe. There is
definitely a lack of exposure of AI to a lot of folks out there that could be doing really
interesting things with this. And part of it is just the cost and the scope, right? If you have
to connect to a data center and pay a bunch of money, it's not going to work, but this seems to
be a potential chink in that armor that could let a new wave of explorers kind of come into the AI fold.
That's the plan. We wholeheartedly have a university strategy. It's a global strategy, you know, but it's one foot in front of the other right now. As our product continues to get validated and the technology becomes more accepted, eventually widely accepted, and starts to get adopted, we need universities and students and professors to be actively involved, not just with Akida, but in the world of AI.
And on that note, I want to get back to today's announcement. So what you've announced today
is the availability of the BrainChip processor, but not in a sort of industrial bulk sense, right?
This is something that real, regular people, like the people who are listening to this, could go get, right?
So tell us how that works.
Yeah, so let's be clear.
Today's an exciting day for BrainChip.
And I hinted at it earlier on in this conversation,
but what we're announcing today
is really the full commercialization of our AKD1000 product line.
Now we have a full stack of technology available for all different levels of people wanting to get access to AI.
We're extending today full production releases of our Akida mini PCIe development board. And getting that board into as many hands as possible is what we want to accomplish. So we've put out
an announcement today, kind of screaming to the world that, you know, come get it. Before we sell
out, come get it. You can take that Akida board, plug it in, and start using it. You'll download the drivers and so on and start working with our environment. We have our development platform, MetaTF, which is easily accessible, and you can start using the model zoo; there are 20, 25-plus different examples you can start working with. And that just highlights and emphasizes the ability to actually acquire silicon and acquire chips in volume as well. So now you have volume production of development boards available. You don't have to buy just one; you can buy as many as you want, or you can contact us to start purchasing chips. That complements our IP licensing, which we've had success with and will continue to have success with, and it complements our plug-and-play development systems that are Linux-based and Raspberry Pi-based as well. So we have a variety of different ways for people to get involved and start learning to use Akida and this neuromorphic architecture, from the entry level all the way up to the companies that know they're going to design it into their SoC and make it a functional part
of their product offering. This is a really, really strong announcement.
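For readers who pick up a board, the flow Rob outlines (plug in, install drivers, load a model-zoo network, run it locally) might look roughly like the sketch below. Every identifier in it (discover_boards, load_pretrained, map_to_device) is a hypothetical placeholder, not the real MetaTF API; BrainChip's documentation has the actual calls.

```python
# Hypothetical sketch only: none of these identifiers are the real MetaTF
# API. They mark where the actual driver/SDK calls would go.
import numpy as np

def discover_boards():
    """Placeholder: enumerate attached Akida PCIe boards via the driver."""
    return ["akida-pcie-0"]

def load_pretrained(name):
    """Placeholder: pull a network from the model zoo (the announcement
    mentions 20-plus ready-made examples)."""
    return {"name": name}

def map_to_device(model, board):
    """Placeholder: map the network onto the neuromorphic fabric so that
    inference (and incremental learning) runs on the board, not the cloud."""
    model["board"] = board
    return model

boards = discover_boards()
model = map_to_device(load_pretrained("keyword_spotting"), boards[0])

frame = np.zeros((1, 49, 10), dtype=np.uint8)    # dummy input window
print(model["name"], "mapped to", model["board"], "- input shape", frame.shape)
```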
And one of the things we're very proud of
is we are the first neuromorphic architecture
to be fully commercialized.
And this is, I'm gonna say it again,
it's the tip of the iceberg
because this leads us to,
okay, we've got AKD1000 in full commercialization. What's next? And that's where it gets really exciting. So, you know, we have plans for future technology as well, but we've got this today. And I've got to get my hands on one of these machines and try it out because, of course, you've been teasing me with it at AI Field Day and on the podcast here for quite a while.
So I just can't wait to try it.
Well, thank you so much for the update.
Thank you so much for joining us today to talk about this.
But now the time has come for us to, I don't know, test your neuromorphic computing.
We're here to put a couple of oranges and bananas in front of you. This part of the podcast is a fun tradition. We're going to ask you
three questions and a note to our listeners. Our guest has not been prepared for these questions.
We haven't warned him what the questions are going to be, and we'll see what he comes up with right here on the spot. Also, we're going to have a question coming in from
the outside from a previous guest, as well as one from me and one from Chris. And if you'd like to
join us, please do. We'll tell you how to do that in a minute. Chris, why don't you kick it off here? What's
your question for Rob? Yeah, so we talked a little bit about cars and maybe being able to kind of
lock your kids out of the car or allow yourself and your wife to drive the car. But I wonder
what you think the timeline is for true self-driving cars. When are we going to see those kind of in mass on the road out there?
Well, that's a good question. I know they're on it. And I know that, you know, certain companies
have started to grid cities all over the world, or outlined major metropolitan areas, to start implementing self-driving environments where eventually,
you know, in a metropolitan area, it will all be self-driving vehicles, just like we've seen in the science fiction movies from when we were young and we see today. You know, that's a good question. I would say in five to 10 years we'll have self-driving vehicles.
But it will be in some cities throughout the world, including at some point in the US.
All right. My question for you is a little different. I'm wondering if you can think of any jobs that will be completely eliminated by AI in the next five years? I don't think AI will eliminate jobs in general, because I think there's the human element to decision-making that, no matter how talented the engine is, you know, it doesn't address. So maybe on the front end of
taking some data and being able to get it to 97% accuracy, but there still has to be that human
element. People buy from people. And there's some emotion that's involved. And that emotion is something that AI,
although it might calculate it to a certain level
and be able to determine maybe Rob's really happy right now,
it's a buying time.
It doesn't trigger it all the way.
So when I look at it from a broad perspective,
I traditionally say there's a human element of emotion.
There's a human element of irrationality, and a lot that goes with it, that AI can't address. AI wouldn't predict today that I would be here on
a headset when I was supposed to be on a mic in my office because we shut our office down for a
couple of weeks late last night. It wouldn't have predicted that. So I deal with it, right? Those are the
types of
things that it's not going to be able to ever, I don't want to say ever, but that's a tough thing,
but I don't think it's going to be able to address that based off of humans being irrational and
emotional. Well, thank you for that. And now, as promised, we're going to use a question from a
previous guest. This question comes from Girard Kavelines, the founder of TechHouse570. Take it away, Girard. Hey, this is Girard Kavelines, the founder of TechHouse570.
I also work for Helium Systems as a managed services systems analyst. My other question is,
what is it that scares you about AI in today's industry? Like anything else with technology, I just want to see it used in a productive environment.
I mentioned before that I have, you know, teenagers and younger kids and, you know, some college kids and they were just home for the break. And I noticed when we were all together after dinner watching a movie,
we were, you know, I have a very active family and three dogs. So now, you know, I got a lot going on.
But we were all watching this movie. But every single one of us, including myself, was on our
device. And I was a little bummed out. And so I look at that in technology in general. Now, to bring it back a decade, I worked for Arm, a company that was driven to provide great technology, and still is, but it was driving the mobile device environment to get to where it is today.
So I was an enabler of what we're doing today, which is staring at our phones. So when I look at AI, I know, especially for when my children are adults, or my grandchildren, or whenever that happens, there's going to be decision-making that takes place where
instead of them going through the process that we all go through today, they're going to use
these devices, whatever they are, to cut out 70% of the effort of making that decision. And when I think about that, that does concern me. I think we need to keep the human mind sharp. And it's just like when, you know,
when we were kids, we were not allowed to use a calculator to do math in class; we had to do it the hard way. But nowadays, kids use calculators in class. So
I think that's a generational thing. And it's just part of accepting technology for what it is.
But then again, I think a lot of good is going to come out of it as well.
I imagine that in the future, kids might not be able to know the difference between an
apple and an orange, or maybe they won't know the difference between a Shiraz and a Merlot,
because BrainChip will be doing it for them. Exactly. So, you know, that won't be the case.
But, you know, there will be scenarios where, you know, you got to use the technology the right way.
Well, thank you so much for joining us today, Rob.
We do look forward to your question for a future guest if you have one.
And if our listeners, as I promised, if you want to join in the fun, you can.
Just send an email to host at utilizing-ai.com, and we'll record your question for a future guest.
So, Rob, I know that this is a
big day for you. Where can people connect with you and find out more about this announcement?
Go to shop.brainchipinc.com. You go to our website, there's going to be a button that says
buy now. You can get access to the boards, you can get access to the development systems,
or you can contact us at sales@brainchip.com and start a dialogue with me or my team.
And let us see what we can do to help you.
But thank you again, guys.
How about you, Chris?
Is there anything going on in your world?
Lots of things are going on.
All of it can be found at ChrisGrundemann.com, or chat with me at @ChrisGrundemann on Twitter, or find me on LinkedIn as well. And as for me, you can find me at @SFoskett
on most social media networks. And of course, I do encourage you to check out the upcoming AI
Field Day event where we will probably be diving deeper into this subject. So thank you for listening to the Utilizing AI podcast.
If you enjoyed this discussion,
please do give us a rating, review,
wherever you get your favorite podcasts.
It's available pretty much everywhere.
But of course, iTunes and YouTube
and places like that are popular.
This podcast is brought to you by gestaltit.com,
your home for IT coverage from across the
enterprise.
For show notes and more episodes, go to utilizing-ai.com or connect with us on Twitter at utilizing
underscore AI.
Thanks for listening, and we'll see you next time.