Utilizing Tech - Season 7: AI Data Infrastructure Presented by Solidigm - 2x19: Running AI Everywhere and In Everything with Intel

Episode Date: May 11, 2021

AI processing is appearing everywhere, running on just about any kind of infrastructure, from the cloud to the edge to end-user devices. Although we might think AI processing requires massive centralized resources, this is not necessarily the case. Deep learning training might need centralized resources, but the topic goes way beyond this, and it is likely that most production applications will use CPUs to process data in place. Simpler machine learning applications don't need specialized accelerators, and Intel has been building specialized hardware support into its processors for a decade. DL Boost on Xeon is competitive with discrete GPUs thanks to specialized instructions and optimized software libraries.

Three Questions
- How long will it take for a conversational AI to pass the Turing test and fool an average person?
- Is it possible to create a truly unbiased AI?
- How small can ML get? Will we have ML-powered household appliances? Toys? Disposable devices?

Guests and Hosts
- Eric Gardner, Director of AI Marketing at Intel. Connect with Eric on LinkedIn or on Twitter @DataEric.
- Chris Grundemann, Gigaom Analyst and Managing Director at Grundemann Technology Solutions. Connect with Chris on ChrisGrundemann.com or on Twitter at @ChrisGrundemann.
- Stephen Foskett, Publisher of Gestalt IT and Organizer of Tech Field Day. Find Stephen's writing at GestaltIT.com and on Twitter at @SFoskett.

Tags: @SFoskett, @ChrisGrundemann, @DataEric, @IntelBusiness

Transcript
Starting point is 00:00:00 Welcome to Utilizing AI, the podcast about enterprise applications for machine learning, deep learning, and other artificial intelligence topics. Each episode brings experts in enterprise infrastructure together to discuss applications of AI in today's data center. Today, we're meeting with Eric Gardner from Intel, who's going to talk about AI everywhere, running on everything. But first let's meet Eric. Hi everyone. My name is Eric Gardner.
Starting point is 00:00:31 I'm the director of AI marketing at Intel. Really passionate about helping enrich more lives, people across the planet using AI and helping the benevolent machines to take over more of the tedious work and free us up to do more of the things that we love. And I'm Chris Grundemann, your co-host today. I am a consultant, content creator, coach, and mentor. You can learn more on chrisgrundemann.com. And as always, I'm Stephen Foskett, organizer of Tech Field Day and publisher of Dishdolled IT.
Starting point is 00:01:01 You can find me on most social media networks at S Foskett. So Eric, we recently were privileged to be part of the Intel Ice Lake announcement. We previously, of course, have seen Intel at our AI field day and other field day events. And throughout all of these, the thing that occurs to me is that maybe people kind of have AI and machine learning specifically all wrong. It seems if you mention machine learning, everybody immediately thinks basically an ML mainframe, like everything centralized with a lot of hardware and everything. And maybe that's valuable. And certainly that's something that's going to be used in some parts of the industry. But in this podcast, one thing that we've learned is that machine learning is being deployed everywhere and running on everything.
Starting point is 00:01:46 I mean, we've talked about doing ML processing on little remote CPUs, even mobile devices. We've talked a lot about industrial IoT and computing at the edge. And I think that this is the message that I got from Intel during the Ice Lake launch as well. Is that the Intel perspective? Absolutely. We kind of like to use the analogy that AI is a little bit like Wi-Fi. You know, I remember back in the day,
Starting point is 00:02:11 it was kind of, you know, really cool if your device or your computer had Wi-Fi and then it just became, you know, a table stake and you don't have Wi-Fi. It's like, how come you don't have Wi-Fi in there? Why is it not Wi-Fi enabled? And we see the same thing happening with AI is that it's going from sort of more the labs and the research institutions and the very large cloud providers to more broad application across every different kind of app from the data center
Starting point is 00:02:36 to the edge. We see really every application being able to benefit from the power that machine and deep learning bring to it, bring greater intelligence and to extract more insight from the data that's crossing every application. I think it's on a sense. And also you touched on a couple of things there, right? Which is the intersection of artificial intelligence and machine learning and deep learning. And I think just like Steven pointed out that a lot of people, when they hear AI, they jump directly to massive racks of GPUs in a big data center somewhere. I think these days, it seems like a lot of people also jump directly to some kind of deep learning neural network type application when they hear of artificial intelligence. But it's also more broad than that,
Starting point is 00:03:24 right? I mean, there's multiple applications of AI. Maybe you can talk a little bit about some of the ways that Intel looks at AI as distinct from the individual types like machine learning or deep learning. Yeah, because you guys know many of the audience, I'm sure AI has been around for 30 plus years and it's gone through a few different cycles of AI winters and AI revolutions. And recently deep learning has a few different cycles of AI winters and AI revolutions. And recently, deep learning has really come onto the scene, and largely as a result of GPUs and the highly parallel processing that they can provide to help train these models and run these really complex deep learning training jobs that enable things like computer vision and speech detection,
Starting point is 00:04:02 natural language, et cetera, has really led to a big renaissance lately of AI. But AI is really broad. Certainly deep learning is a very exciting space and deep learning training requires a lot of highly parallel compute. But when it comes to actually deploying those deep learning models, what we're seeing is that most customers
Starting point is 00:04:21 and most applications actually don't require specialized hardware, especially as the performance of the CPU and the built-in acceleration continues to advance. They're able to do that on the processing where their applications already run, which is a lot more efficient and streamlined than having to insert really power-hungry, costly, maybe lower reliability products to help accelerate that. And even beyond deep learning, you know, deep learning is great, but it's not great for every problem. You know, deep learning requires a lot of data.
Starting point is 00:04:54 And in many cases, it's hard to get that data. It's hard to, you know, label all that data to be able to train it. And maybe you don't need the level of accuracy that you do with deep learning. So for machine learning, you know, things as simple as regression or, you know, support vector machines or many other Bayesian and other techniques are great for most applications. And, you know, in some cases, deep learning is overkill. And the great thing is that you don't need specialized acceleration either for machine learning. So it's a really broad range and broad range of usages. And we like to advise customers that really pick the type of AI that you need for your solution.
Starting point is 00:05:32 Don't just kind of go with the flow and go with the bandwagon that said, hey, it's got to always be deep learning. And on that note, of course, when we're looking at edge data center applications, I don't mean edge is kind of a funny term. We've talked about that recently as well. When we look at sort of edge data center, most of those systems don't indeed have specialized distinct processing units. But many of those processors do have hardware capabilities that can be used to accelerate machine learning tasks, special instructions for doing matrix multiplication, for example. And I think that that's something that Intel is putting into the CPUs now. Absolutely, yeah. For the last little over half a decade, we've been building in more acceleration into the CPU, both from a hardware standpoint, you know, with things like Intel's deep learning boost with, you know, vector neural network instructions, you know, being able to go down to a lower numerical precision, like with Int8 with great performance, really helps speed things up. But also, almost as, or if not more importantly, software optimizations. Most of the code and the tools and the frameworks and libraries for machine learning and deep learning
Starting point is 00:06:55 were really not optimized to take advantage of all the parallelism, the vectorization, the memory bandwidth, all the capabilities of a modern CPU. And so we've been doing extensive work to optimize all those software libraries. I mean, with the Ice Lake launch, we just announced, you know, if you're using sort of standard off-the-shelf scikit-learn, you could be leaving up to 100x performance in the table, you know, by upgrading to the Intel optimized version of that.
Starting point is 00:07:20 Similar for TensorFlow, you know, up to 10x performance improvement using the Intel optimized version. And we're continuing to try to get those integrated into the main distribution. But, you know, definitely, you know, a lot of performance advancements with the CPU and most of our customers, you know, they don't want to have to add something specialized in there. You know, we work with Burger King, we work with GE Healthcare, they're like, you know, I want to be able to use the compute already used to run the applications I have at the edge in an MRI machine or, you know, in a fast food restaurant without having to add, you know, complex and costly new acceleration to the mix. Yeah, absolutely. And we saw this recently with some of the Tech Field Day
Starting point is 00:08:01 presentations by Intel where you showed the performance, especially of DL Boost. So if folks haven't heard of that, that's definitely worth a quick Google if you're excited about machine learning and deep learning applications. And of course, you're not saying that GPU is wholly unnecessary or specialized hardware is wholly unnecessary.
Starting point is 00:08:20 You're just saying that machine learning applications are gonna be running all over the place and maybe you don't need specialized hardware everywhere. Is that right? Absolutely. Yeah. We like to say there's no one size fits all approach. In fact, we're building our own GPUs as many of you hopefully know as well. We have CPUs, we have GPUs, we have FPGAs, and we have dedicated AI accelerators and each one has its own use case. And, you know, we're building all those things not because we want to have a big complex portfolio of hardware products, but because the customers are demanding,
Starting point is 00:08:51 you know, specialized solutions for their own needs, you know, for most customers and for most AI, that's the CPU. But if you're talking about having, you know, running a ton of deep learning training, maybe you've got some HPC applications, real-time graphics, you know, et cetera, GPUs are great for that. FPGAs are great for pathfinding, for advanced, you know, things in AI. Our partner Microsoft is doing a lot of really unique stuff at really low bit precisions and, you know, unique workload accelerations with them. And then, you know, dedicated AI accelerators, you know, 100% focused, dedicated just to AI, you know, opt for the best possible performance per TCO, if that's what you care about, and, you know, one thing you mentioned earlier was about the libraries.
Starting point is 00:09:46 And I think it's something, at least within the Tech Field Day community, we've talked about a lot, is there's almost this slightly unknown or kind of, you know, maybe unintentionally kept secret from Intel of all the software you guys put out. And so I wonder if you could talk a little bit about, you know, some of those things like OpenVINO, obviously, I think is directly related here. The zoos, right, the analytic zoo, and I think there's a couple others of kind of model repositories that are really interesting. And then also, you know, to that point of kind of all the different hardware types and versions you have out there, one API becomes really interesting as I'm trying to kind of homogenize this heterogeneous environment. Yeah, absolutely. Maybe I'll start with one API since you mentioned it. I mean, one API is you mentioned it. I mean, one API is an open standard. It's open, freely available to everyone. And we're really trying to standardize to not have any kind of proprietary, you know, base level libraries that anyone can program to. And we're adopting that across our portfolio. So, you know, it'd be more seamless
Starting point is 00:10:40 to go from CPU to GPU to FPGA to ASIC, no matter what you need. Easier for programmers to sort of program between those things. In terms of AI software, yeah, there's a ton going on. It's hard to get the word out about all of it. I'm the marketing guy, so I know. But if you are using a popular machine or deep learning library or tool or framework like TensorFlow or PyTorch or Scikit-learn or Pandas or XGBoost, there is an Intel optimized version of it.
Starting point is 00:11:08 And you very likely have an Intel CPU in your data center or maybe your laptop. I urge you guys to definitely go download those and check those out. We package them in the Intel OneAPI toolkit. You can also just do a simple pip install, get the binary, pretty easy to get access to those. Beyond that, we also have a few tools. You mentioned one of them in Analytics Zoo. Analytics
Starting point is 00:11:30 Zoo is great if you have a big data or a new Spark cluster, and you're looking to add some AI capabilities to that and still maintain the same data pipeline, the same infrastructure that you've got. Analytics Zoo has been helping a ton of customers, Burger King, SK Telecom, Office Depot, the list goes on to really just upgrade to AI from where they're at in big data infrastructure. And then maybe the best of all, I'd say, is Open Vino, which is a fun name, but it's a great tool to help deploy deep learning inference on any hardware target anywhere. It takes a pre-trained model and it optimizes it to make sure
Starting point is 00:12:09 you get the absolute best performance out of it. Whether you're running a CPU in the data center, a CPU at the edge, a vision processing unit, whatever, no matter where you are. And some of our customers thought that they needed a GPU to go run their inference and through using OpenVINO, we said, some of our customers, you know, thought that they needed a GPU to go run their inference and use it through using OpenVINO.
Starting point is 00:12:27 Oh, boy, actually, like we were able to run our inference on our existing CPU right alongside our application and use less resources and deliver better performance than we even needed. So it's a really, really great tool. And there's a ton of models built in as well that, you know, you don't need to be a data scientist. If you're an app developer, you can take models kind of off the shelf and build them into your application. So that continues to grow and expand. So definitely urge folks to check that one out. Yeah. It seems like to sum up, it seems like there's a lot of, I mean, AI is about a big, it's a big topic. There's a lot of different things, a lot of different ways that it's touching us, ways that it's being used in applications. And also, I guess a second summary, and this is something that we've really heard loud and clear at our Tech Field Day and AI Field Day
Starting point is 00:13:16 events. Software is the key to unlocking AI. And, you know, without having these libraries, without having optimized versions of, you mentioned pandas, for example, without having that, people are not going to be able to take advantage of the features of this hardware, because, you know, you just can't expect people to, you know, hand code things for a specific piece of hardware if it's being run everywhere. And that's been something that's come up many times on the podcast here, especially this second season, as we've had more and more companies that are developing sort of AI anywhere, AI everywhere approaches. But that being said, certainly there are use cases for certain hardware types. I mean, what is the, what are the, in your mind, the classic implementation use cases that you're seeing today for AI applications?
Starting point is 00:14:14 Yeah, good question. I mean, certainly it is so much about the software and, you know, that's one of the big challenges. I think a lot of the specialized AI accelerators is getting to software scale and support for a number of models and that robustness kind of thing and mentioned we've been working on that for a number of years on the cpu and and you know other accelerators um yeah i would say the place to start really is the cpu i mean if you're using the cpus you've already got i mean probably in your data center, in your laptop, et cetera. Use those, make sure you're using the optimized software libraries and see what your performance is. I mean, for the vast majority of machine learning and the vast majority of deep learning inference, and even some deep learning training, especially as we're talking about transfer
Starting point is 00:14:58 learning and maybe not as deep and complex models, the CPU is great. Or if you have CPU resources available, you can even train those very complex deep, the CPU is great. Or if you have CPU resources available, you can even train those very complex deep learning models with great performance. Once you exceed a certain amount of particularly deep learning training, then we start to talk about accelerators. Like with GPUs, if you have a number
Starting point is 00:15:19 of other highly parallel workloads you're running, that could be a good option. If you're doing just a metric ton of deep learning training, then even looking beyond that at dedicated AI accelerators like Intel's Hibana could be an option for you. Similarly, for low power kind of applications at the edge, if you have a very, very low power requirement and you're doing deep learning inference, that's an opportunity as well to specialize and go from CPU to a dedicated AI chip, perhaps like Intel's Movidius VPU chip, which is being used in many different applications, you know, from drones to
Starting point is 00:15:56 John Deere tractor equipment, et cetera. And one piece in there that you mentioned that I think I want to tease out a little bit further is that, you much of this i mean obviously you know we're talking about ai everywhere a lot of this is about inference and and using models at the edge to make decisions but it feels like and you mentioned there that you know some of these with dl boost and some of these cbus can be used for training as well and i wonder if you're aware of, you know, use cases where folks are doing kind of training at the edge, so to speak, right, where, you know, we want to kind of continuously improve these models while they're in use, and how much of that is actually going on. It seems like an area that should be growing, but I honestly don't know. Yeah, I mean, we kind of talk about AI
Starting point is 00:16:41 in some senses as like, it's very focused on sort of just the modeling part of it. And it's really a broader workflow. It's from, you know, finding what your problem is to getting the data to wrangling the data and making it work in your model to figuring out the model, training it and ultimately deploying it in production in that scale. But even then, once you've gotten through that whole process, you know, no model will stay static. You know, things are constantly changing like we've seen with whole process, you know, no model will stay static. You know, things are constantly changing.
Starting point is 00:17:05 Like we've seen with the pandemic, you know, and retailers models for inventory, you know, got completely upended. And so it needs to be continually evolved and adapted, you know, doing some level of continuous retraining. And for many of the models, I think Andrew Ang was the one who famously said this, right. It's like, I train a lot of models on my laptop. You know, it doesn't require huge big iron to go and do that. I think especially as the state of the art of training advances and we'll have a lot more models that are trained
Starting point is 00:17:35 kind of off the shelf and that you can customize through transfer learning and train sort of the last few layers to customize it for your needs. I think more of the training is going to become more distributed and not so focused in sort of the hyperscale data center. That's really interesting. And again, it ties into another piece of this idea of we're really going to put AI everywhere. If AI is going to be in everything, it needs to be fairly easy and fairly simple to do. And maybe even beyond developers, right? I think there's probably applications for operational technologists and maybe even just, you know, IT operations folks
Starting point is 00:18:10 and people in other fields, maybe even doctors and nurses, right? On the fly, being able to, you know, use a system that does some of this without necessarily having to send it all the way back to developers and moving forward, right? I mean, at least to me,
Starting point is 00:18:24 that seems like where, for it to be everywhere, it needs to be almost with everyone as well. Are you seeing that? Absolutely. Yeah, everywhere and everyone. I mean, like I said, it really has started in sort of the academic space and the research space and the advanced cloud provider space. And it's starting to move now into more enterprises, into more places of academia, but there's so much farther that it can go, right? And even beyond developers, like you said, we're trying to reach more developers, go beyond the data science community, like with things like OpenVINO, making it easier to just insert that without having to do all the model training and stuff into whichever application. But even beyond that, how can we make it even more user-friendly? And I mean, AI may even help to make it more user-friendly
Starting point is 00:19:07 and to help us reach audiences beyond developers. So I think that's very exciting everywhere and everyone. Maybe not everywhere. I mean, I don't know if we want AI in changing rooms and places like that, but certainly in more places, I think is a good thing. Well, I don't want you skipping forward to our three questions segment already. So let's get to one other topic as well. We actually last week talked about autonomous driving. And I know
Starting point is 00:19:38 that Intel is deeply involved in that space as well. You know, I assume that that requires a different flavor of hardware as well. Yeah, the requirements to ingest and process all of the data coming in and you want as much data as you could possibly gather to make the best decisions when you're driving a car autonomously, from LiDAR to visual to GPS and know, you know, GPS and everything,
Starting point is 00:20:06 it requires a significant amount of processing. And so, you know, with our Intel's Mobileye solution, you know, they have some custom hardware, you know, but really the most of the secret sauce I would say is, you know, the model development and the model building and testing it to make sure that, you know, you have reduced or eliminated, if possible, as many black swan type events as possible so that you don't have, you know, unfortunate incidents as we've seen already with some of the autonomous vehicles that are out on the roads that weren't anticipated and weren't part of the model. So, but I'm personally very excited because my kids are three and seven. And so I figure I've got, you know, what is it, you know, about nine years to have autonomous
Starting point is 00:20:48 vehicles, you know, take over so I don't have to teach at least my daughter how to drive. Yeah, my daughter actually said the same thing. She said, I'll never have to learn to drive because the cars will drive me wherever I want to go. I'll just sit in and tell it where I want to go. I'm not sure that we're going to get there anytime soon, generally speaking, but I certainly do think that we're going to have very, very strong driver assistance
Starting point is 00:21:10 in every car going forward. So I think that a lot of people may not be familiar with the Mobileye product, but I think that that's an interesting area as well, that Intel is right there deploying a lot of that. And of course, there's a long lead time on a lot of these applications as well. So I think some people might say, where is the, you know, where's the results? Where's the, you know, where's myself driving? Where's my flying car? But of course, it takes a long time to build these things. Are we
Starting point is 00:21:40 going to start seeing these hitting the road in the next few years? I believe so. You know, there's certainly phases to it. Phase 1, 2, 3, 4, 5, I think is the auto standard for AI. We're already seeing it on the road today, you know, which I think is a point of discussion. You know, is it better than, you know, flawed, you know, tired or impaired human driving already? Is it quicker to react? Things like that.
Starting point is 00:22:07 I think we still have a ways to go. I think for us to have truly a really wide fleet of autonomous vehicles out on the roads, I think we're talking a good amount of time. Will we start to see more and more capable machines on the road in the next few years? Absolutely. But don't sell your car just yet. That makes sense. You know, one thing that has come up, I was actually attending an MIT-led conference a couple of weeks ago, and one of the speakers talked about, he answered a question of, you know, should I build an AI-first organization, an AI-first company? And he said, no, that's a
Starting point is 00:22:44 terrible idea. AI is a tool like any other, and you wouldn't create a AI-first organization, an AI-first company? And he said, no, that's a terrible idea. AI is a tool like any other. And you wouldn't create a Python-first company. And you wouldn't go out and try to build a Python project just like you shouldn't go out and try to build an AI project. It's what problems are you trying to solve and then find the right tools. And I wonder if you've seen that. There seems to be a little bit of AI washing in the industry these days.
Starting point is 00:23:03 And so I wonder what your or Intel's take is on how to leverage, you know, not just the AI functionality, but kind of the broad portfolio that's enabled there. Yeah, absolutely. I mean, there's certainly a lot of hype right now about AI. And I agree with you. I think that AI, like I described before, it's kind of the Wi-Fi example. It's a great tool. It's a great feature.
Starting point is 00:23:22 It helps to solve problems and challenges that you have. And, you know, you certainly want to think about as you're, you know, building up a company or starting a company, you know, being very data-driven and making sure that you're able to take advantage of all the data that's flowing through your company and extract the valuable insights from it. And AI is a great tool for that and for making, you know, a lot of your applications, almost all of your applications may be smarter. But to make it an AI-focused company, I think, is a little short-sighted or is focusing on sort of on what's hot right today as opposed to, hey, what problem am I solving for my
Starting point is 00:24:00 customers and how can AI help me do that is more of the approach that I think would be more successful. Yeah. I mean, because ultimately as, as you're saying, AI is not a product. AI is a feature or a design choice that a company is making and it has to serve some need for a product. So interesting. Absolutely. Well, you know, just to give you a chance to sort of sum up then, you know, what's your pitch? I mean, how do you feel companies should kind of come away from this when they're looking at deploying AI and machine learning applications and so on? Do they need specialized hardware? I'd say this is a cop-out, but it depends. I'd say for most companies, like I said, start with what you've got. Start with a CPU. Make sure you're using all the
Starting point is 00:24:52 optimized software that is available to you. And you'd be really surprised all that it can do, starting with machine learning, a lot of deep learning training as well, and deploying deep learning inference for many different applications from cloud to the data center edge to the actual edge and devices. And once you determine what your requirements are and once you've tested with what you've got, look at the options that are available to you, from GPUs to FPGAs to dedicated
Starting point is 00:25:24 100% AI specialized processors. There's a lot of choice out there and there's no one size fits all approach. All right. Well, I think that it's time to shift gears here quickly into the lightning round, the fun part of our program, where we ask you three questions that you were not prepared to hear and get your answer off the cuff. Now, warning, Eric already said that he listened to the podcast before, so maybe he does know some of our three questions, but that's not the point.
Starting point is 00:25:52 So let's give it a shot. Let's give it a shot. So first of all, how long will it take for us to have a verbal conversational AI that passes the Turing test and fools an average person into thinking they're talking to just another person? That's a good question. Again, I'll say I think it depends. If it's a pretty basic conversation, if I'm trying to track a package or trying to check my order status for something, I think it can pass the Turing test. If it's a much more advanced conversation,
Starting point is 00:26:29 I think it's gonna be a few more years until we get to that. And it'll depend on what space we're in to have a machine that we can really believe is a human on the other end of things. So I think to some extent we're already there, but there's a ways to go to really increase the breadth of machines that can pass the Turing test. Now, I know that Intel is a very progressive thinking company. And I know that in the past, when we've spoken with Intel for this very podcast, the conversation has turned toward bias and ethics and data modeling and so on.
Starting point is 00:27:03 So I'll throw this to you. Is it possible to create an unbiased artificial intelligence, or do we always have to face biases? That's a good question. I'm an optimist, so I think anything is possible. I certainly think that we have big challenges in the ethics and bias space, particularly in AI, and that there's not been enough attention focused on that. And we really need to focus more of our attention and the community's attention on ethics and bias and trying as best as possible to remove that from our models. I think it's a really important question for businesses as well to think about and not just sort of an afterthought of the project. I think it's a really important question for businesses as well to think about and not just sort of an afterthought of the project. I think it's something that needs to be thought about before you even start to collect data or build your models is how are we going to eliminate or mitigate as much as possible unintended ethics, bias, risks, things like that. So we have a ways to go. I'm confident we can get there,
Starting point is 00:28:07 but we need to do a lot more in that space. And finally, you did mention that Intel has products, specialized products for lower end devices. You mentioned drones and appliances. So let's take that question then. How small can machine learning get? Will we have machine learning powered household appliances, toys? Will we have disposable machine learning gadgets? Yes, yes, and yes. We already have a sub one watt chip in the VPU, the vision processing unit at Intel. And the possibilities with already less than one watt are pretty astounding. And I believe that we'll only continue to shrink that
Starting point is 00:28:53 and make more things possible, and as well as shrink the cost of deploying AI into everything. You could even use tiny little batteries to power your AI powered toy or device at some point in the future. So I think it will go to be more and more places. And that's exciting and in some ways frightening.
Starting point is 00:29:16 And we have to really think about the ethical and the implications of that and making sure that we're designing the technology in a way that is private and good for humanity in general. Well, thank you very much. And I'm glad that you brought that up as well. We hadn't turned to that in the conversation. So it's good that we do get to the topic of bias in AI at the end here. So thank you so much for joining us today. Where can people connect with
Starting point is 00:29:46 you, Eric, and follow your thoughts on enterprise AI and other topics? Yeah, the best place to connect with me is on LinkedIn, Eric Gardner at Intel. You can also follow me on Twitter at Data Eric. Great. Chris, what have you been up to lately? Yeah, lots of things. I actually have a bunch of reports that should be coming out from GigaOM throughout the course of this month. I love having conversations on LinkedIn. You can also follow me on Twitter at Chris Grundemann.
Starting point is 00:30:12 And my website, chriscrundemann.com is the hub of all those things. Excellent. Well, thank you so much. And as for me, I have been busy at work planning our AI Field Day event. If you go to techfielday.com and click on the AI Field Day icon, you'll see that the event is actually coming up here in just a couple of weeks. We have a lot of great companies presenting, including one that may sound awful familiar after you're talking to this or listening to this podcast.
Starting point is 00:30:43 And a lot of great delegates, including one that may seem awful familiar after you're listening to this podcast. And a lot of great delegates, including one that may seem awful familiar after you're listening to this podcast. And I can't wait to present that May 26th through 28th. It'll be good to have you all tune in and join us for that. So thank you very much for listening to the Utilizing AI podcast. If you enjoyed this, please do head over to iTunes,
Starting point is 00:31:04 click subscribe, give us a rating, give us a review. That would really help. And please do share this episode and the podcast generally with your friends. That's how word of mouth spreads and it really does help us. This podcast is brought to you by gestaltit.com, your home for IT coverage from across the enterprise. For show notes and more episodes, go to utilizing-ai.com or find us on Twitter at utilizing underscore AI. Thanks, and we'll see you next week.
