Utilizing Tech - Season 7: AI Data Infrastructure Presented by Solidigm - 17: Building a Hybrid Cloud Platform To Support AI Projects with Red Hat @OpenShift

Episode Date: December 15, 2020

In this episode, we ask Red Hat about the platform requirements for AI applications in production. What makes AI applications special, and how does this change the infrastructure required to support them? The demand for flexibility, scalability, and distribution seems to match the capabilities of a hybrid cloud, and this is emerging as the preferred model for AI infrastructure. Red Hat is supporting the container-centric hybrid cloud with OpenShift, and containers are also critical to AI workloads. Red Hat has customers in the healthcare, manufacturing, and financial industries deploying ML workloads in production right now.

Episode Hosts and Guests

Abhinav Joshi, Senior Manager, Product Marketing, OpenShift Business Unit, Red Hat. Find Abhinav on Twitter at @Abhinav_Joshi.
Tushar Katarki, Senior Manager, Product Management, OpenShift Business Unit, Red Hat. Find Tushar on Twitter at @TKatarki.
Stephen Foskett, Publisher of Gestalt IT and Organizer of Tech Field Day. Find Stephen's writing at GestaltIT.com and on Twitter at @SFoskett.
Chris Grundemann, Gigaom Analyst and VP of Client Success at Myriad360. Connect with Chris at ChrisGrundemann.com and on Twitter at @ChrisGrundemann.

Date: 12/15/2020
Tags: @SFoskett, @ChrisGrundemann, @Abhinav_Joshi, @TKatarki, @RedHat, @OpenShift

Transcript
Starting point is 00:00:00 Welcome to Utilizing AI, a podcast about enterprise applications for machine learning, deep learning, and other artificial intelligence topics. Each episode brings experts in enterprise infrastructure together to discuss applications of AI in today's data center. Today, we're discussing a hybrid cloud foundation for AI applications with our friends from Red Hat. First, let's meet our guests. From Red Hat, we've got Tushar and Abhinav. Why don't you go ahead and say something about yourselves? Hi, all, and hi, Stephen, and hi, Chris. I'm Tushar Katarki. I am a senior manager for product management for OpenShift, which is our container and hybrid cloud platform.
Starting point is 00:00:47 And I'm the lead for AI and machine learning as a workload on OpenShift, and I work with customers and partners to get them up and running on OpenShift. We'll talk a little bit more about it later on. That's my background. I'm based out of the Boston area. Hey guys, this is Abhinav Joshi. I'm a Senior Manager on the same team as Tushar. I'm focused mainly on the product and marketing aspects of OpenShift, with a focus on workloads;
Starting point is 00:01:19 data analytics and AIM are the key workloads that I focus on. I'm based out of the Raleigh area. And you can find me on LinkedIn. Just type in my first name and the last name, and I'm sure you'll be able to find me. Great. Yeah, we'll link to that in the show notes as well. And so I'd like to introduce as well as a co-host on this episode, somebody who is a little bit familiar to those of you who've been listening to our podcast for the last
Starting point is 00:01:43 few months, Chris Grundemann. Chris is going to join me occasionally as a co-host to help me have these discussions. Why don't you say a little bit about yourself, Chris? Yeah, thanks, Stephen. I'm excited to be here. As you said, my name is Chris Grundemann. Folks can find me online at Chris Grundemann on Twitter and look around at all the different hats I wear in addition to the new one of co-host. Excellent. Yeah, it's great to have you. So on this podcast, we try to focus on practical applications for AI. And I think that this is one of the things that separates us, you know, topically from maybe some of the other podcasts where they sort of geek out about the models and the, you know, sort of academic aspects of it.
Starting point is 00:02:25 You know, we're practical people. We're looking at enterprise applications. And one of the most important elements of enterprise applications is basically the infrastructure and the sort of operational framework that supports these applications. Now, as you heard, Red Hat has OpenShift, which is essentially a hybrid cloud platform. I'm wondering, Abhinav, if you can kind of talk to us a little bit about what are the aspects of artificial intelligence that require special infrastructure? Or in what ways does artificial intelligence applications differ from more conventional enterprise applications when it comes to the sort of the supporting platforms? Yep, that's a very good question, Stephen. So if we look at the AI ML, you know, the lifecycle, right? So first you have to gather and prepare data, then your
Starting point is 00:03:17 data scientist is going to build the models. And then after that, data scientists have to work with the app dev folks to be able to to integrate the models into the app dev processes and then at the end of the day those models have to be built out into like rolled out into the production as part of the intelligent software application and it's a very like a cyclic process because you have to keep your models up to date all the time in the production so that they continue to make the right predictions. Now, what all this means is like, there are a lot of different personas involved, the data scientists, data engineers, DevOps engineers, and
Starting point is 00:03:53 so on. So all these personas use a bunch of tools, right, that they need to get their job done. And also, like you need the tools to automate the whole process, right, the whole kind of lifecycle, like being able to build out the DevOps for the machine learning operations. And then a lot of these activities are also very computationally intensive that can use a lot of compute power, right?
Starting point is 00:04:19 And also, so it's all about data, right? Because you have to get the meaningful insights from the data, and you need a lot of data to be able to train the model and so on. So if you look at it from the infrastructure perspective, let's say a customer may do the data gathering and preparation say at the edge, but then the rest of the pieces of the puzzle, they
Starting point is 00:04:43 may do it in the data center and or in the cloud. So what they really need is a hybrid cloud platform. So that way they have a consistent way to standardize and run all these different tools that the persona may need. And they may need a self-service access. So a hybrid cloud platform should have the self-service capabilities to be able to host
Starting point is 00:05:06 and be able to lifecycle manage all the software tool. And then you also need to manage your data, like say with things like Data Lake, to be able to ingest and store all the data. And then you have to pre-process the data and so on. So you need a bunch of kind of tools in that. Then you also need the data for your app dev the lifecycle as well. So yeah, it's going to be like a lot of data management as well as like a platform that can provide you a consistent way to host all these tools and lifecycle manage them in a consistent way throughout the AI ML lifecycle. That's really interesting. I mean,
Starting point is 00:05:42 there's definitely a lot to it sounds like AI, product development. It's a little bit different than your typical software development. Taking a step back maybe, I was really interested in how prevalent this really is. And I did notice in the recent 2020 version of the Red Hat Global Customer Tech Outlook Report that I think it was 30% of your respondents said they were planning on using AI ML over the next 12 months. That seems really high. It came out as I think the top emerging technology workload. Do you have any insight into what those projects are? What are the use cases for AI ML that are being developed so hotly right now? Yeah, sure. I can start and maybe Tushar can add in as well. So what we're seeing is, right, in terms of the use cases, there is a lot of traction in the different industry verticals, be it financial services, manufacturing, the automotive industry, be healthcare, and so on. what the customers are looking to do is be able to provide a better experience to their end customers be able to increase the revenue save cost be able to automate the business processes and be able to build out the digital services that can make them more competitive and especially now
Starting point is 00:06:58 in the tough times that we are in the customers are what we're seeing is that they are like ramping up the investment in the digital transformation, right? And AI is a key part of it. And maybe Tushar can expand on some of the use cases that we are seeing across the different verticals. Yeah, I mean, think about financial services, banks. Think about a way to quickly assess your credit risk so that they can do loan application processing rather quickly. They are augmenting. I mean, this is not as if it's new, but they are augmenting. I mean, this is a process that they've had. It has been rules-based, right? A lot of rules-based. And so they're augmenting the rules-based systems with AI
Starting point is 00:07:49 because AI provides insights, as you know, with large amounts of data that maybe just simple rules doesn't cover or rules rather become sometimes harmlessness of stuff. So unpacking that with machine learning makes a lot of sense. If you think about the health care industry, and we are in the middle of a global pandemic, and if you look at how customers and especially, I mean, there are fundamental research being done
Starting point is 00:08:24 to do contact tracing and being able to use that data to do predictions on where there could be a flare-up, for example, and what proactive measures you can take. It's being used to do vaccine research, AI, and that is a well-established way to approach this is immunology and the use of AI and machine learning techniques in immunology and therefore in vaccine research is happening. If you think about other fields such as energy, being able to find new sources of energy and being able to map, a way to find that using geospatial mapping is another area where we are using, machine learning techniques are being used. Other interesting things like at airports, although with the pandemic, the airports are not that crowded,
Starting point is 00:09:33 but if you go back in time before the pandemic, how do you reduce congestion? How do you do basically logistics of an airport, logistics of your supply chain, that's another area. Robotics obviously is a huge area where machine learning and self-driving cars, there's self-driving trucks. So there are lots of applications that we are seeing obviously in this space of AI and machine learning techniques. Well, it seems to me that one of the things that kind of ties a lot of these applications together
Starting point is 00:10:12 is basically the same thing that demands the creation of a sort of next generation hybrid cloud type infrastructure. So essentially we've talked about scalability, we've talked about, well, massive scale, with data sets, we've talked about flexibility and accessibility outside the data center. It seems like this is really an ideal for that, I guess what's called the hybrid cloud infrastructure. I mean, that's kind of a funny term, cause it's like, wait, what is the hybrid cloud?, right? I mean, that's kind of a funny term because it's like, wait, what is a hybrid cloud?
Starting point is 00:10:45 But, you know, whatever that is, it seems like this is the application for it. Am I wrong? Yeah, I mean, hybrid cloud for us, obviously, means a lot of things. And I think it's a good thing to define it. You know, for, I mean, you know, we think about hybrid cloud as public clouds, data centers,
Starting point is 00:11:09 private clouds, and all the way to the edge, right? So hybrid cloud definitely means a lot of different things. And, you know, there are several places where, you know where hybrid cloud makes sense when you think about it. Think about all the IoT devices and devices at the point of presence and they producing data. And either you need to process that data locally and produce reserves right away away or you ingest it and use that for the basis of your
Starting point is 00:11:50 decision making is one way to think about it. There is your data center, either your cloud, it can be a private cloud, it could be a public cloud, you could have more than one public cloud. And so, you know, for example, a good one is, I remember I told you about that example of an airport where you can imagine that there are lots of, you know, cameras at the airports, which can look at congestion happening both on the tarmac as well as inside the airport. And what you're doing is effectively your cameras are processing video, capturing video, which is then being, then you want to analyze it. Is it a long line, short line, are there bags, this, that and everything else? So that's a great example of where you could actually have data that is coming in. You need to process right away at the point of presence. But you would have trained that model using, you know, this is not rocket science. This is image processing.
Starting point is 00:12:56 So you might have trained that with public data on a public cloud because you have access to elastic compute on public cloud, much more so than a private setting. So that's a great example of how a hybrid cloud, that's what we mean by hybrid. It doesn't have to be one continuum. A classic example, I mean, everybody thinks about hybridizing this burst, you know, burst computing,
Starting point is 00:13:21 and that's definitely one use case, but that's not in itself. You know, there are different scenarios in play here when it comes to hybrid. And something I wanted to add was that say, like say if I am the infrastructure guy, right? And if I have to like look at all the pieces of the AI architecture, right?
Starting point is 00:13:40 So for the example that I short highlighted, so parts of the architecture are going to be on the edge, say for the data acquisition and then streaming. And then part of the architecture is going to be in the data center. And then the data scientists may say, OK, I want to do the model build out in the cloud. So the architecture is going to span across all the three
Starting point is 00:14:04 footprints, like a public cloud on-prem and at the edge so that could and if they don't put in like a lot of thought into it like upfront it could add a lot of complexity because each of the cloud provider has their own way of doing things on like the tools and processes that you have to run then you have your own on-prem historical like all the legacy stuff and the processes you build out. And then now you have this new edge locations or the IoT locations, right? And you need to have a footprint over there as well, the connectivity between all these sites. So what we are seeing is that the customers have to be given a thought, right? Like as to how I build out
Starting point is 00:14:43 the whole thing and and how I can simplify my day-to-day operation so that way I don't have to learn in a lot of kind of different processes and so on say to manage all these different silos and that's where the value of hybrid cloud is going to come into play like if I'm able to consistently build out the environment have the same set of tools, the same kind of storage infrastructure. I build my data pipeline. So being able to flesh all that out upfront
Starting point is 00:15:13 and by using the same kind of tools as much as possible at all these different sites is gonna be key for success. Yeah, that's interesting. I mean, definitely, there's obviously some complexity here in a number of different aspects. So I'd like to dig a little deeper into what we were just saying there. And maybe preface that, I just saw pretty recently anyway
Starting point is 00:15:36 a statistic from VentureBeat that said something like 87% of machine learning products never make it into production. And then pairing that with, I also read something from Harvard Business Review that said, they expected the first wave of corporate AI would be bound to fail.
Starting point is 00:15:52 So I wonder from each of your individual, but also Red Hat and more generally, perspective, what are these execution challenges for AI products? I'm guessing they span architectural, cultural process, but from your perspective, where are those pitfalls and where the dragons live? How can we avoid those? Yeah, so what we're seeing is that in terms
Starting point is 00:16:11 of the key challenges, right? And the number that you mentioned, right? Because the job of a data scientist is mainly to focus on building the models, right? And be able to make sure the models have the right kind of all the accuracy and they do a lot of experimentation but then they may not be as kind of concerned in being able to deploy the model as part of the app right that is going to get rolled out into a production site right so
Starting point is 00:16:39 being able to operationalize it can be a challenge as. Like if you don't put your model into the app dev processes and be able to use like all the DevOps kind of principles that you've kind of built out for your organization. So that part is gonna be key. And the second key challenge that we see is like lack of talent, right? Like if you, so the life cycle that I talked about, right? So there are different personas involved there.
Starting point is 00:17:05 If you don't have the right kind of talent to be able to manage through the process, that can be a challenge. And also if you don't have the automation kind of built in to move on from the first step, the second step, the third step, and so on. And the fourth one is if all the different personas, like say if I'm an infrastructure guy, if I'm not able to meet the needs of my data scientist, the data engineer, and the app dev folks with all the software tool chain of their choice, as well as the infrastructure resources in like a seamless way, so it means
Starting point is 00:17:36 that those guys are waiting on me and that could add in like a lot of time and lead to a failed project. So that's where being able to provide the self-service capabilities as part of the hybrid cloud platform that we're talking about is providing a lot of value for the customers to speed up their whole AIML lifecycle and be able to deliver the real value for the customers. So it seems like a lot of the open source projects, though, that are addressing some of these challenges that you mentioned, it seems like these are things that are happening, well, I getting more involved in the production of machine learning applications, the packaging, the feature stores, things like that? Are there certain projects that you're excited about
Starting point is 00:18:36 or areas that you're contributing? I can take that one. Yeah, I mean, absolutely. I mean, from a Red Hat, obviously, both culturally, as well as from a strategic perspective. We love open source. And that's central to our strategy, grow, sustain vibrant communities, open source communities. We want to bring that. And obviously a lot of work. This is nothing new.
Starting point is 00:19:13 I mean, AI and machine learning has existed for the past several years, and most of that innovation is happening in open source. So if you think about the kind of the layered cake up to which, right? So from a data science perspective, from a data scientist perspective, I mean, you're thinking about machine learning frameworks like PyTorch or TensorFlow, et cetera. And so we definitely are contributing there in terms of optimizing them for different Linux merchants or Linux distributions for different kinds of, we're working with partners such as Intel and NVIDIA, for example, to do things like how do you
Starting point is 00:20:01 optimize, let's say, TensorFlow, how do you optimize PyTorch for GPUs? We are working with Nvidia, and Nvidia actually has this Nvidia GPU Cloud, which has some of these frameworks. So then there is that aspect of the, what I would call the layered cake. Then there is kind of the Kubernetes itself and containers.
Starting point is 00:20:29 From a containers perspective, we invested in what is known as Cryo, which is a container engine, and which forms the bedrock for all our OpenShift platform now. And we continue to do that. And for example, Cryo has these plugin mechanisms in which GPUs and FPGAs can be added easily. Similarly, Kubernetes itself has simple techniques such as device manager, which allows you to recognize some of these more excellent, what I call accelerators like GPUs, as as an example or FPGAs as an example. But more importantly, I think more advanced features such as, you know,
Starting point is 00:21:11 NUMA event scheduling, because again, when you get into doing model training and influencing and doing this at scale, as we know, machine learning is very hungry, so these things do matter a lot. And so that's the kind of what I would call the middle part of the layered cake. And then, you know, I think one of the things really we talked about is how do we bring and we talked about how automation and acceleration is important, Making it part of workflows or making it part of application workflows is important, and that's where we're bringing DevOps to it. To that extent, from a DevOps perspective,
Starting point is 00:21:53 how do you manage your end-to-end workflow is important. We have something like Kubeflow, so as many of you already know, and so we are contributing to Kubeflow with others. The other one really is if you already know. And so we are contributing to Kubeflow with others. The other one really is kind of the complementary to that is the idea that, OK, now how do you manage the lifecycle of a container itself
Starting point is 00:22:16 or an application? Let's say you build a model or you have some data sources, you build a model, and now you want to put it into some kind of a microservice. And that whole thing, orchestrating that whole end-to-end workflow itself, is something that we are doing with Kubeflow and augmenting with something called Open Data Hub, which
Starting point is 00:22:36 is our reference architecture on top of that. So we are investing in several open search communities, including things like, as I said, Open Data Hub, Kubeflow, the upstream machine learning communities like TensorFlow, PyTorch, and some of the data governance aspects of this too. We're looking at things like we have, how do you, for example, use something like we're looking at how to use, for example, OPA for doing policy governance around data. So a lot of exciting things to look forward to in this space for us. Yeah, one more thing I would like to add in here is that there is a big ISP ecosystem play as well for us because like a lot of customers, they want to use a fully so that for the customers, it becomes like easy to be able to deploy and lifecycle manage the software tooling of their choice on top of OpenShift, right?
Starting point is 00:23:52 And all this is done based on the Kubernetes operator framework, right? Think of it as the automation, to like a push button automation to deploy and be able to upgrade the software like as and when needed. And you can do a lot more with that as well. Yeah, so that's where we can be kind of working with like a lot of different ISVs and like IBM has Cloud Pak
Starting point is 00:24:15 for data and we work with Microsoft, Cloudera, like H2O.ai, Selden, Starburst. Yeah, and the list goes on and on. Like we have a full, yeah, if you go to openshift.com forward slash like AI-ML, we have a logo wall over there of all the ISVs that we have partnered with to make sure that whenever the customers kind of choose that, okay, yeah, so I like open source, but at the same time, I want to have an ISP software as well. So the experience that they get by deploying those on top of the OpenShift hybrid cloud platform
Starting point is 00:24:50 is the best in class. And the system admins don't have to spend a lot of time installing or troubleshooting the software, because we've kind of codified a lot of the day one to day two operations for these software tools that the data scientists, data engineers, and the app dev folks use to be able to operationalize the machine learning lifecycle. That makes a lot of sense and it kind of sounds like in many ways anyway it sounds like Red Hat tends to be at the center of this web of open source tooling and things that are available.
Starting point is 00:25:23 You know from that perspective of kind of being in the center of all this, do you see any meaningful differences between say, you know, and I'm talking about, you know, tooling as well as use cases and execution challenges, right? Any differences between consumer companies versus B2B companies or startups versus large companies? Is there a difference depending on who the company
Starting point is 00:25:45 is that's going down the AI ML path that they should pay attention to? Yeah, so I can start here. Yeah, that's a very good question, right? And we see the nuances across the different verticals as well. So if you're talking about a startup, right, that's in the Bay Area and so on. So for those kinds of companies, so typically they may start small and they may say that, OK, I'll just use a service, the AI service that is actually there in a public cloud. And they should be good with that. But say if you are a financial services organization
Starting point is 00:26:17 or a manufacturing company, so you have a lot of data and that's all on-prem, and you have to build out the capabilities. In some cases, people may or may not have the talent to be able to execute on the project, because it's like a digital transformation project, and it takes a lot more than just technology. It's a people and process,
Starting point is 00:26:39 the culture transformation as well. I think the organizations that have the buy in at the top level that okay, yeah, that we kind of have have AI as part of the initiative, right, and there is a sponsorship at the top most level. And they kind of clearly define a project that okay, we'll kind of start with the pilot on x, right, and then go from there, and they kind of put the funding on that. So in those cases, we've seen like a lot of success as compared to where like a system admin or the engineer may say that okay
Starting point is 00:27:10 i'll build a platform and then it will kind of go and shop for use cases so those kind of projects so they take a long time where okay i'll build and they will come so those are kind of hard to execute so so that's what i would say that the buy-in from the top and being able to have the consulting as well, being able to use a system integrator and so on to kind of guide you, like teach you and to kind of make sure that you are successful with the daily deployment.
Starting point is 00:27:40 So that way, like you can run it on their own once they've kind of taught you and provided you all the training. And we work with a lot of those as well, the ISPs and the GSIs. I wonder if you can give some very specific examples, since I know that a lot of people that are listening here are maybe newer to AI or kind of seeing these things
Starting point is 00:28:02 coming into their enterprises. Could you give some just specific examples of ways in which you are supporting very specific customers to do specific ML things in production? I think a couple of examples. it earlier, but definitely HCA Healthcare, which is in the business of obviously providing healthcare. And one of the things that they have worked on is, how do you reduce, I mean, their fundamental challenge really is how do you reduce the occurrence of sepsis? As you know, sepsis can, you know, it obviously is, they want to reduce the occurrence of sepsis in their hospitals.
Starting point is 00:28:56 And to that extent, they were able to utilize the clinical data that is coming from the hospital from the point of presence and being able to use machine learning techniques both from the data that is collected, being analyzed, models being created, and then fed back to the point of present system so that they're able to then accurately predict or as accurately as possible to predict the occurrence of sepsis in their hospitals so that then they can address that ahead of time. As we all understand, you know, any disease, but certainly sepsis, the earlier you can catch it, the better outcomes you can, the better you can treat and the better outcomes you can have. So that's a great example. They have similarly done something in this age of pandemic around COVID also. But if you want to look at a different
Starting point is 00:29:53 use case, I think, you know Canada, they have, for example, they have a platform called Borealis, which is their AI platform. And to that question, to that extent, they created this Borealis platform because they wanted to do applied research in their field and in the financial services industry. And one of the things that they, and they published a lot of meaningful research in this space and what they needed really was a platform which can take advantage of things like the Nvidia GPUs that I was talking about earlier.
Starting point is 00:30:43 And that's an example of a bank that has created a gpu farm with self-service capabilities with cloud-like capabilities using openshift therefore their data scientists therefore have access to gpus in a shared environment right i mean gpus are, what I would call a precious commodity. So you want to be able to share it with a bunch of other data scientists. So when you want to use it for a model training, you use it in a kind of self-service way on your job or whatever model you are training.
Starting point is 00:31:17 And then once you're done, it can be released back into the quote unquote cluster, and others can use it, et cetera. So that's a great example of how RPCS is used. Abhinav, who did we miss? Yeah, I can talk about BMW as well, right? BMW, the car company. So what they're doing is, so they want to speed up autonomous driving initiatives, right?
Starting point is 00:31:45 And they have all these kind of cars in the field that are collecting a bunch of data. And then it's going back in like a data platform. And that's where we partnered very closely with like our friends at DXC, right? They build out a data platform that has like thousands of cores, right? And they have GPUs in there as well. And they're able to do a lot of the machine learning and then they're able to update the software on these self-driving cars that they have.
Starting point is 00:32:13 So that way they're able to more accurately predict, okay, what's a camera or like a traffic light, if somebody's at the road, if I'm at the intersection and so on. So that's one of the key use case. And then speaking of the oil and gas industry, right, that's where we've seen the organizations like Exxon Mobil, right? So they're able to optimize like all the aspects of the oil and gas exploration, the refining operations, as well as the downstream functions. So we have a lot more use cases and we can go on and on,
Starting point is 00:32:48 but I think that this would be a good, yeah, a good start. That's really what I was looking for because, you know, again, the listeners here, I mean, this is something that they're excited about. This is something that they're getting involved in. And it's good to know that this is, that this technology is being practically deployed in many different industries in many different ways. You know, and I'm also, you know, kind of tying this back to the discussion that we
Starting point is 00:33:12 had about containers and containerization. It seems to me that these technologies are just, you know, tightly linked. And, you know, so we'll be seeing quite a lot more of that. Well, before we go here, does anyone want to chime in with sort of a last take on these things? Let's, I'll put it back to Tushar again and Abhinav if you wanna say one last thing, and then Chris, you can kind of sum up
Starting point is 00:33:37 the whole conversation. Yeah, I mean, I think from a summarization perspective, I think we see a lot of excitement in this area. I mean, the way we have tried to, we are going in a direction where we are enabling our customers and our users to build AI, and we encourage our customers
Starting point is 00:34:02 and those who have not gotten there to think about this as something of a platform. They need to think about this as a platform. What does their AI platform look like? We think it should be open. We think it should be choice. We think it should be something that accommodates the modern realities of what we call the hybrid cloud, which effectively means, you know means public clouds, private clouds, data center, edge, et cetera. And we encourage people to think about it more proactively rather than reactively. And so the hybrid cloud in some ways is, it can be something that you think about proactively
Starting point is 00:34:46 and arrive at a more proactive solution, or you might just get thrown into that situation. But either ways, I think this is important to think. You know, and then the other part of it really is just in terms of, if not a learning mode, we have a great couple of resources. One is called openshift.com. And there we have topics and one of them in AI and machine learning. And you will find a lot of this information there. The other is if you want to get some hands-on, that is learn.openshift.com.
Starting point is 00:35:25 If you go there, you'll find a number of tutorials, you know, one specifically dedicated for AI and machine learning. There you can see how you can actually create Jupyter notebooks if you are a data scientist as an example, and, you know, start actually either importing or start typing your machine learning Python code
Starting point is 00:35:49 right in there. Or you could use things like, how do I do a DevOps cycle with machine learning? So I hope you can take advantage of that. I mean, you can always reach out to us also and take advantage of that. I mean, you can always reach out to us also and take advantage of that too. Yeah. And something I want to add is, so the value of containers and Kubernetes and DevOps, right, that has actually helped speed up like a lot of typical app dev projects is extremely
Starting point is 00:36:18 compelling for the AI ML projects as well, because the AI ML projects, so they include the app dev aspect, but then there is a lot more in terms of the data science, being able to build out the models, train the models, or upfront do the data engineering aspects of it, like the cleansing of the data, gathering of the data. That's where like the value of like in terms of the agility, the scalability, the cross-cloud portability, flexibility, like all those value proposition that I've held
Starting point is 00:36:50 from the continuous perspective for a typical app dev is actually helping the customers to fast track their AI projects as well from pilot to production. And we are seeing like a lot of customers in the market in terms of different industry verticals on the AI projects and they're being successful with these. Yes, I mean definitely one of my big takeaways today is you know how much AI development is already happening and how that continues to accelerate
Starting point is 00:37:20 and then of course you know the complexity that goes into that. I believe that developing AI projects is something a little bit different than your typical software project. And so it does make a ton of sense to me to have something, you know, very powerful, flexible, agile platform to build on top of. I think, you know, when you're digging into the complexities of AI and machine learning and all that data and all the other resources you have to put to bear, you know, having to worry about the infrastructure it's built on seems like the last thing you'd want to do. Absolutely. So thank you, everyone, for joining us today. Chris, thanks for joining me as a co-host.
Starting point is 00:37:54 And of course, Tushar and Avinash, thank you for joining us here, as well as at AI Field Day. I'm going to give a little shout out there. If you go to techfieldday.com, you can learn more about our AI Field Day, which is basically this, except for three days straight. So there's a lot of video online and I think we're looking forward to the next one as well. So everyone, can you just quickly jump in and tell us where we can connect with you and follow your thoughts on enterprise AI and other thoughts. Abhinav, let's start with you. Yeah, sure. So I'm more active on LinkedIn. Yes, if you go to LinkedIn and type in my name, like A-B-H-I-N-A-V.
Starting point is 00:38:36 My last name is Joshi, J-O-S-H-I. So I'm extremely active there. So I have a Twitter account as well. And I try to be fairly active in there. So yeah, feel free to drop me an email, like my first name dot, like last name at redhead.com. And I'll be happy to connect with you. And let's have a deeper discussion.
Starting point is 00:38:57 So this was great. Thanks for having us on here, Stephen and Chris. Yeah, you can see LinkedIn Tushar.Kkutarki i guess it's tashar kutarki i mean there's not that link to share kutarki so you'll find uh you'll find me easily uh i'm on twitter at tkutarki uh or you can send me an email at tkutarki at redhead.com as well yeah and i'm on twitter uh at ch Chris Grundemann or my website is just chrisgrundemann.com and from there,
Starting point is 00:39:26 you can kind of branch out and then see all the things I'm working on doing. So happy to connect on this or any other topic as well. Great, thanks a lot. And I'm at S Foskett on Twitter and you can find me everywhere.
Starting point is 00:39:37 I'm not the only Stephen Foskett, but I'm the only one that's me. And thank you for joining me and listening to the Utilizing AI podcast. If you enjoyed this discussion, please do subscribe, rate, and review the show. I know every podcast says that, but it really does work. And please do share this show with your friends.
Starting point is 00:39:53 Share it on LinkedIn. Share it on Twitter. And share it in your other communities where folks talk about enterprise AI. This podcast is brought to you by gestaltit.com, your home for IT coverage across the enterprise. For show notes and more episodes, go to utilizing-ai.com, or you can find this podcast on Twitter at utilizing underscore AI. Thanks for listening, and we'll see you next week.
