Orchestrate all the Things - Red Hat and IBM venture into AIOps with open source Project Wisdom. Featuring Tom Anderson, Red Hat Vice President & General Manager for the Ansible Business Unit

Episode Date: October 19, 2022

AIOps is what you get when you combine big data and machine learning to automate IT operations processes, including event correlation, anomaly detection and causality determination. At least, that's how Gartner defines AIOps. Based on this definition, as well as coverage of vendors that have products they label with the AIOps moniker, you'd be inclined to think that AIOps is mostly about anomaly detection and remediation. But what about provisioning, configuration, deployment and orchestration? These are all essential parts of IT operations which have not received as much AIOps attention. They also happen to be at the core of Ansible, Red Hat's open source IT automation tool.  Now Red Hat is embarking on a new direction for Ansible with Project Wisdom, aiming to take automation to the next level in collaboration with IBM Research. Red Hat refers to Project Wisdom as the first community project to create an intelligent, natural language processing capability for Ansible and the IT automation industry. We connected with Red Hat Vice President & General Manager for the Ansible Business Unit Tom Anderson to discuss Project Wisdom's premises, status and trajectory.  Article published on VentureBeat

Transcript
Starting point is 00:00:00 Welcome to the Orchestrate All the Things podcast. I'm George Anadiotis and we'll be connecting the dots together. AIOps is what you get when you combine big data and machine learning to automate IT operations processes, including event correlation, anomaly detection, and causality determination. At least, that's how Gartner defines AIOps. Based on this definition, as well as coverage of vendors that have products they label with the AIOps moniker, you'd be inclined to think that AIOps is mostly about anomaly detection and remediation. But what about provisioning, configuration, deployment,
Starting point is 00:00:37 and orchestration? These are all essential parts of IT operations which have not received as much AIOps attention. They also happen to be at the core of Ansible, Red Hat's open-source IT automation tool. Now Red Hat is embarking on a new direction for Ansible with Project Wisdom, aiming to take automation to the next level in collaboration with IBM Research. Red Hat refers to Project Wisdom as the first community project to create an intelligent natural language processing capability for Ansible and the IT automation industry. We connected with Red Hat Vice President and General Manager for the Ansible Business Unit, Tom Anderson, to discuss Project Wisdom's premises, status and trajectory. I hope you will enjoy the podcast. If you like my work, you can follow Linked Data Orchestration on Twitter, LinkedIn, and Facebook. So my name is Tom Anderson, and I work at Red Hat
Starting point is 00:01:32 where I lead the Ansible business unit. Ansible is our enterprise automation platform. I've spent the last nine or 10 years now at Red Hat, but have a very, very long career in data center automation, data center management, IT management, and that background. And I took over leading the Ansible business unit about 10 or 11 months ago, but I've worked around Ansible and our customers for a long time. Okay, great. Thanks for the introduction. And well, the next reasonable thing to ask you would be, obviously, what is Ansible? I know it's a very naive question in a way, but since not everyone who may be listening is familiar with Ansible, it's a good place to start. Yeah, so Ansible is an open source technology platform that allows folks to automate, in a common
Starting point is 00:02:32 language, access to their underlying IT infrastructure and applications. So instead of, well, let me put it into a practical example. Say I'm trying to automate my Cisco networks. Instead of having to learn Cisco's own automation language, I can use Ansible's common language, one that crosses and abstracts all of those various domains and technology platforms that IT infrastructure owners have to deal with and application developers have to deal with on a day-to-day basis. So it simplifies the provisioning and operations of those underlying IT infrastructures by abstracting them into a common language.
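For illustration, here is roughly what that common language looks like in practice for the Cisco example Tom mentions. This is a minimal sketch using modules from the cisco.ios collection; the module and parameter values are chosen for illustration and are not taken from the conversation:

    - name: Describe the uplink interface on Cisco switches
      hosts: cisco_switches
      gather_facts: false
      connection: ansible.netcommon.network_cli
      tasks:
        - name: Set an interface description
          cisco.ios.ios_config:
            parents:
              - interface GigabitEthernet0/1
            lines:
              - description Uplink to core

The same playbook structure applies whether the target is a network device, a cloud API, or a Linux host; only the module being called changes.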
Starting point is 00:03:28 Okay, so I'm guessing that the way that Ansible does that is by providing a sort of intermediate layer, what's commonly known as middleware, I suppose, that sort of does this abstraction that you just talked about. And then I think the way that people are able to configure Ansible is by using something called YAML files, so a specific configuration language that people can use to interact with Ansible. Is that correct?
Starting point is 00:04:02 Yeah, that's well put, George. So what we call modules, right, those are the integration pieces that talk to the underlying APIs of those infrastructure pieces and abstract them into that common Ansible language. It is written in YAML, and we call them playbooks. So playbooks and roles are things that allow you to build workflows that interact with these underlying infrastructure items and application items using that common language
Starting point is 00:04:31 that, again, gets put into what we call a playbook, and it is in the YAML format. Okay, I see. So now that we have at least a general idea of what Ansible is, what it does, and a little bit of how it does what it does, let's talk about the actual news, so what you're about to announce, called Project Wisdom. In your own words, how would you describe what Project Wisdom is in a nutshell? Yeah, so Project Wisdom is, you know,
Starting point is 00:05:06 what I would call bringing applied AI to automating infrastructure and application deployments. So using natural language processing to turn English commands into that automation playbook so that either the expert on automation can be more efficient or a newcomer to automation and a newcomer to Ansible
Starting point is 00:05:31 can get up to speed faster. So instead of having to know all the peculiarities and specifics about an underlying infrastructure, I can simply type in an English command in natural language: I would like to deploy a Postgres instance on AWS. You type that in, and what pops out is a fully structured playbook,
Starting point is 00:05:53 syntactically correct and efficient, and it's done 95% of the work for you. You just go in and add whatever credentials you might need to achieve that piece of work, and you're up and running. So again, it makes experts more efficient because it allows them to do more, and it brings new people into the environment. I may know Postgres really well, but I don't know Ansible. Or I may know Ansible really well, but I don't know Postgres. Whatever that combination might be, it allows you to just use natural language to produce that automation content that gets you up and running quickly. Yep, indeed. It sounds like it's really a time saver for people. And also it makes the interactions with the system much, much easier. So would you say that it entirely removes the need for people to learn how to work with
Starting point is 00:06:50 YAML, or does the outcome that Project Wisdom gives you still need a little bit of configuration and messing around just to get it to do exactly what you need it to do? Yeah, I would love to say that out of the box it's 100%, but that wouldn't be accurate. I would say that you still need some knowledge to do this, but what we're trying to abstract away is the requirement that you have to be an expert. We're kind of, this is the wrong word, but democratizing automation, if you will. Pushing automation out from those subject matter experts,
Starting point is 00:07:25 of which there are not that many, and they're being asked to do more and more. So how do we kind of scale those subject matter experts on those environments? Again, on those underlying environments, my storage, my cloud, my network, my edge, my databases, whatever it might be, how do we kind of accelerate their ability to become more efficient? And they do that by having a trained model that they worked on and helped to train, and by putting that model into the hands of people that can consume it and use that natural language processing to go ahead and create automation that meets the needs of those underlying infrastructure owners or those domain owners.
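To make that concrete: for a prompt like "I would like to deploy a Postgres instance on AWS", the kind of generated playbook Tom describes might look roughly like the sketch below. This is illustrative only; the module (amazon.aws.rds_instance) and its parameters are one plausible choice, not necessarily what Project Wisdom actually emits, and the credential variables are the part the user would still fill in:

    - name: Deploy a Postgres instance on AWS
      hosts: localhost
      gather_facts: false
      tasks:
        - name: Create an RDS PostgreSQL instance
          amazon.aws.rds_instance:
            db_instance_identifier: app-postgres
            engine: postgres
            db_instance_class: db.t3.micro
            allocated_storage: 20
            region: us-east-1
            master_username: "{{ pg_admin_user }}"            # supplied by the user
            master_user_password: "{{ pg_admin_password }}"   # supplied by the user
            state: present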
Starting point is 00:08:04 They know that they can push out automation into the hands of less experienced people, and it will be creating content that is not only syntactically correct, but efficient and, from a compliance perspective, correct for their environment. I see. Another question I had was about this interaction, these commands, let's say, in English that people are able to use to interact with the software: whether that's done using spoken language or it's actually done by typing it in, or maybe both? Today it's text. Out of the gate it's text, but it would be a pretty short trip to voice. I mean, it's not a far trip now from voice to text translation,
Starting point is 00:08:57 so I could see this moving that way really quickly. But today, out of the gate, it's translation of natural language text into playbooks. To be honest with you, I don't know, I'm trying to picture system admin types just sitting in data centers and giving voice commands to Project Wisdom. And I'm having a little bit of a hard time, but, well, you know, people use software in different ways, and maybe some of them would like to use it that way. Who knows? Yeah, I think we'll probably take off, you know,
Starting point is 00:09:33 95% of the demand with the text. I agree with you, George. I still have a hard time seeing system admins, you know, speaking into a thing to say, you know, provision this database for me. But, hey, you never know. Yeah, indeed. So let's talk a little bit about what's under the hood
Starting point is 00:09:53 and what's used to power this impressive-sounding feat. So I read a little bit of the press release that you're about to send out, and I found some interesting things mentioned in it. So it sounds like this is a collaboration with IBM, and there is a language model that has actually been developed by IBM that's used to power that. And furthermore, it seems that what IBM did in order to train or to fine-tune this particular model to be able to basically write code, because that's what it comes down to,
Starting point is 00:10:39 is that they used a lot of open source code, and actually they did some things that were pretty interesting in the way that they fed this data to the system. So I wonder, first of all, before I start bombarding you with questions, how familiar you are with the actual underlying technology, with the AI model that was used? Yeah, so kind of medium familiarity, George. And first of all, yeah, this is a close collaboration between us and IBM Research. The vice president of IBM Research, Ruchir Puri, and his team have worked with us over the past year on this, where they bring, you know, obviously tons of experience on the AI front
Starting point is 00:11:27 and on the natural language processing front and on the large language model front. And what we bring is a lot of the expertise around the use case itself and helping train the model with that experience and that content and the repositories that we have. So it's been a very close collaboration with IBM. And so I'm not going to put myself out there as an AI expert, but I know enough. I think I know enough about the architecture to be dangerous. Okay, that sounds promising. So I guess that the way it probably must have worked is
Starting point is 00:12:05 that, well, actually, before I venture into speculation, I wonder if you know whose idea it was to do that? I mean, who initiated this project? Was it someone from Red Hat? Or was it someone from IBM? Or was it, I don't know, some kind of joint brainstorming? It depends on who you talk to. No, I'm just kidding. Let's share the credit.
Starting point is 00:12:34 Like I said, there's an individual on my team who's passionate around AI and driving automation use cases kind of into the next gen here. And we have developed, we Red Hat have developed a very good and strong relationship with the folks at IBM Research, the labs. And so this was an opportunity for these two folks,
Starting point is 00:12:57 Ruchir, and Alessandro Perilli on my team, to connect and to have a conversation and to start to initiate this work. And like I said, it started about a year ago, when the idea started getting kicked around, and then it kind of kicked into full gear as Ruchir took a big chunk of his team and dedicated them to this work, along with our engineering folks and business unit folks
Starting point is 00:13:20 to kind of drive a demonstrable model, which is what we're talking about and showing at AnsibleFest this week. Okay. So you just referred to IBM Labs, and I looked it up a little bit and saw that IBM Labs has something called AI for Code. It's one of their projects. And like I said, I think it goes a little bit beyond the scope
Starting point is 00:13:45 of this conversation, but they did a number of interesting things in how they trained this AI model that they created. And to be more specific, it looks like
Starting point is 00:13:57 an interesting thing that they did, which, to the best of my knowledge at least, is not something that people who train these large language models usually do when it comes to code. They added more context than usual,
Starting point is 00:14:12 so they didn't just feed the code to the model; they specifically took code that was produced as part of coding competitions, and they also fed in the description of what the code was meant to achieve, and some documentation, and things like that. So supposedly that has made their model a little bit more sensitive, let's say, to the specifics of coding. So I'm guessing that probably what must have happened was that the people from IBM came with a pre-trained model and then they worked with you to fine-tune it to the specific needs of Ansible. Is my speculation anywhere near the truth?
Starting point is 00:15:03 It's dead on, George. That's exactly what happened. It wasn't so much about turning natural language into a piece of YAML; it was about turning natural language requests into something that is not only syntactically correct, but specifically correct for Ansible, right? So Ansible is, it's written in YAML, or it's expressed in YAML, but it's really about all the specificities around the Ansible language and the specificities around specific integrations
Starting point is 00:15:30 into those underlying things, i.e. how does it interact with AWS? How does it interact with Azure? How does it interact with on-premise stuff? How does it interact with specific application APIs? So a lot of that experience, that training of the model was more than just dumping out,
Starting point is 00:15:47 you know, and I'm not trying to minimize this, but more than just dumping out a piece of YAML; it's really dumping out a piece of Ansible, right? That's the first use case that we decided to apply it to: Ansible itself. So I know we've kind of bounced back and forth between the term YAML and the term Ansible.
Starting point is 00:16:03 Ansible is a language, it is expressed in YAML. This was turning natural language, using NLP to turn natural language into Ansible as opposed to YAML. It's just expressed in YAML. Yes, I think it's an important distinction to make, and it's kind of natural that it keeps popping up. So you're right.
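For illustration of that distinction: the same intent, "start a small virtual machine", is expressed with different modules and parameters depending on the target cloud, while the surrounding Ansible structure stays the same. A minimal sketch, with module and parameter names from the amazon.aws and azure.azcollection collections, chosen here only to make the point and not produced by Project Wisdom:

    - name: Start a small VM, AWS flavor
      hosts: localhost
      gather_facts: false
      tasks:
        - name: Launch an EC2 instance
          amazon.aws.ec2_instance:
            name: demo-vm
            instance_type: t3.micro
            image_id: ami-0abcdef1234567890   # illustrative AMI ID
            region: us-east-1
            state: running

    - name: Start a small VM, Azure flavor
      hosts: localhost
      gather_facts: false
      tasks:
        - name: Launch an Azure virtual machine
          azure.azcollection.azure_rm_virtualmachine:
            resource_group: demo-rg
            name: demo-vm
            vm_size: Standard_B1s
            admin_username: "{{ azure_admin_user }}"   # supplied by the user; SSH key or password omitted in this sketch
            image:
              offer: 0001-com-ubuntu-server-jammy
              publisher: Canonical
              sku: 22_04-lts
              version: latest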
Starting point is 00:16:27 There's two parts in this. One is the syntactic part, let's say, so being able to produce something that makes sense, that can be parsed, and that's the easy part in a way. I think the hardest part is actually getting the system to understand what it's being asked to do. So, for example, connect my Postgres instance on AWS to another MySQL instance in Google Cloud, or whatever, this type of thing. The system has to actually understand what Postgres means, what Google Cloud means, what all of these things mean, and then it actually has to be able to produce the code that will make that happen, I guess. Is that part already being taken care of by Ansible?
Starting point is 00:17:12 No, so that's part of it. So it needs to produce the code that does that, but it needs to do it in a way that follows best practices. What is the most efficient way? What is the best-practice way of doing this? And so, you know, the first kind of phase of Project Wisdom is to be able to convert natural language into syntactically and expertly correct Ansible language code in a YAML file. The next step, you know, we have about a three or four step plan for Project Wisdom down the road here, which is, once you've created
Starting point is 00:17:51 that, what if I just want to take an existing playbook that someone has written, not using Project Wisdom? I've written this playbook, and I wrote it two months ago or two years ago or whenever it was. And I want to be able to put it through the system and say, make recommendations on how to make this better, how to write this in a more efficient way, how to write it in a more secure way. Let Project Wisdom take that and spit it out in a new way. Or I'm creating a playbook by hand, not with Project Wisdom, and I'm interested in finding out what's out on the internet, who else has done this. So Project Wisdom could go out and find content that is similar to what I am creating, compare it to what I've created, and give you a side-by-side comparison of which is better. And then there's the complete inverse of it, which says, I have this Ansible playbook that somebody has written. It's written in YAML, so it's not exactly written in English.
Starting point is 00:18:43 And I want Project Wisdom to tell me what this will do. And it will spit that out in natural language English; it will tell you step-by-step exactly what this playbook is doing. So we have this whole kind of roadmap for Project Wisdom around Ansible and around infrastructure automation and around application automation, deployment automation, bringing that expertise together with the folks who know how to build a model, right, to build the model correctly, and then working together to train that model for this specific set of use cases. And like I said, the first one is Ansible at Red Hat. There are multiple use cases down the road where we could expand this beyond Ansible into other areas that our customers are interested in. Yes, actually, that's something I've been meaning to ask you about. But before we venture into that, there's something else that I'd like to get covered. So
Starting point is 00:19:50 just now you spoke of ways in which you helped fine-tune that model, and I was wondering how exactly you did it. And, I mean, usually the way that this works is that you need to have a data set of, well, playbooks in your case. Where did you get that data set? Did you produce it specifically for this purpose? Or did you already have a repository of playbooks that you could refer to? Or was it something else? No, it's a great question. Yeah, so we have a very large repository of playbooks.
Starting point is 00:20:29 Plus, there is lots of publicly available playbook content out on a site called Ansible Galaxy, which is an upstream community place where content is exchanged freely. And then Red Hat ourselves have our own pretty large GitHub repo of content that we use to help train this model. And then, of course, there's the fine-tuning of the model with expertise from our engineering staff. And next, let me just kind of take the next step in this journey of what we're announcing here, which is Project Wisdom, right? It's not called Product Wisdom. It's called Project Wisdom.
Starting point is 00:21:11 And like everything we do at Red Hat, this is an open source community effort, and we're introducing this project to the community and saying, join with us. The more people that are involved in training this model, the better the quality of the output will be. Ansible as an open source project is enormous. I think there are somewhere close to a million upstream open source projects, and Ansible from a contribution standpoint is always in the top 10 or 20 of that million.
Starting point is 00:21:39 So there are many, many thousands of people out there creating Ansible content out in the community, in what we call upstream. And so what we're doing here at AnsibleFest this week with the announcement of Project Wisdom is we're inviting that community to join us now. We think we have a working model now. We have something that we can start to make better. Does it become a product? Maybe, but right now it's a project. And so we're going to get a lot of input and training into that model by bringing thousands of people, content developers, into that process to help train that model in the project.
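One reason those repositories are useful for this kind of training: every Ansible task in a public playbook already pairs a natural-language description (its name field) with the code that implements it, which is the sort of prompt-and-completion pairing such a model can learn from. A typical task from community content might look like this; an illustrative example, not a specific item from Galaxy or Red Hat's repositories:

    - name: Install and start nginx on the web servers
      hosts: webservers
      become: true
      tasks:
        - name: Install the nginx package           # natural-language description...
          ansible.builtin.package:                   # ...paired with the code that realizes it
            name: nginx
            state: present
        - name: Ensure nginx is enabled and running
          ansible.builtin.service:
            name: nginx
            state: started
            enabled: true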
Starting point is 00:22:18 Great. You just touched on another area that was also on my list: terms of use and current status and this sort of thing. So, you just said that, well, it's a project. It hasn't exactly reached product status yet. So, I'm wondering, if we focus on the code itself and on the model, what kind of state would you say it is currently in? Is it alpha?
Starting point is 00:22:49 Is it beta? Somewhere in between? A little better? A little worse? Yeah, that's a great question. I'd say somewhere in between, right? It's a working model now. It is working.
Starting point is 00:23:01 I can go to the model and I can type in a command and darn if it doesn't pop out really good Ansible content, right? Now, Ansible's set of use cases is so broad. I mean, people do everything, like I said, from cloud to data center automation to PLC automation of devices on the industrial edge or industrial floors, right? I mean, the breadth of the things that people do with Ansible to automate an infrastructure is so broad that I don't want to say that, hey, you could type in, you know, create a playbook for me that updates, you know, a medical device of some sort, I'm grasping for a specific example here, but, you know, a machine device on an industrial floor,
Starting point is 00:23:46 and have it pop that out today. But all of the sort of popular, the most popular sort of use cases around cloud provisioning, around data center provisioning, around APIs into application infrastructures, that is very rich right now in the working model that we have, and it'll get richer over time. And we'll bring in training for that model to do more and more use cases. Yeah, it makes sense. I mean, obviously, the most popular use cases are the ones that you also have more data about. So it makes perfect sense that, well, the model will perform better for those rather than, you know, some niche outlier on a medical device or whatever else that may be. Exactly, exactly. And so yeah, in the open source stuff,
Starting point is 00:24:30 we don't usually call stuff alpha and beta, but if I was to kind of pick something it would be between alpha and beta. And these projects, like all open source projects, you put it out there, you ask for contributions, you get people on board, they start working with it, it gets better, adoption grows, and pretty soon it becomes something material. We expect it to do that, but I hate to predict.
Starting point is 00:24:52 That's not how we build software at Red Hat. We work within communities to catalyze these communities to join together to solve a problem, and then we take that and make it enterprise-ready for our customers. But there are always the upstream pieces of this. So it's a little hard. I know in a commercial software organization, I've used, for many years, alpha, beta, production, or GA, whatever it might be. It's a little bit different in an open source community, where it's much more of an evolutionary path versus stops along the way. Yeah, makes sense. So if people want to use Project Wisdom right now,
Starting point is 00:25:31 what would they actually do? Is the model... So first of all, what are the terms of use and what's the license that you're releasing this under? And can people just download it and then somehow deploy it on their own infrastructure and start using it? I'm guessing it's probably not available as a service, at least at this point. It may be at some point in the future, probably, right? Yeah. So, you know, for the underlying infrastructure to support this work, the compute requirements are fairly substantial. So it won't be downloadable
Starting point is 00:26:07 in the short term, right? It'll be an extension that connects to a web service or to a hosted service that allows you to interact with that service. You know, in a typical example, if you're using VS Code, right, it would be an extension in VS Code where you could be typing in the commands and have it fill out the playbook for you, or create the playbook for you. We're working on the licensing model right now. And again, it's just a project, so we're not licensing anything. It's an open source project with, you know, a set of contributors, but it's built on open source componentry. So nothing in there is proprietary other than our expertise.
Starting point is 00:26:46 Obviously, you know, our expertise that has been added to that is the value and the differentiation of it, versus some piece of intellectual property. But both IBM and Red Hat are committed to having this be an open source project, a community-based project with broad-scale participation. You know, my just sort of selfish business aspect here is: the more people that are creating Ansible content, Ansible playbooks, the more people that are using that, and that provides a larger market for Red Hat to go in and offer our Ansible product to those people
Starting point is 00:27:24 that are using that project, right? So I see it more as wide-scale adoption of Ansible as a default and ubiquitous automation language for their use and making it as easy as possible for them to do that. And someday later down the road, there'll be an opportunity for us to go and offer them additional capabilities with our platform for managing and securing and scaling their automation.
Starting point is 00:27:52 That also makes sense. That's the monetizing open source playbook, in a way. So yes, that makes perfect sense. The reason I was wondering about that was basically what you also said yourself: guessing that the compute requirements to run this in your own infrastructure will be pretty substantial. So I'm wondering, you know, from a practical standpoint, how can people actually use it at this point, since, you know, it's probably not just the compute requirements that are substantial, but also simply the configuration that needs to be done in order to deploy this
Starting point is 00:28:36 and fine-tune this in your own infrastructure will not be everyone's cup of tea, probably. So how can people actually use it, and by using it do all the things that you just said: produce more playbooks, fine-tune the model, and so on? Yeah, so just to give you a sense of it, you're right, the scale of this is such that it would be hosted, and the good thing about being partners with IBM Labs and IBM Research is they have access to their cognitive computing cloud, you know, a supercomputer that, you know, most people don't have access to that sort of,
Starting point is 00:29:10 you know, resource to be able to do this. So we're really, really fortunate to have the partnership and relationship that we do with the good folks at IBM Research that have been working on this. So what we're announcing here at AnsibleFest is this demonstrable availability of this. And we're inviting people who are interested to kind of join us.
Starting point is 00:29:28 And over the next little while, we're going to be exposing this service. You know, obviously it'll be a web service that they will be able to access through a URL and start to use within their VS Code plugin. But we're not announcing sort of availability of that capability for them
Starting point is 00:29:46 to start using it today, but to join us in this community project, start learning about it, understanding what we need from them, what the commitments are. We need to work out, like you said, the licensing model, making sure that we're respecting their individual contributions to this in a way that makes sense. I know there's been lots of discussion in the environment about other sorts of code-gen pair-programming products and who owns the content and all that kind of stuff. So, George, we're being real careful with that right now.
Starting point is 00:30:14 So what we're doing today is inviting people who are interested to join with us. And then we'll make that available here over the next little while for them to actually start using it. Okay, yes. Yes, you're right. That's also something that's been in the back of my mind. And I was wondering if you have a take on that, because, as I just said, and actually I read this just before connecting with you right now, there's somebody, for example, who's taking legal action against Copilot because, well, they're saying, well,
Starting point is 00:30:46 you basically scraped all this open source code and by doing so you're violating some terms of use and so on. So it's new territory basically and nobody's exactly sure how these things do work or should work. So you're justified to be cautious, I would say. Yeah. You know, what's the old construction metaphor? We want to measure twice and cut once here. You know what I mean? We want to make sure we're doing the right thing out of the gate and be real respectful of that.
Starting point is 00:31:20 And so that's why we're being careful. And it is new. It's absolutely new territory for us, and it's new territory for the industry. And so that's why we're being cautious and kind of, you know, moving step by step through this. So you're exactly right. Okay, so I take it from what you said previously that the envisioned, let's say, way for people to actually use it is
Starting point is 00:31:44 probably through plugins for their favorite development environment, right? Yes, exactly, for their favorite IDE, right? We have a lot of people that use VS Code. There are many other ones, but yeah, we already have extensions, VS Code extensions for Ansible linting and playbook development. And so this would be a natural place for us to start that journey.
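For a sense of how that might feel in the editor, under the interaction model described here: the user types the natural-language piece, a task name, and the extension fills in the rest by calling the hosted service. A hypothetical sketch; the exact interaction is an assumption, not a documented feature of the current extensions, and the suggested module is only one plausible completion:

    - name: Configure the web tier
      hosts: webservers
      become: true
      tasks:
        - name: Open port 443 in the firewall       # <- typed by the user
          ansible.posix.firewalld:                   # <- suggested by the service (illustrative)
            port: 443/tcp
            permanent: true
            immediate: true
            state: enabled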
Starting point is 00:32:11 Okay, so in that respect, I was trying to categorize, let's say, where exactly, in which box, people should put this project. And in my mind, just having read the outline in the press release, on the one hand it does look a bit like, you know, all these so-called coding assistants that are out there. But there is a difference, I think. But then again, with you mentioning this, I'm not so sure; maybe it's not that different after all. The difference in my mind was that, well, this seemed like it was not specifically aimed at individual developers, let's say. I thought the idea was more to target, well, enterprise IT. And from a strategic
Starting point is 00:32:59 standpoint, this also makes sense, because, well, it's a partnership with IBM, and everyone knows what a huge footprint IBM has in the enterprise. And I also read about an interesting use case from IBM Labs. So it seems that they have successfully used their AI model, which underlies Project Wisdom as well, to do things like refactoring a big monolithic enterprise application into something that can be deployed on the cloud, which is more modular, and so on and so forth. What would you say is the
Starting point is 00:33:39 positioning for this eventual product that you're starting to build? Yeah, so for today, right, we're focused on those folks that use Ansible today. And those fall into kind of two communities. One is those infrastructure IT people, the operations teams and infrastructure owners within IT, which you referred to, which have been the kind of bread and butter of IBM's business for many years, and ours too. But there's also the developer community, where a developer wants to write an application and deploy an application, and they don't need to be, or they don't want to also be, an expert on storage subsystems and security systems and load balancers and virtual networking and firewalls and all of the other things that we're asking a developer
Starting point is 00:34:29 to know when they build and deploy their application. So a lot of users of Ansible today, even before Project Wisdom, are developers who are just trying to get their environment up and running, deploy their application, deploy updates to that application, deploy the infrastructure to run that application on; they use Ansible today to do that.
Starting point is 00:34:47 This will make it a lot easier for a developer to be able to deploy not just their applications, but the infrastructure that will be required to support those applications that they're building and deploying. So I see it as kind of both sides of that. Initially, out of the gate, you're absolutely right. Those infrastructure owners, those DevOps teams, those are the people that we're targeting with this. I talk to a lot of enterprise IT customers of ours, and one of the things that I hear consistently from our customers is that they have a difficult time attracting talent, right?
Starting point is 00:35:22 People with the skills are very rare on the market, and very expensive. They have a hard time attracting new talent, and they have a hard time retaining their existing skill base and their existing talent. And because they're being asked to do more and more, their environments are becoming more and more complex: multiple clouds, multiple data centers, edge environments. There's a lot being asked of these teams. So how can we make their jobs easier? How can we take a lot of the mundane away from that? And I think AI and Project Wisdom, from an Ansible perspective, is a great start on that.
Starting point is 00:35:57 Okay. And, well, I have a sort of forward-looking and a bit speculative question. So, again, as you probably know, Google's big event was last week, and there was a lot of buzz around that. And one of the interesting things that came out of it was that its executives made a number of predictions, among which there was one specifically that caught my eye and that I think may be relevant in this context.
Starting point is 00:36:29 So one of the predictions that came up was around automating cloud infrastructure decisions. And the reason I think it's relevant is because it sounds like pretty much what you're also trying to achieve with this project here. And so, given that this is a collaboration with IBM, which also happens to run its own cloud, and, well, Red Hat on its part has OpenShift, I was wondering if this could possibly be a case of, well, dogfooding, let's say. So do you think that what you have done for Ansible can be generalized and taken to the next level
Starting point is 00:37:08 and actually used to automate those configuration decisions internally for IBM's cloud or for OpenShift? So, absolutely. What we see is, you know, most of the conversation around cloud automation is around day zero provisioning, the initial provisioning of an infrastructure for someone to deploy an application and run an application on. So there's been a lot of work around the automation of that.
Starting point is 00:37:36 Project Wisdom is going to make that much more tractable, if you will, across multiple clouds. But kind of the next phase that we see is around the day two operations of these applications and the underlying environment to maintain them. So how do I detect and respond to alerts and events and things that are changing in my environment, whether that environment is in GCP, whether it's running on Azure, whether it's running in my data center,
Starting point is 00:38:02 whether it's running on an IBM mainframe somewhere in some colocation facility, whether it's running at the edge. All of these things are now happening in this complex ecosystem. And how do I respond to those? And how can I make that more of a hands-free experience? So a lot of those day two operational activities ultimately are remediated or changed using Ansible automation. So I can see a future where the evolution of Project Wisdom may be where systems detect and create a description of a problem, and that problem is then automatically turned into a playbook for remediation, and no one ever has to touch it. So that you have this sort of hands-free, end-to-end, fully automated flow. People have talked about AIOps for a long time. This is really what I'm thinking about with real AIOps, where decisions are being made and content and automation decisions
Starting point is 00:38:56 are being created in real time using AI. So I can see that. That's not today. I'm not trying to say that's our first ambition today, but I can see that down the road, and evolving down the road. And back to one other thing you said earlier, which was a lot of the work that IBM has been doing around taking monolithic applications and converting those, taking a COBOL application and translating it into Java, or, what we see more of, taking those monolithic applications and turning them into cloud-native application formats and then automatically deploying those onto our OpenShift container platform.
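To make the day-two remediation idea from a moment ago concrete: the kind of playbook such a hands-free pipeline might generate from an alert could look roughly like the sketch below. Purely illustrative; the host variable, modules, and service name are assumptions for the sake of the example, not a capability of Project Wisdom today:

    - name: Remediate a failed web service reported by monitoring
      hosts: "{{ affected_host }}"          # would be supplied by the alerting or event system
      become: true
      tasks:
        - name: Restart the failed service
          ansible.builtin.service:
            name: nginx
            state: restarted
        - name: Wait for the service port to come back
          ansible.builtin.wait_for:
            port: 80
            timeout: 60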
Starting point is 00:39:30 Those modernization scenarios are the kinds of use cases that we see down the road. That's why this isn't called Project Ansible Wisdom; it's called Project Wisdom. We intend to apply this capability from Wisdom to multiple areas, the first of which is creating Ansible playbooks, and that will likely evolve down the road. It is indeed a very interesting direction, a very interesting path to start walking. And to the best of my knowledge, I'm not aware of,
Starting point is 00:40:00 at least publicly available information, on other big vendors doing that. So it seems like you may be the first, well, you and IBM may be the first to start exploring that. And well, good luck with everything going forward. And thanks a lot for the conversation today. Thank you, George. It was great talking to you, and I appreciate your time. I hope you enjoyed the podcast. If you like my work, you can follow Linked Data Orchestration on Twitter, LinkedIn, and Facebook.
