Utilizing Tech - Season 7: AI Data Infrastructure Presented by Solidigm - 3x04: How AI and ML are used in Network Management with Tom Hollingsworth of Gestalt IT

Episode Date: September 28, 2021

Local and wide-area networks can get complex very quickly, so it's no surprise that AI-powered network management is making a huge impact in the enterprise. In this episode, Tom Hollingsworth, who runs Networking Field Day for Gestalt IT, joins Chris Grundemann and Stephen Foskett to discuss applications of AI in network monitoring and management. Solutions like Mist from Juniper Networks give network administrators the ability to ask questions and get insight using the power of machine learning. This proactive observability stance allows network administrators to answer difficult questions rather than just keeping things running. AI truly has become a co-pilot for network engineers, helping transform their careers once they embrace it. Another use of AI in networking is exemplified by Forward Networks, which can model and test network changes before they are pushed to a live environment. Another company, HPE's Aruba, is leveraging AI in edge computing, while its NetInsight service suggests best practices. SD-WAN companies are also using AI to accelerate applications, and AI is finding applications in wireless networks. Finally, we take on "AI washing" and the need to be skeptical when companies say their solutions use AI.

Three Questions
Is MLOps a lasting trend or just a step on the way to ML and DevOps becoming normal?
Are there any jobs that will be completely eliminated by AI in the next five years?
Tony Paikeday of NVIDIA asks: can AI ever teach us to be more human?

Guests and Hosts
Tom Hollingsworth, Event Lead for Networking Field Day. Follow Tom's thoughts at networkingnerd.net and GestaltIT.com. You can also connect with Tom on Twitter at @NetworkingNerd.
Chris Grundemann, GigaOm Analyst and Managing Director at Grundemann Technology Solutions. Connect with Chris at ChrisGrundemann.com and on Twitter at @ChrisGrundemann.
Stephen Foskett, Publisher of Gestalt IT and Organizer of Tech Field Day. Find Stephen's writing at GestaltIT.com and on Twitter at @SFoskett.

Date: 9/28/2021
Tags: @GestaltIT, @TechFieldDay, @SFoskett, @ChrisGrundemann, @NetworkingNerd

Transcript
Starting point is 00:00:00 I'm Stephen Foskett. I'm Chris Grundemann. And this is the Utilizing AI podcast. Welcome to another episode of Utilizing AI, the podcast about enterprise applications for machine learning, deep learning, and other artificial intelligence topics. Chris, we've talked quite a lot on the podcast over the last two seasons about the ways that AI affects enterprise IT.
Starting point is 00:00:33 And one of the areas where it really ends up being seen is in network management. I think that's true. The term AIOps is getting thrown around quite a bit in a lot of different areas. But I think specifically in networking, there are both a lot of opportunities and some challenges, right? Networks are all built very uniquely, and so that provides a lot of variation that it has to iron out, especially looking for anomalies and doing some predictive analysis for planning and things like that. So, again, it's kind of a blessing and a curse there. Definitely an interesting topic. This seems like one of those things that just keeps coming up again, having experienced basically every Networking Field Day event, almost every one ever. So let's meet our guest, Tom. Hello. Thank you for having me on. I really appreciate it. So, Tom, you, as I said, run Networking Field Day for Gestalt IT, as well as Mobility Field Day and the previous Wireless Field Day. Over that time, you've seen quite a lot of AI and ML applications in your space. And so maybe you can start to just give us a little bit of an overview.
Starting point is 00:02:04 How is ML used in network management these days? It's a fascinating question, because we've essentially been doing network management and network monitoring for a number of years with quite possibly the clunkiest system ever invented: Simple Network Management Protocol, which is not simple and barely manages the network. SNMP is kind of what we had, and it worked because we didn't have any other options. And as we've started becoming more advanced in the technology that we have, and understanding that there's a lot going on under the surface in networking, we've surfaced a lot of this information, and man, drinking from a fire
Starting point is 00:02:45 hose doesn't even begin to describe it. But as we've seen with a lot of other things in IT, the more information that you have available, the harder it is to make decisions about it. And that's when a lot of smart people in the network monitoring and management space said, well, why don't we leverage some of the newer technologies like machine learning to be able to surface insights out of the network? So if you've ever had that whole thing where you look at something and you go, wait, something doesn't look right here. That is effectively what we're using, at least in the beginning, network machine learning for is let's take a look at the stats on these interfaces and say, well, you know, we're starting to see really weird performance, like every third Thursday at 315 in the morning or something, which is a thing that a person could totally
Starting point is 00:03:28 look at if they had an infinite number of Excel spreadsheets, and an infinite amount of time. But with machine learning, it bubbles right up there to the surface. And so that was just kind of reducing the amount of noise that you saw in networking. But the good news is, is that once you have all that data, and you can create actionable things on it, that's where the AI part really starts to come into play. And it's really been pioneered by companies like Mist Systems, which is now a part of Juniper Networks, which has been presenting both at Networking Field Day and Mobility Field Day over the years. It's funny to see how their AI system has kind of pervaded through not just the wireless side of the house, but also through networking and security and
Starting point is 00:04:10 other things. And this is not AI washing, this is not Oh, we're going to stick an AI sticker on the switch. And now suddenly you have an AI enabled network. This is very much using those insights that you gather from that information to make good decisions about the network. I mean, if you ever heard Bob Friday talk, I mean, he'll tell you the story of how he was watching Watson play Jeopardy and how Watson was actually doing a really good job and how he wished that his network could do that. And so a lot of people focus on this idea that the network needs to be able to tell me what's going on.
Starting point is 00:04:46 But I think that it's different than that now. I think it's leveraging AI to allow me to ask questions of the network that I wouldn't normally be able to ask. If I wanted to see all of the switches in my network that have a latency of over 150 milliseconds, that's a real easy query, right? That's basically a search. But I have to know what latency is, why it's bad, what the thresholds are. And maybe 150 milliseconds is like the worst case scenario. What if it's something that's only 80 milliseconds that's
Starting point is 00:05:16 causing an impact on a new application that I've installed? Would I even know that? And so AI is able to kind of infer that when I say things like, why is this application running slow? It has to know what the metrics are for the application. It has to understand what slow really means. And it has to be able to show me the data to go, well, here's the problem, or here's a couple of places where it could be.
Starting point is 00:05:41 And I think that this one's more likely than the other one. And I think that that's where a lot of the development is coming right now is in this ability for us to ask questions of the network and get answers without having to know the exact perfect query to find the information that we need. That's a really interesting take on it. I think that kind of pull model versus a push model.
Starting point is 00:06:02 I mean, I think obviously we'd have to have both. I think there are times where a network management system needs to be able to reach out and slap us and say, hey, there's something going on, pay attention. But I really like the idea of being able to kind of query and go in natural language and maybe from more of a less expert view, right? So you've got maybe even frontline network personnel, frontline help desk personnel who can, instead of having to escalate three times to get an answer to something to
Starting point is 00:06:28 somebody who's on the phone right now, they can literally just turn around and ask the system, hey, you know, is there anything going on with this customer's connection and get an answer back that makes sense? That's super interesting. And I think it kind of boils down to me, this idea that what we're really looking for from, especially on the network monitoring side, is insight, right? It's not, you know, the data is almost useless, what we need is insight. And that insight can come from the machine, seeing that something's going wrong, or something's going to go wrong, and reaching out and telling us, but it also can come from, you know, the right questions being asked, and answered in a really interesting way. And that kind of,
Starting point is 00:07:05 you know, leads to this idea that's becoming more and more popular of just observability versus monitoring, right? Where monitoring is kind of one piece of this, and what we really want is observability. And observability is really about unknown unknowns, right? So monitoring is, okay, I know that these things can break, so we're going to watch them so that we know if they break. So it's, it's known unknowns, right? We know that this is a problem or could be a problem. So we're going to watch that and then we'll find out when it happens. And observability, I think is taking that to the next level, which is unknown unknowns, which, you know, I think, you know, ML can be at the center of there's a, there's a lot of other pieces and parts that have to come together to make that work. But is that, is that kind of
Starting point is 00:07:47 what you're talking about, Tom, is this move towards, you know, insight and more of a pull model and, and, you know, the trend towards observability with, with ML being the engine behind that? Yeah, it is because we have spent so many years, decades at this point, really stuck in this model of monitoring monitoring telling us something is above a threshold. And like, we have to configure those like, you know, one of the things that we talk about a lot is, you know, making sure that you don't have alert fatigue. I remember in my old job at a bar, like I would walk by the board every day, and there were always red lights. And you'd always be like, Well, why is that light red? We don't know. Well, how long has it been red? We don't know. So you just get to the point where it doesn't matter anymore. But with the ability to do more of a proactive observability stance to go out and to say things like a good example is a few years ago, SolarWinds developed a tool that would allow you to do not only a path trace of an application from your location to wherever the
Starting point is 00:08:46 it's hosted, but it would measure the latency along the path. Because, you know, no one ever asks you why is the network down like that, that that's, you know, on or off, it's a binary state. But we get why is the network slow a lot. And when in the old days, when everything was hosted in a private data center, and it was, you know, an on premises solution, there were a lot of things we could do to fix that, right? Well, now with digital transformation in the cloud,
Starting point is 00:09:13 there are problems that we can recognize and not fix. So if we had a system that could kind of maybe do you know, synthetic traffic or some kind of monitoring solution that is not me querying to say, tell me when something is wrong. But going out and saying, Hey, I noticed that the latency on this link is a little higher than it normally should be for this time of day, maybe we need to consider switching to a different route. Or you know, I went ahead and, you know,
Starting point is 00:09:41 dynamically adjusted it to go to a different data point or something like that. And that's helpful, because honestly, it kind of answers that question before it happens so that there, there never is an unknown unknown. Or if there is, like if you've deployed a new application that's more sensitive to certain conditions, that that whole system works before the CEO comes down and starts beating you over the head with a wooden spoon, wondering why everything just went to hell. And it's important to understand that because IT people have a reputation for being firefighters, and we're really, really good at it.
Starting point is 00:10:17 But we really need to be more like fire marshals. We need to be preventing the fires before they happen. And in order to do that, you have to know what's combustible. What should you store next to each other or not? How can you keep ignition sources away? And that's more of that proactive understanding where the problems could be so that you know where to look when the problems start happening. This reminds me of one of the episodes from last season when we were talking about how
Starting point is 00:10:42 AI doesn't just replace workers. It actually elevates them and it lets them do things that they might not have been able to do before. And it's actually rather exciting to think about this. You know, we've, of course, talked about AI being a co-pilot forever. But, you know, the idea that an AI system can allow network administrators to, you know, stop being just network nerds and start being actually part of a proactive solution for IT and for the business is pretty exciting for a lot of them, I know. Because like you said, if all you're doing is crawling around, you know, trying to find a broken cable or trying to find a bad switch, that's not a very glamorous and
Starting point is 00:11:27 rewarding job. But if you are able to sit at the table and say, look, I've spotted this pattern or this trend, or I see that we can improve the whole solution in this way, it really sets them on a different footing. Are network administrators seeing the potential of that or are they seeing this Tom as a threat to their jobs? I think at first it very much was a threat especially for people who are kind of that junior level that are maybe just starting to graduate into a slightly higher grade because a lot of the problems that we have those people work on are very much those investigative go find out why the switch port's broken. I'm going to need you to tweak the configuration on this trunk port kind of things. And they're important. Don't get me wrong.
Starting point is 00:12:11 That's how we learn. But there's also a lot of potential for bad things to happen there. I've taken down more than my fair share of networks because I was like, oh, this won't be a problem. And it was. But I think the smart people are starting to understand when I say smart, I mean, not, you know, super intelligent, genius, mental level people, I'm saying the people who see the vision for what this is going to do, they're not afraid that they're not going to have a job, they're happy that they're not going to have a middling task ridden day of checking switch ports and understanding why the interface error counters are going up. It's, you know, it's it's this, it's the enterprise computer, you can ask it questions,
Starting point is 00:12:51 and it will tell you what's going on. But the computer can't replace the chief engineer, you still need somebody that can say, Okay, what if I planned it better like this? Or what if we changed this parameter or something like that? Now the system can tell you, well, if you change that it will reduce latency over here by you know, 30 milliseconds, but you also have to worry about this route recalculating or something like that. And that the person is still going to be necessary for the system to be able to make a suggestion to make that change and you to go, Hmm, I like that. Let's go ahead and do it. Or I don't think we need to do that right now, but let's look at doing it in the next change window
Starting point is 00:13:28 because you still have to have that judgment call. And that's one of the things that a lot of even senior level engineers that look at AI are saying is, I want the AI to make suggestions, but I don't want the AI to just go off and do things because I still want to have that positive control aspect of saying,
Starting point is 00:13:44 I know that this won't have an impact, but I still want to be the one to push the button. And we'll still need those people. So we're not getting rid of network engineers. What we're getting rid of is menial network engineering jobs that quite honestly, probably should have been automated out of the system a long time ago. Possibly. And I do think that the point you make about, you know, the human in the loop being something that we want and need for a long time is definitely true. I mean, I've just recently talked to a friend of mine who manages a DevOps team for a software development shop. And, you know, even in this world where they are very much on the continuous improvement,
Starting point is 00:14:21 continuous delivery, like CICD pipelines, and they're using all the tools and they're doing all the things, there is a step before code gets pushed to production where someone looks. And so a developer could push a button and all these things happen automatically. And then there's still someone there who's like, you know, an actual check and no code goes to production without somebody laying eyes on it. I don't want to dig into this too far. I do wonder if that's a safety blanket for humans, and then we'll get over that at some point. But I think your point's very well made that the one thing that no artificial intelligence system does today is abductive inference,
Starting point is 00:14:56 right? So machine learning can do deductive inference, can do inductive inference. And the thing that's missing is this abductive inference which um for the uninitiated is basically what we call common sense um the idea to be able to walk outside and see that the ground is wet but there's no clouds in the sky and there is a water tanker down the street and and maybe it's leaking or has has sprayed down the streets right and and you can kind of figure that out through these kind of unrelated facts um and in your previous knowledge but not in a really structured way. So this kind of lateral thinking and leaps that human beings make that we just consider common sense is actually really
Starting point is 00:15:31 complicated and hard for machines to replicate. And so for that reason, I think we'll definitely need people in the loop for a long time. And I do agree that in general, we will be colleagues with the machines more than we will be subservient to them or replaced by them. The one thing I do agree that in general, we will be colleagues with the machines more than we will be subservient to them or replaced by them. The one thing I do worry about, though, is that career path, right? And so I think you really astutely pointed out that a lot of the things that machine learning can remove is these menial tasks that we used to give to the intern or the junior, you know, network engineer, the network technician. And so, but that's how I learned, right? I think that's how a lot of us learned was, you know, kind of starting at a help desk or starting, you know, monkeying
Starting point is 00:16:17 around with configurations or, you know, plugging stuff in together. And I just worry, and this is something that I've worried about, you know, from just a general automation perspective, but now definitely it's amplified by, you know, putting artificial intelligence and machine learning into that automation framework, is if we eliminate all those menial tasks, then how do people learn networking? And even if we can learn networking, are they learning a different thing than what I consider networking, right, through these levels of abstraction? And is that a risk or is that just how the world works? I would argue that you would run into a situation like you do with pilots, is the best way to train a pilot to send them up in a plane and then, you know, cut one of the engines? Or is the best way to train a pilot to train them in a controlled environment, give them the possibility of all the things possibly going wrong so that when they do get into that situation, they're a little bit more trained. I mean, we've been dealing with this for a number of years in networking. It's like, okay, we're going to train
Starting point is 00:17:15 you, you know, in the field, we want you to go out and start looking at these situations. I mean, there's a reason why Juniper has a setting on their devices, that's the default that when you make a change to the system, it's not live, it is evaluated for problems before it's committed to the code. Because too many times you were, you know, for lack of better term, changing the tire on a car while it was driving down the road. If everything goes perfectly, you're not going to have any problems. But if anything goes wrong, you're basically going to lock yourself out of the device, you're going to create a cascade failure or something like that. And I think that what we're going to get away from is people kind of learning by beating their head against a wall,
Starting point is 00:17:59 and getting into more structured scenarios where things can go wrong so that when they're out of that area, they're more comfortable recognizing those situations. The very first thing I ever troubleshot on a network in my professional career was a bridging loop. And it took me five and a half hours to figure out what was wrong with it. Even with other people, they're kind of walking me through the process. And when we finally realized that someone had forgotten and enable spanning tree on these devices, I was able to use that scenario going forward with people to say, you know, something doesn't look right here. But I want you to look at this specific scenario and tell me what you think, so that they never had to experience a bridging loop that
Starting point is 00:18:40 took down, you know, a 300 node network, they could do it maybe in a network that was in a lab or in another situation. And in this, it's the school of hard knocks thing is the best way to train a soldier to shoot at them, or is the best way to train a soldier to put them in realistic scenarios with a degree of safety, so that they understand the scenario as it goes forward, so that when they see it in real life they're much better trained i don't have an answer necessarily but i don't think that ai is going to remove that ability to figure these things out i think that ai is actually going to augment it because it's going
Starting point is 00:19:17 to not only surface the situations that will crop up but it will be able to produce them in an order that is more likely to happen. I mean, I spent years studying for my CCIE lab exam. And the corner cases that you get in a CCIE lab exam are beyond incredible, because they're trying to teach you the rigors of the protocol. And then within a week of me getting my lab done, the first thing I did was go out and troubleshoot a very simple DNS problem. And it's like, well, man, I see this more often than I see that really weird thing that I had to spend a month figuring out how to fix. I would rather the people in my organization, with the help of machine learning and AI know these are the top five things that are going to happen on the network. And this is the top five things that have happened over the last
Starting point is 00:20:00 two years. Be ready for these things and be able to fix them quickly and we're going to have a much smoother network and when that random thing does crop up three years from now yeah it could happen but you've had enough practice to that point so that you're able to devote your knowledge and efforts to fixing it as opposed to getting bogged down with other little things yeah i never really thought about that as far as the, you know, training folks in a flight simulator for networks. And I guess that's obviously another area where artificial intelligence and machine learning could help by, you know, creating realistic scenarios for a person to deal with, right? You know, based on maybe the network they're even training to work
Starting point is 00:20:44 on, right? So if you're going to work at, you know, I don't know, GE, right? You know, based on maybe the network they're even training to work on, right? So if you're going to work at, you know, I don't know, GE, right? They probably got a huge network and your network admin coming in there, it could actually maybe even replay incidents that had happened on that network in a simulator. Have you seen anything like that out there, Tom? Or is that something that you've just, you know, I mean, it's a great idea. I don't know if there's actually products. I've seen some things, right? I mean, you've talked about Juniper a couple of times. They had, they purchased a product that was called Wandel that used to be able to do this with MPLS networks. You could
Starting point is 00:21:15 actually go in and say, okay, well, what if this link cut? And it would, you know, it would follow the protocol rules and figure out and say, okay, this would happen. And what if, you know, network capacity increased by this much in this city and it would kind of redistribute and show you what happened. But I've never seen anything like that used for training. So there is a tool out there, a platform that's actually kind of built around this idea of Forward Networks, which is a company that actually launched at Networking Field Day, was built by some very smart people to essentially create a virtual model of your network and allow you to do things to the virtual model to examine the results. So for example, what if we changed these interface links
Starting point is 00:21:51 to this, or I want to find out which of my interface links are running at slower speeds or things like that. And the idea originally behind it was, you know, do this in a model, not in a live environment, so that if the changes break or something like that, then you're not going to blow anything up. But as they started kind of leveraging this power, that's when they realized, what if we could query against it to get more information? And I think that it's only logical that the next step is to include a significant amount of machine learning and AI to proactively give you answers based on that model. Because, you know, we build labs and labs kind of represent reality, but they don't because the cobbler's kids don't have any shoes. I'm not going to get a brand spanking new switch to throw in a
Starting point is 00:22:40 lab that's never really going to do any production equipment, I'm going to have to make do with what I've got. Whereas if it's a virtual model that behaves 98% the same way the real switch would, then I can have effectively unlimited devices to say, well, what if I created a load balancing pair here? What happens if traffic fails over from this area to that area? I want to model this. And then I want the AI to learn from that modeling. And I want it to come back and go, okay, well, we suggest that you add these things, or you change these routes, or something like that. I want there to be more of that reasoning behind it to say, these are the suggestions that would make your network run faster. And I think coupled with a tool like the one that Forward Networks has, it would give you a huge capability to kind of
Starting point is 00:23:23 proactively suggest things. Because the worst thing in the world for us is to get to a change window when we know we have the ability to make changes to the network and to not have anything to do. Or worse yet, we have something in the pipeline, but we haven't tested it enough to make sure that it's not going to crater everything. What if we get to a situation where, because networking is still very much built around this idea of change windows, because it's a steady state machine. It's not quite to the level of DevOps where we can just push code whenever we want. What if we have the ability to say with confidence, like I'm 95% confident that this change isn't going to blow anything up. Let's go ahead and implement it. And that to me is a hugely
Starting point is 00:23:56 powerful thing that will allow us to be more proactive in the maintenance that we do on networks. Because right now it's, oh, well, that firewall's down. Well, we can't replace it until the change windows back up because we're going to need to get the pairs back and running. You know, we've already trialed this a hundred times in a simulation and we know that it's going to work. So we're ready to go and the team's ready to get it done. Tom, I heard you mention Juniper and Forward Networks here. Are there any other companies that you can think of just off the top of your head that are doing really clever things with AI and networking? So they're starting to pick up a lot. Aruba Networks, which are sorry, Aruba, which is part of Hewlett Packard Enterprise
Starting point is 00:24:33 is really starting to jump on this AI train as well. In fact, if you look at some of the stuff that they've presented recently, they are heavily leveraging AI for their edge computing initiatives. With the acquisition of Silver Peak, they have a very solid enterprise SD-WAN play now. They obviously have been collecting a lot of statistics from the wireless side of the house. In fact, one of their acquisitions from a couple of years ago, Rasa Networks, is fascinating because it takes statistics from across their customer base and puts them into a database and then gives you suggested best practices based on what other people are doing. And you've probably always
Starting point is 00:25:12 wondered to yourself, well, how can I know that this is going to work? Because maybe my corner case is different than everybody else's corner case. But I believe they rebranded it as Net Insights. Net Insights is like, yeah, you're actually not that uncommon. 20% of the people out there do this thing and they have it set to this, which is like one step up from the middle and it works great for them. And so having that capability and, you know, extending it forward. So I mentioned Silver Peak, they're an SD-WAN company, but there are a lot of other SD-WAN companies out there that are kind of on the forefront of the AI trend. And part of it came from the fact that they were very focused on application acceleration. And then a lot of them. All of these companies have presented at Field Day in the past, and we've seen that trend from kind of basic connectivity tying branch offices together
Starting point is 00:26:11 to being more of an AI-driven application acceleration engine as things move to the cloud or became less important about making sure that the office in Poughkeepsie could talk to the office in Bakersfield. Maybe not throwing stones at anybody, but I imagine also that there's a bit of AI washing going on in the networking space where people are just saying this is AI when it's clearly not. I assume you've seen that too. Yes, unfortunately I have. And whoever's selling these AI stickers to people, we need to find them and stamp them out because it's quickly becoming difficult to separate the wheat from the chaff. Not because you don't understand it, because you can take one look at it and go, yeah,
Starting point is 00:26:55 this is just really advanced linear regression. This is not actual AI or ML. A lot of it comes down to being able to talk to a company and understand if they actually have data scientists and people working on their staff or not. And I will admit, I walked into a presentation from Extreme Networks a couple of years ago and they had ML on their agenda. And I was like, yeah, I haven't heard really a whole lot about them. I don't know how it's going to work. And their presenter popped up a lot of math with a lot of Greek letters in it, which obviously means that he knows what he's talking about. And the entire presentation, which you can find on techfieldday.com is very ML heavy, very AI focused. They get it. They understand it. So if anybody is
Starting point is 00:27:41 trying to sell you a bill of goods to say, oh, well, we do AI, ask to see proof. Don't just accept that they do a thing that looks like AI. They should be able to tell you things like algorithms. They should be able to tell you a lot of the data points that they collect. They should be able to show you that they did the research. They did the homework as opposed to, well, we put AI on it because AI is what people want to buy. Yeah, absolutely. I think that honestly, the telltale is if they've got any kind of machine learning infrastructure at all. And if they don't, well, then it's probably not an AI system. Of
Starting point is 00:28:18 course, we know that there is AI that's not ML. But if a company isn't using ML, then I'm really start wondering exactly what kind of AI is this anyway. That being said, of course, I'd love to see more expert system software out there, but we'll see. So Tom, we've now reached the part of the podcast where we ask you three questions. This tradition started in season two, and we're now carrying it through to season three with a twist. Note to our listeners, this guest has not been prepared for these questions ahead of time, so we're going to get their off-the-cuff answers right now. In this season, we're changing things up a bit. I'm going to ask a question, Chris is going to
Starting point is 00:28:55 ask a question, and the third question actually comes from a previous guest here on the podcast, so this will be a lot of fun, I hope. Tom, let's kick off with this. You did mention MLOps. Do you think that MLOps is a lasting trend or just a step on the way for ML and DevOps to become just the normal way of operating things and networking? I think it's a stepping stone because I don't think that there's enough in ml ops to actually make a career path out of it. I think it's a function of a larger platform. You know, we've heard this time and time again, this is a solution in search of a problem. I think ml ops is that solution. But we need to define the problem space a little bit better and build more around it. So I think that
Starting point is 00:29:43 that people who are kind of sinking their farm into ML Ops now need to be looking for more farmland because they're going to run out of steam really fast. So Tom, we talked today about the idea that ML and AI in the networking space is going to augment people's jobs and may not replace them. But are there any jobs that will be completely eliminated by AI in the next five years? I think that the jobs that you're going to see that are going to be eliminated are people who have repetitive menial tasks as their entire job description. You know, think back to office space. My job is to take the plans from the customer and give them to the engineers because I have people skills. Well, that's an email. And so people whose job it is, is to make
Starting point is 00:30:31 sure that the lights are on and that, you know, all the interface counters are where they're supposed to be. They're going to go away. And right or wrong, that probably kind of like the MLOps question, that's not something that you should have been able to build a job around. That should have been a stepping stone to something greater. So I think what you'll see is over the next five years, those people will eventually retire or move to different areas and their job just won't be backfilled. It will be replaced with a shell script, basically. And now, as promised, we're going to use a question from a previous guest. And maybe, Tom, you can give us a question as well for a future guest.
Starting point is 00:31:07 The following question is brought to us by Tony Paikaday, Senior Director of AI Systems at NVIDIA. Tony, take it away. Hi, I'm Tony Paikaday, Senior Director of AI Systems at NVIDIA. And this is my question. Can AI ever teach us how to be more human? That's a really good question, Tony. And I would counter with a slightly philosophical statement. What does it mean to be human? Now, we learn about what it means to have morality and ethics and laws and empathy as we live our lives from the people that we interact with. And for some people, they learn one more than the other. In some cases, they don't
Starting point is 00:31:55 learn enough of it. And I would say that an AI is the storehouse of all knowledge that we have in the world. But does it know how to give us what we need at the right time, I'm going to teach this person to be more empathetic because of a situation that's come up. I think if we can train an AI to do that, proactively, that we can learn to be more well rounded human beings. But obviously, that's a that's a task for a long time in the future, when we can give an AI the capability to learn these esoteric concepts, that quite honestly, can't be bounded by a machine right now, that's going to take a next order level of thinking. So if you want to go all the way back to the Hitchhiker's Guide to the Galaxy, that's not asking for the answer to life, the universe and everything.
Starting point is 00:32:48 That's knowing the question. And we're not there yet. Well, thank you very much, Tony and Tom. That was a really interesting question and interesting answer. We're looking forward to hearing what your question is going to be for our future guests, Tom. And if our listeners want to be part of this, please let us know. You can send an email to host at utilizing-ai.com and we'll record your question for a future guest. Before we close, Tom, where can we connect with you and learn more and follow your thoughts on AI and other enterprise topics? It's a great place to ask Stephen, because I have a lot of areas that I cover things in. Of course, my blog is networkingnerd.net. My Twitter handle is networkingnerd. And if you want to follow some of the other thoughts that I have, gestaltit.com is
Starting point is 00:33:37 a great place to see some of the latest technology briefings that I've taken and some of my insights into the way that things are operating. If you check any of those places, you'll probably see what I'm up to and hear about some of the cool things that I'm doing. And that will give you some insight into kind of where my thinking is right now. How about you, Chris? What's new with you? Yeah, everything new can be found at chrisgrundeman.com. A couple of things to maybe look at through GigaOM, I published a report on network observability recently that I think covers some of the topics that we've talked
Starting point is 00:34:10 about here. And we're working on a new one on net DevOps, which would be relevant as well. So maybe check out gigaom.com for those reports too. Excellent. Thanks a lot. And as for me, I'm working on a lot of Tech Field Day events, of course, but we've actually brought a lot of this stuff to the Gestalt IT Rundown, which is published every Wednesday. So just go to gestaltit.com or look in your podcast app and you'll find the Wednesday news rundown featuring me and Tom Hollingsworth, surprisingly enough, talking about the week's news. So thank you everyone for joining us for the Utilizing AI podcast. If you enjoyed this discussion, it really does help if you subscribe and give the show a review in iTunes. And please do, of course, share the show with your friends. This podcast is brought to you by gestaltit.com, your home for IT coverage from across the enterprise. For show notes and more episodes, go to utilizing-ai.com or follow us on Twitter at utilizing underscore AI. Thanks for joining us and we'll see you next week.
