Utilizing Tech - Season 7: AI Data Infrastructure Presented by Solidigm - 09x04: Bringing Agentic AI to Production with Articul8

Episode Date: October 20, 2025

Most generative AI work has been fairly general purpose to date, but it is far more effective to develop expert models focused on specific industries. This episode of Utilizing Tech features Arun Subramaniyan, founder and CEO of Articul8 AI, in conversation with Guy Currier and Stephen Foskett. According to Arun, an agent is a model with a set of tools plus data that has agency to be called and to interact with other agents. If these agents are domain-specific, they can perform tasks more effectively than general-purpose agents at certain points in the chain. Agentic AI is able to accomplish tasks previously thought impossible, and these systems keep improving, but people remain responsible for using and managing them. Costs can rise significantly if AI is used improperly, yet it is possible to deploy it profitably. Companies that can combine domain expertise with a novel AI-powered application are breaking free from the pack.

Guest: Arun Subramaniyan, CEO and Founder of Articul8 AI

Hosts: Stephen Foskett, President of the Tech Field Day Business Unit and Organizer of the Tech Field Day Event Series; Frederic Van Haren, Founder and CTO of HighFens, Inc.; Guy Currier, Chief Analyst at Visible Impact, The Futurum Group.

For more episodes of Utilizing Tech, head to the dedicated website and follow the show on X/Twitter, on Bluesky, and on Mastodon.

Transcript
Starting point is 00:00:00 Most generative AI work has been fairly general purpose to date, but it's far more effective to develop expert models focused on specific industries. This episode of Utilizing Tech features Arun Subramaniyan, founder and CEO of Articul8, in conversation with Guy Currier and myself as we talk about specific models and building agentic AI applications in production. Welcome to Utilizing Tech, the podcast about emerging technology from Tech Field Day, part of the Futurum Group. This season focuses on practical applications of agentic AI and other related innovations in artificial intelligence. I'm your host, Stephen Foskett, organizer of the Tech Field Day event series, including AI Field Day, which is next week.
Starting point is 00:00:44 Joining me this week as my co-host is Mr. Guy Currier. Guy, welcome to the show. Thanks, Stephen. It's great to be back. So, Guy, it's been fun having you here. You bring a different perspective. Frederic and I have done quite a few seasons and episodes. This episode, we're going to be really zooming in on something that's near and dear to the topic of the episode, or rather the season, which is practical applications for specific industry verticals and basically making sure that these things actually get into production. Yeah, into production. So that is where the proverbial rubber hits the road in application work. And I think AI development, AI capabilities have accelerated so rapidly.
Starting point is 00:01:39 In previous, you know, cycles, transformative cycles, you think, you know, we sort of measured in years as the latest one, I guess, being cloud is ramped up. And, you know, people got used to it, started doing useful things. Really, in AI, it's months because arguably agentic AI is a sub-transformation within the overall transformation. The point being that there is 90% experimentation and 10% production, I think. I'm probably even overstating it. It might be 99 and 1%. And some of that is due to all the experimentation and innovation going on in enterprises and industries all over. What can you use this for?
Starting point is 00:02:23 Part of it is the sub-transformation within the transformation of AI. All of that represents change, new paradigms in processing, data ingest, data storage, and distribution, orchestration, all that other sort of stuff. The classic resource elements, should have mentioned networking, security, everything else, the classic resource needs for applications going back since forever. but now whole new patterns in a rapidly changing technology landscape that is AI, and in this case, Agentic AI. So that is a major issue in taking something to production. And what is taking it into production means? It means you need a certain level of reliability, security, availability,
Starting point is 00:03:13 regional coverage, all of which are really important in industry, not to mention the enterprise. So I love having this topic around agentic AI. which is not the art of the possible stuff anymore as much as let's get something out of it and figure out what's getting in the way of it and do something about that. Yeah, and one of the things that I think is also holding it back
Starting point is 00:03:38 is that a lot of the AI solutions out there are really generic. It's sort of like, you know, we made this foundational model, you know, it's a chat bot or it's a, you know, speech recognition or whatever it is, and it's useful for anything and everything. But that's not necessarily, the right approach. And that came out of a conversation that I had with our guest today, Arun Subramanian from Articulate. Arun and I were speaking about what they're doing, and he was quick to point out that sometimes the best way to bring something to market is to develop a focused
Starting point is 00:04:11 solution for that specific industry. So Arun, welcome to the show. Stephen, Gary, thank you so much for having me. Fantastic to be here. As you mentioned, I'm Arun Subramarian, founder and CEO of Articulate. We are building a domain-specific Gen.A. Platform focused on industries that typically have a very high barrier to entry, as you said, where genetic solutions, first of all, don't work, and second, can be pretty dangerous to be implemented there. I give you several examples of those as well. But it's not just about solutions, but also having platforms that cut across use cases that are specific to an industry, but then they're widely applicable across the industry.
Starting point is 00:04:52 So you're not developing sort of hyper-focused solutions so much as industry-specific solutions. And as you said, I mean, I guess why bother training a, I don't know, a liquid flow analysis AI, how to make haikus, or how to cook, stake, you know, I guess it's, you know, maybe it makes more sense to have something that's a lot more specific. Is that what you mean? Yes. So like think of, say, fields like, say, manufacturing or aerospace or energy or even
Starting point is 00:05:30 places like, say, cybersecurity. You may be able to understand some of those concepts if you know language. Like, of course, at the highest level you can understand that. But then in order to actually go implement anything there, you need experts. And anytime you need human experts, you should also think about the notion of would you hire somebody whose only knowledge is what they learn from the Internet? Or do they actually have some hands-on training? Have they gone to school for doing something? Have they actually gotten battle tested in that particular feat?
Starting point is 00:06:04 The answer is pretty obvious. Most places you would want somebody who has the context who's an expert in the field. Us going to these general purpose models, no matter how big they are, or how much how proficient they seem to be in a lot of different aspects is like going to somebody who's only learned from the internet and is able
Starting point is 00:06:25 to only understand what is already on the internet. So that's the comparison bar that probably nobody would want to go for a specialist who considers themselves a specialist from internet knowledge only. What's really interesting
Starting point is 00:06:41 when you apply that to agentic AI is that agendic AI, I mean, it can be based on just a single, the operations of a single model inference, but as a general rule, the idea is for agentic AI to interact with numerous resources and other AI and even grow in that regard and create ad hoc connections, ad hoc connections, data, ad hoc connections, to other agents. And so that kind of specificity is not necessary, let's say, for every element of a workflow.
Starting point is 00:07:24 Or that specificity can change over the course of a workflow. I'm just going to focus on workflows because I think that's a really general purpose concept across the genetic AI. So I think that we talk a lot about tuning or context, whether for foundational models or for, or for more tuned or more specific models. Inogenic AI, the problems associated with AI that's not appropriately focused, I think would just multiply.
Starting point is 00:07:56 So I think that makes the application of the technique you're talking about are even more important. Absolutely. So let me perhaps define what I mean by an agent just so that we can level it because there are slightly different definitions of that. Really, for me, an agent is a combination of a model. I'm making it even generic. It's a model not necessarily in LLM,
Starting point is 00:08:21 combined with a set of tools and with a set of optional data sets. Let's put it that way. Whether the data is something that you go out because of the internet to go do a web search or it might be your own datasets in your enterprise. And the tools are what are important because those not only specify what an LLM or a model cannot do,
Starting point is 00:08:44 but you don't want the model to do even if it doesn't. An example would be something as simple as find the average of a column of data. Like some LLMs can do it, some LLMs can do it better than others, but why? Like it's things like that that you would rather do with tools that are battle tested, that you know will get you the right. answer every single time. And then combining that with a model actually gets you to do something. And the reason why it's an agent is it actually has agency, meaning it can call things, it can decide, and most importantly, act on your behalf. That's at least my simplistic
Starting point is 00:09:23 definition of what an agent is. And then you make the agents work with each other or work with other tools around them, really what makes it into an agentic system. And if you think about it, That's pretty much how we solve all our problems. Every system is a human-driven system that has multiple people who have agency. Now you're augmenting that with systems that also have agency. Yeah, and I think that, well, I agree with your definition. I think that it's important that people don't, I guess, too tightly constrain what they think this is. But as you say, I think the important aspect is the fact that,
Starting point is 00:10:06 they have, in a way, specific uses for each agent in the chain and that it's a chain of agents. And so they know how to look up other agents that can do things for them and they know what the capabilities of those are and how to call them and how to specify things to them and that those agents then can take the output and kind of go to the next one in the chain. That's what you're saying? that's exactly what I'm saying and what makes these things domain specific are a combination
Starting point is 00:10:39 right so you could for example make the model a domain specific model meaning a model that understands energy really well and likes energy production and transmission can become an energy specific agent which has access to tools that are both general as well
Starting point is 00:10:55 as specific to that particular domain for example if you have a tool that predicts power demand or how do you optimize the grid? And you mix that with a model that is domain-specific. That becomes a highly domain-specific agent. Look at us. We're talking again about sort of these concepts
Starting point is 00:11:18 because, I mean, that's the nature of this market right now. I wonder, like, if we focus on bringing agents, agenic AI into production, though, can you maybe around from your perspective put into light the perspective that I provided earlier, which is how critical it is to, or what kind of challenges, I should say, around the complexity of these systems, what challenges there are to putting agentic AI into production? I can see integration and connectivity being part of. I can see data availability being part of it.
Starting point is 00:12:05 I can see a lot of things, but what are you seeing? I mean, the first step that you need to beat is security. So that's really why people get to agentic systems because if a model could give you an answer, you would have to stop there. That itself is reasonably complex because you need to deploy GPUs. Your infrastructure becomes a little bit more complex. Then you have to combine models with agents, like models with tools, so that becomes an agent, so you have to manage tools.
Starting point is 00:12:30 You have to manage your models. Now you have a group of agents working together. So you have to manage a group of things that themselves need management. So from a compute system perspective, it's a reasonably complicated environment. Like there's a few systems out there that give you the ability to manage agents, but they're all rudimentary today. Nothing at enterprise scale, nothing that meets your enterprise quality standards. But even if you did, the real challenges that you face are things of,
Starting point is 00:13:00 scale, meaning if just imagine a company with 10,000 people. If you let agentic systems lose, you're talking about hundreds of thousands to millions of instances of agents running daily. That's not a small amount of scale. And some of it might be doing silly things like managing email systems. Some of them might be actually doing production quality systems and things that are critical. So you will have to be able to manage that. But even if you did, say, how do you manage the fact that this is reproducible? How do you manage the fact that this needs to be auditable? How do you manage the fact that you have security at the data level?
Starting point is 00:13:42 You have security at an application level. But here, the question is, just like how, I'll make a tongue-in-cheek statement, just like when there is no spoon, here there is no apps. Like the app becomes available on the fly. It instantiates itself when you need it. for whatever you need to do. If you want to keep it along, you keep it along as an app.
Starting point is 00:14:05 Otherwise, it disappears. So, like, when you think about security, you had data security, you have application security, you had enterprise security. Now, what if your applications themselves are refermetal? So you have to think about security very differently.
Starting point is 00:14:20 So the same way when people went from on-prem to cloud, there was a massive transition of security protocols and then shared security practices and all of that stuff. the security practice around agentic systems also have to evolve. So I assume that you have a suggestion, a proposal, on how to handle this problem. Let's cut right into it. How do you think these issues should be tackled?
Starting point is 00:14:48 So first and foremost, I'm an optimist, right? So all of these problems tell me that there are so many things to go invent and so many things to go solve. That's one. But all of these things start because there is a really, use case and there is a real value story at the end of it. Enterprises can legitimately do things today that they could not even dream about six months ago. So that is the value that we need to cross. Otherwise, there's no point in putting all these additional costs. So that's number one. But the second thing that we come across, across every enterprise is the traditional question
Starting point is 00:15:21 of build versus buy. In my opinion, we no longer live in the world of build versus buy. We live in a world of build faster or build slower. There is no build versus buy because first and foremost, everybody, not just developers, every single person and every single company is now potentially possibly becoming a developer. Like you can ask something in natural language and it builds you an app. Now it may not be production quality, it might be flaky, but if the app lives only for two hours, do you really care? because you were able to build it,
Starting point is 00:15:58 it was useful for you when you moved on. And if you're living in that kind of world, you want to be able to enable everybody to build faster. But at the same time, you need to put guardrails, but you also need people to be able to build things that are reliable. And so it's the build slower versus build faster world that I'm much more interested in thinking about. And the build faster is really,
Starting point is 00:16:21 whether you partner with somebody like articulate or you partner with you go to an AWS or you, work with any of the other vendors, building with AI tools and AI vendors should be a no-brain. And from a cost perspective, I think that's easily a three to five to ten times cheaper exercise when you think about a total cost of ownership perspective and time to market perspective, right?
Starting point is 00:16:47 Because time today is money. And if you can get out there with your solutions, grounded in your own enterprises data, your own enterprises know-how, added with AI, that's really what is going to differentiate your business to somebody else's. And that's what we call hyper-personalization, and that's something that we're seeing again and again play out in many industries. And I'll give you one more example and stop answering the,
Starting point is 00:17:15 if you think about AI enablement across enterprises, today there might be a gradation. Some companies are ahead. Some companies are behind. They are just getting started. But AI for productivity, in my opinion, is going to become table stakes. Just like how email is table stakes today, having internet or an intranet, I mean, maybe I'll date myself, but when I went into the workforce, having an intranet used to be an advantage. Like no company today would consider that an advantage.
Starting point is 00:17:48 Same way, having access to AI tools are becoming remarkably similar. whether you are with Google or whether you're with Anthropic or you're with Open AI or Microsoft, your capabilities for productivity are more or less going to be the same. So if that is the case, the tide has risen. How do you stand out? You have to hyper-personalize. And to do that is really where you're building faster with the capabilities and partners that can bring to bear. I would say solutions and platforms that are domain-specific in your own domain.
Starting point is 00:18:23 I'll give you some examples as well, but I'll pause to see if you have any questions or comment. It sounds like that puts more of the production of readiness burden on the platform, though, because if you're saying the application is ephemeral, that means the stack is ephemeral, in a sense, at least the upper part of the stack. So how do you ensure the question, that comes to mind then is how you ensure that the production enables the platform enables
Starting point is 00:19:01 production readiness for an app that lasts two hours, two days, or two years that scales or doesn't scale that's very individualistic or more, let's say more broadly individualistic, you know, good for a department and organization and industry. That puts more pressure on the platform because the need for production readiness exists however long. Actually, it also lowers the barriers for a requirement or lowers the, not barriers, lowers the requirement level, like you said, depending on the nature of the app. That's a pretty big change. It is a big change, right?
Starting point is 00:19:40 But then think of all of the changes we've had going all the way back to the printing process. So, like, it's pretty much from an information standpoint. It's not that different, even though it may feel very different for us. So think of, it's much faster. But the unlock of capabilities that were not possible before is very, very similar, right? That's one. But the responsibility, as you put it, is not just with the platform. The responsibility is now shared.
Starting point is 00:20:13 And it's even more shared between the creators and the providers than it used to be before. because before you could buy something off the shelf and then say you will own the application stack and then you will build the application on top. To a large extent, the stack is not changing. Like I'll give you a specific example. You're building an application with React, for example. You have a bunch of databases behind that you have to go connect your data with.
Starting point is 00:20:39 Maybe you will have a data warehouse. The difference is taking two years to standardize your data, having your data engineers and data scientists bring you materialized views and your business analysts and your application developers working together with your materialized views and data scientists to come up with an application that goes through review cycles and tightening that to having your first version of your application with all the things I mentioned in your first week. Is it production quality? No.
Starting point is 00:21:14 Is it something that you can put out there to test internally? Absolutely yes. I don't know if you can say, though, that any longer that it's not production quality because production quality refers to adherence to a set of specifications. And, yeah, sure, two years ago even, as recently as two years ago, those requirements could be really quite strict. Now, I feel like it's really the scope of the application, the scope of an agentic AI, a use of agentic AI, can reduce those requirements a good deal, potentially, even in a highly regulated enterprise environment.
Starting point is 00:21:53 No? Am I going too far? No, not at all. Because before, if you had to spend all of the time and resources to build an application, your standards were high, and you wanted to make sure that they actually lasted. So the return on your investment you're making is really what the calculation is.
Starting point is 00:22:09 Today, if you're taking two weeks to do the whole thing, and if it lasts only for two months, you got more than the bankway buck. right and that's really where the delta comes in however the production readiness and getting to say quality of use is important because the other side of the whole thing is what we have to acknowledge which is AI slop like how many times you read an email and you go huh that was generated like it's just starting to to get on everybody's nerves to a point where you're going to get to a state where you not only want productivity you actually want to
Starting point is 00:22:45 quality. I'll give you my own example. Before, if I had to write a one-pageer or a two-pager for my team, I would have taken half an hour or one hour. I'd have researched, I'd have written something up, I go ahead and then. Today, I still take an hour. Sometimes the hour turns in two hours.
Starting point is 00:23:02 Not because I don't use AI. I actually use more than one-two, all over the place. And my research goes much deeper. My analysis of how this piece of writing is going to get absorbed is much, much deeper because I had to guess before
Starting point is 00:23:17 now I can actually do what-if analysis. And sometimes, depending on the time of the day, my what-if analysis gets out of hand. And like one hour used to, what it used to take, ends up taking two hours.
Starting point is 00:23:29 However, the quality of the content I produce is significantly more. Right? And there's not a word that I would put on a piece of paper with my name on it that I haven't read and agreed to. Right? So that's a quality level that individuals have to hold themselves accountable for,
Starting point is 00:23:46 whether you're writing a piece of paper, you're writing code, you are generating a PowerPoint, that needs to come through. And I think all of us will hold ourselves to that level of quality. At least I hope so, because like nobody wants to read an email that somebody didn't even write.
Starting point is 00:24:04 Right? And those are kinds of things that as a society, I think we have to get better. But even as enterprises, everybody is going to demand more quality. And that's really where the rubber really hits the road, because if you look at the kinds of applications we'd solve, it is not the give me 70% or 80% or even 90% accuracy. Our customers are, like for example, I'll name some of our public customers like Franklin Templeton. Their accuracy starting point was 92%.
Starting point is 00:24:33 And understanding tables above a 95% accuracy level was their criteria to meet. So those are not kinds of things that you would do if you didn't have a model that understands financial records that understands tables these are what I would call the non-cool aspects of putting something into production but that's really what moves the needle in the top last 10% or even the last month.
Starting point is 00:25:00 You know, I've been having a lot of conversations with companies in the agentic space about exactly what you're describing here and one of the things that was articulated if you'll forgive me for the pun to me about that was, the sort of compounding nature of AI errors. Effectively, if you have a chain of agents,
Starting point is 00:25:18 and if each of those agents are 90% accurate, by the time you get to the bottom of that chain, you could be much, much, much less accurate. You're less than 10% accurate at that point. If you change just six of those agents, each with 90% accuracy, your final accuracy is less than 10%. Yeah, and they also pointed out
Starting point is 00:25:37 that this sort of paradox that even as systems are getting more efficient and hardware is improving and the models are improving and we have distilled models and tiny models and tuned models. We're using more and more and more tokens. We're doing more and more processing because of these agents. Because once again, if you have a dozen different elements that are all talking one to the next to the next to the next to the next, by the time you get to the end of that chain, you have used way more than you might have if you had one big super AI thing. I'm not going to say general intelligence. If you had a big model, a bunch of small ones. So how do you answer that problem, the cascading errors and also the sort of
Starting point is 00:26:28 compounding effort required? So two ways to answer that, right? So one is to make sure that every response you do is grounded on actual data. That has to be like a non-negotiable condition. And if you're answering something that does not have a support with data, the user has to know at that point in time. It's a decision that they have to consciously make. The second one is after you've made a prediction, by the way, like this is something that I think even the practitioners of AI have forgotten, that every prediction is uncertain. Every prediction, has an error bar around it. So we need to know
Starting point is 00:27:08 what is the relevance of that prediction. And I'm using the word prediction very carefully here because all the outputs, just because it comes from an application or an agent, we have forgotten that is actually coming from a model, which means it's just a prediction. It could be just as wrong as any other prediction. And having the ability to evaluate
Starting point is 00:27:31 how good or bad that prediction is at every step. helps you. And you also need to have an overall evaluation that's independent of the systems that are actually causing the prediction to happen in the first place. It's those things where it's not just the amount of tokens that are getting generated. You need to have efficient compute to judge the tokens that are getting computed as well. And so, for example, if you take our platform, every prediction we make comes with a relevant score. Now, we cannot make an accuracy score, because we don't know the truth from the falsities. All we can say is, here is your data set, here is the grounding on the data set,
Starting point is 00:28:14 and given your data set and given your question, is this relevant or not? The accuracy question is the next question that has to come from, like, say, validation data cells. That also we provide, if you're talking about models. But having the ability to know that I'm getting this answer, and this answer has this relevant score, even from the system that is actually doing the analysis, is important because many times we forget that the tone of the response might suggest that it's very confident.
Starting point is 00:28:46 But actually, the prediction might be highly not confident. Also, the nature of generative AI in particular to be confident because, or to appear confident, because that's the training point. That's the training value point. You challenge it, and then it says, it seems to agree to everything you say. The number of times
Starting point is 00:29:06 I get told that I'm such a deep thinker and I've found the mistake gets on my nerves at least. Well, I'm actually the deep thinker, not you. I don't know what you guys are talking about. I'm the deep thinker, you know? Oh, goodness. Well, so I guess what's the prognosis for all this?
Starting point is 00:29:32 You know, what is your prescription, Dr. Arun, for companies that really want to deploy a system that is productive and accurate and reliable? What should they do? Apart from, I don't know, talk to articulate, but what else should they do? So the first thing I would say is please be cautiously optimistic. And it's important to be optimistic because these systems can legitimately do things that are, impossible even six months ago. So questioning whether this is real or not is really past us, it's behind us. These systems can actually help, irrespect of which platform you use. So that's number one. But you also need to be cautious because these are very, very early days. The tech is
Starting point is 00:30:20 evolving super fast. The costs are changing significantly. And the caution really needs to be just like how you would drive an autonomous car. It's still level three, meaning you are still responsible for whatever happens in the car, you are still responsible for, like, any kind of mishaps, meaning you need to still pay attention, but not using it would be a significant handicap for you. That's number one. The second thing is, if you're not careful,
Starting point is 00:30:53 costs can significantly go out of hand, but if you're careful, your actual total cost would be much, much lower. And you don't necessarily have to be a significant expert to go do that. Just need to be able to partner smartly with the right partners to do that. That's the second one I would leave people with. And the third one is you are going to see companies break out. And you're already seeing companies breaking out. Like we work with some of the thought leaders in the world.
Starting point is 00:31:25 They have thought through the problems. They've hit their walls. They've hit their like, say, boundaries. gotten help and gotten past them, and you can very clearly see them breaking away. And the breakaway is almost always when they combine their own domain expertise with an application that nobody in their own domain could do.
Starting point is 00:31:44 So the top-of-the-line use cases versus bottom-of-the-line use cases, right? My opinion, every enterprise is going to get more efficient. That's really bottom-of-the-line. If you want to stand out, you have to go after use cases that increase your top line. And if you do, you are going to significantly break away from your competition. And you've seen that in many industries. It's a fascinating discussion and eye-opening in the sense of what production means,
Starting point is 00:32:14 what an application is as it relates to what an agent is. So I'm going to have to get a little extra time afterwards to think about all of this. I think that's a great summary because it shows that we, despite the sort of naysayers about AI, there are practical applications. There are ways that companies can leverage this technology. And as you say, ways that these companies can break away from the pack.
Starting point is 00:32:39 So before we go, tell us a little bit more, Arun, where can we learn more about what you're doing? And I guess I'll start by saying, hey, how about tuning in at AI Field Day next week when y'all are presenting? Absolutely, Steve. I look forward to seeing you there. next Thursday and that's a fantastic place to actually look at all the different actual practical
Starting point is 00:33:03 applications. We have some exciting demos there as well. And we just had a huge refresher for website, lots of new information, lots of new case studies. We focused on what people can actually do today to improve their enterprise focus and sharpen their skill sets. So please check that out as well, and please hit us up on any of the social channels. Love to engage. And Guy, how about yourself? Well, one place you'll find me is definitely at online watching and commenting on AI Field Day next week, everyone. Looking forward to seeing you there. And also, actually, I'm a Delegate at Cloud Field Day, which is this week. So you'll see me on screen and in the socials there. The socials,
Starting point is 00:33:55 being LinkedIn.com slash in slash guy courier. You can follow me there. Blue sky at guycureureure. dot bsky. dot social. And as always, you can see my work
Starting point is 00:34:07 and see me at Futuramgroup.com. Thanks, Guy. And as for me, yes, I will be virtually joining you with Cloud Field Day and joining in person for AI Field Day. So sort of the opposite of you.
Starting point is 00:34:21 And of course, I would encourage you all to tune in to AI Field Day as well, we're going to be launching a brand new AI podcast for the Futurum Group. So that's going to happen on Thursday with a special live session. So thank you very much for joining us
Starting point is 00:34:37 and all of you listening. Thank you for listening to this episode of the Utilizing Tech podcast series. You'll find this podcast in your favorite applications as well as on our dedicated website, UtilizingTech.com. You can also, as Guy mentioned, find us on socials, LinkedIn,
Starting point is 00:34:53 X-Twitter, Blue Sky, Mastodon at Utilizing Tech. Thanks for listening, and we will catch you next week.
