Utilizing AI 2x12: Balancing Data Security and Data Processing with Arti Raman of Titaniam

Episode Date: March 23, 2021

AI and analytics need access to massive volumes of data, but we are constantly reminded of the importance of securing that data. How can data be protected at rest and in flight while still enabling access? That's what Titaniam is enabling, and this episode of Utilizing AI features CEO Arti Raman, who tells us how they are able to provide access to data without leaving it wide open. They provide granular access according to the needs of the application, enabling access for processing on demand. This approach also protects data in use by researchers and developers, since they cannot access the clear-text data even while their systems are processing it. This has practical applications in medicine and when dealing with personally identifiable information (PII) in the face of GDPR and CCPA. Guests and Hosts: Arti Raman is CEO and Founder of Titaniam. Connect with Arti on LinkedIn and learn more on Twitter at @TitaniamLabs. Chris Grundemann is a Gigaom Analyst and VP of Client Success at Myriad360. Connect with Chris at ChrisGrundemann.com and on Twitter at @ChrisGrundemann. Stephen Foskett is Publisher of Gestalt IT and Organizer of Tech Field Day. Find Stephen's writing at GestaltIT.com and on Twitter at @SFoskett. Date: 3/23/2021 Tags: @SFoskett, @ChrisGrundemann, @TitaniamLabs

Transcript
Starting point is 00:00:00 Welcome to Utilizing AI, the podcast about enterprise applications for machine learning, deep learning, and other artificial intelligence topics. Each episode brings in experts in enterprise infrastructure to discuss applications of AI in today's data center. Today, we're discussing the conflicting requirements to secure data, but also have data available for processing. First, let's meet our guest, Arti Raman. Hi, I'm Arti Raman. I am the founder and CEO of Titaniam. We are a data protection company that focuses exactly on that trade-off that Stephen just mentioned. And you can find us at titaniamlabs.com.
Starting point is 00:00:46 And I'm Chris Grundemann. In addition to being the co-host here today, I am also a consultant, content creator, coach, and mentor. You can learn more at chrisgrundemann.com. And I'm Stephen Foskett, organizer of Tech Field Day and publisher of Gestalt IT. You can find me on Twitter at sfoskett. So many people might know
Starting point is 00:01:06 that I have been in enterprise storage for most of my professional career. And one of the things that's always vexed us in storage is the fact that encryption tends to screw up everything. Essentially, as soon as you encrypt data, you can't compress it, you can't process it, you can't access it, you can't deduplicate it, you can't do anything with it. It becomes sort of like an inscrutable blob of bits. And this is a real issue now that we are in this world of analytics and analysis of metadata and trying to make use of data instead of just storing it.
Starting point is 00:01:47 So that's why when I talked to Arthi on the other day, learning a little bit about what she's doing and what the company is doing, well, first my jaw hit the floor because I was like, there's just no way this works. But then the more we talked and the more we learned about how it works, I was like, wow, this thing is really pretty groundbreaking and has a big implication for AI because machine learning and deep learning needs access to huge volumes of data, but we can't just open that stuff up and let anybody look at it. And not only that, but even if we think it's protected, it's probably a good idea to secure it on data, secure that data at rest, secure that data in motion, because you never know who's eavesdropping. So given this situation, can you tell us a little
Starting point is 00:02:31 bit about homomorphic encryption and what you all are doing? Absolutely. So technically, Steve, we aren't doing homomorphic encryption. Homomorphic encryption specifically speaks to computations on encrypted data, addition, multiplication, and such like. We decided to take a look at the problem in general and see if we could find an 80-20, a small piece of the problem that was tractable, something that we could manage with regular compute, regular storage in regular companies
Starting point is 00:03:06 without specialized infrastructure and see if we could carve that out and address that. And so in looking at that, we found that search, enterprise search, so the ability to index, query and retrieve information, which happens in our AI and ML world, but it happens before the computation part. That process itself is where we were seeing a ton of data compromises and data loss. So we thought, well, if we can combine transformation, which is sort of what encryption
Starting point is 00:03:38 does, the idea of a search index and protecting that, and then how that data is accessed, we could create a system that protects that data while it's in use. So adding to the data address encryption and the data in transit encryption, part of the data in use process, if we can encrypt that, then we can actually have a more than small impact on the problem. So that's the part that we decided to tackle. And in doing so, you know, we were able to combine some new algorithms with traditional cryptography on top of that to deliver that outcome. And that's what titanium does. Wow. Yeah, that is really, really interesting. And as Stephen said, a little bit mind blowing. So, you know, I'm definitely not the storage expert Steven is.
Starting point is 00:04:28 So maybe you can walk me through that a little bit deeper as far as, I mean, are you in order to do that, right? To protect the data while in use and through the search process and everything. I mean, does that mean replacing the whole file system or is this layered on top of stuff that people are already using? How do I actually implement this? How much do I have to rip and replace to get it there? That's a great question. One of the big challenges in our industry is this whole idea of rip and replace. We have so much
Starting point is 00:04:54 legacy infrastructure and data that doing the rip and replace actually limits the applicability of these new technologies. So what we are about going where the data is to the systems that already exist. So think of us as an engine that can sit in front of your data store or your analytics applications. We basically process your data on its way in. So clear text data comes to us, we process it, we send protected data into your analytics platform. And then we also provide you an engine to query that. Now in your average analytics platform, such as Elasticsearch or Mongo or some of those sorts of systems, we can just be a plugin. You drag us into the plugin folder, you reboot your node, and now you have us operational. In other places like S3 or some other cloud systems or specific proprietary
Starting point is 00:05:47 analytics ML systems, we would be a proxy. So you would stick us in front in your data pipeline, data will flow through us, it'll come out the other end protected, and now we will give you tools to process that protected data. So hopefully that answered your question, but if you'd like more information, let me know where to double click. Yeah, I think that it's worth kind of zooming in a little bit there and addressing the elephant in the room, which is basically this, like, you know, how can you make this access? Like, I mean, maybe it's too hard for an elevator pitch, but how do you provide access while still protecting data? Absolutely. So I can break it
Starting point is 00:06:26 down a little bit and just guide me on where you want more information. So today, a search index works in the following way. So you have clear text data that flows in. That data is then indexed. In other words, you take fragments of that data and you have pointers pointing to those fragments so that when you need to retrieve a record from a petabyte of data, you don't have to go through each single record. So this is the concept of a search index. Now, if you were to take that index and encrypt it, even if you were to encrypt those little fragments that you point to, you have the issue of frequency analysis, which is a common attack scenario, right? Where you can say, hey, you know, I'm going to find all these fragments. They look the same, even though they're encrypted, I can retrieve the encrypted fragments and try to deconstruct or reconstruct what that data looks like. So that's basically how your search index works. So what we do is we take that clear text and we run it through an obfuscation ahead of indexing. So apologies if it's too much detail,
Starting point is 00:07:26 but you take it in, you do this thing where you spread it around in a space that you create with a key, an encryption key. Now you have this obfuscated data where if you give that key to your search algorithm, you can find it. You take those fragments that you would then index, and then you encrypt those. So we're doing two sets of transformations, a spatial transformation to
Starting point is 00:07:51 obfuscate our clear text, then we fingerprint it for search, and then we encrypt each and every fingerprint. So every fragment, every fingerprint either goes through AES encryption or format preserving encryption, or a one-way hash, depending on what NIST standard is applicable for that use case. But we make sure that every little bit is encrypted and then it resists that frequency analysis because of the prior obfuscation that we did. So this works in the context of search. It wouldn't work in the context of your regular homomorphic computations, right, where you're going to do your mathematical manipulations because you don't have the benefit of a search index.
Starting point is 00:08:29 So in this specific scenario, we can attack that. But what it gives us is a ton of impact on where the compromises are happening, where the privacy situations are happening, where analysts and scientists are looking into that data store and querying data sets that maybe they shouldn't be querying, insider threats, things like that. Does that help? Yeah, it really does. What is the granularity that you're talking about here? You talk about fragments of data. How big are these fragments of data? So that actually depends on what the use case is. Sometimes companies want to index their data for prefix searches of, let's say, six characters.
Starting point is 00:09:07 Six characters in clear text would probably be three to four times in our obfuscated world. So we're looking at, you know, not too small a string, but things that end users might be searching for. So language engines work the same way. They fingerprint the data. So these
Starting point is 00:09:26 are standard taxonomies in terms of how it works. But you basically look at a clear text fingerprint, it turns into a large obfuscated string, and that entire thing would be encrypted. So sometimes engines will do this as engrams. You can think of it as the equivalent of a three-character clear text input that might end up being maybe a 12-character obfuscated string. Does that help? Something like that? So we're not doing one character at a time, if that's the question. Well, yeah.
Starting point is 00:09:57 I was actually thinking it's either one character at a time or it's like a whole row of a database or a whole object of a storage system. But it's definitely not. You're actually much, much more granular than that. We are granular, definitely. And the process also works at a field level. So we're not talking files, we're not talking drives, we're talking documents,
Starting point is 00:10:19 and within documents, we're talking about fields. So a single record that might go into an ML model might contain a ton of attributes about, you know, an item, a patient, let's say this is a healthcare database. So each of those attributes can be protected differently. They can be, you know, indexed differently and we can operate at that level. And so then in that case, and obviously there's going to be various different use cases here, but at the point where, so I can search while it's still encrypted because you've got these fingerprints that are encrypted there. And then once I find what I'm searching for, I mean, obviously depending on the use case, the next step is kind of different things. But in the case where I want to retrieve a bigger chunk of text, say out of a document around that search field, then I guess at that point it needs to be decrypted to be able to render to whatever system is reading that, right? Whether
Starting point is 00:11:09 it's an actual display for someone to read or a machine learning system that's trying to gather more text. Is that right? So that's another really good question. So the answer is it depends. So sometimes what comes out when you index and when you query a system is a document that needs to be in clear text for whatever needs to happen downstream, the entire documents. You'll take that source document, you'll encrypt it, but when the index retrieves it, you'll decrypt it and hand it back to the requesting system. At other times, the operations happening downstream are actually at the field level. So if you think of an ML model, for example, right, we're actually retrieving attributes. We don't really need them in clear text because even once they're back, we're still doing
Starting point is 00:11:51 things like sorting or ranges or grouping or, you know, operations like that on those attributes. And all of those lend themselves to being queried while remaining protected. Often there are bots or, you bots or scripts that run on these things where a human doesn't really need to get in front of that and read it in clear text. So there again, decryption isn't required. So what we try to do is basically we say, minimize your use of clear text, which minimizes your exposure, which minimizes your privacy issues and ethics issues and so on.
Starting point is 00:12:26 This is really where it clicked for me in terms of our discussion because, you know, in, you know, ML training, you know, one of the things I'll just zoom back to one of the things you mentioned earlier in the conversation. You know, we don't necessarily want our researchers and our developers and our engineers even accessing the clear text data. But the ML needs to be able to process it. And the clever thing here is that, as you say, ML doesn't necessarily need to access that data in the same way a person does. So, you know, basically the machine learning algorithm works just as well on obfuscated data as it does on clear text data. Is that right? Absolutely. And so what we would do in that case is provide a library of functions. So let's say you were, you needed to bin your data, right? Put people in groups according to specific attributes. we would actually provide you a binning function
Starting point is 00:13:25 that you can call. And in it, you will input your parameters in clear text. We will apply the key, apply the spatial thing, figure out what it actually needs to do on the back end. We'll do that translation and our library will implement that bucketing function on your data. Only at some downstream stage where maybe you need to unveil, hey, who is this patient? We figured out something that we need to inform them about. At that point, you can unveil that or unmask it and then do what needs to be done. And that actually is where this really gets practical. Again, this is utilizing AI, where we talked about practical applications. Well, a lot of companies that are trying to use AI
Starting point is 00:14:08 and machine learning to process analytics are processing very sensitive data. You mentioned health data, for example. I can also obviously think of financial data, personally identifiable information. I mean, there's obviously big restrictions coming in California and in Europe with GDPR. And a lot of that data is the data being processed by machine learning. And there are very, very valid concerns among many companies about enabling that access to that data, especially when you're talking about a data lake that might mix and match data from various people, from various sources, with various levels of permission, well, you just can't do that. And that's, I think, the most interesting thing here, because in a practical sense,
Starting point is 00:14:53 you are able to address this problem, right? Absolutely. So the point is, if you are able to de-identify, that's great. You should do it. If you're able to make do with synthetic data, fantastic. You should do it. But many, many times the AI that powers real models, real applications, real outcomes, it needs real data. Now, often these are real-time models, right? Data streaming in, we're sort of processing in the back end, we're producing outcomes. So, you know, what about those scenarios?
Starting point is 00:15:25 In those currently, all of that operates in clear text, which is why it's really scary, right? When you have persistent attackers on your networks or, you know, data leakage that goes on for long periods of time, you know you have this clear text, you know, behind a simple access control wall. So this is why it's pretty significant, in my opinion,
Starting point is 00:15:46 that we should be able to minimize the clear text and give our scripts and our algorithms and our AI some functions that can work on encrypted data. Yeah, absolutely. And I've read some articles recently about just how hard it is to de-identify data. There's so many aspects. It's not just, you know just stripping names off of things.
Starting point is 00:16:07 There's other characteristics that in combination can really easily identify someone. And so this seems like a really obvious or practical step to move towards avoiding that trap of worrying whether or not that data can be identified by a researcher. Yeah. Well, the one thing that I definitely don't want to overstate benefits ever, because the one thing that we don't get away from is contextual data loss, right? So if I have access to this data set that let's say is encrypted and obfuscated and hidden from clear view, but I do have the power to query the system.
Starting point is 00:16:47 I can design queries that are very narrow, right, where when I get those results back, even if I cannot decrypt them, I can obviously, I know what attributes are in there. So that contextual data loss is just an artifact of the ability to search. We can't get away from that. But we try to do things like rate limit, you know, so we get queries back, but we limit how much analytics you can do on top of that query and other such security controls that you can apply. But there's no getting away from, you know, you can't take a scientist, stick them in front of a data set and not expect contextual data loss, you're going to get it. Yeah, I think a lot of people who are looking at, you know, data breaches and,
Starting point is 00:17:27 you know, API access, you know, even like, you know, SQL injections and things like that, you know, having a rate limit on it did on the data, having the data be obfuscated, have the data protected that can that can reduce the blast radius of these attacks. And it's amazing. The more I think about this, the more I can see applications in many, many different fields. Yeah, absolutely. So AI is all the talk these days, right? So definitely having the conversation about AI is great, but there are simpler applications as well, like customer support systems, really simple. And we now have AI on top of that, right?
Starting point is 00:18:05 We have bots, et cetera, that process that data. But how do you retrieve a customer record without indexing PII? You cannot. There's no synthetic data that helps you. There's nothing. You need those customer records. You need to index by name, phone number, email, whatever.
Starting point is 00:18:20 And so when, you know, Microsoft lost, I don't know, tens of millions of those from a customer data store, it wasn't a surprise because you have thousands of people that have access to this thing and it's going to get out. So those are perfect examples where that data never, ever needs to be in clear text. Yeah, and it's great. And obviously, this layered encryption while in use, it helps with confidentiality. Yes. Is there also an aspect of integrity here where I can ensure that data hasn't changed while it's been at rest? Or is that kind of out of scope of what you're doing here?
Starting point is 00:18:56 So that's, I'm glad you brought it up. So I have like an ancillary sort of connection to make here. So we recently created something called a valet key. So you know how encryption works with keys, right? So this idea of a valet key where you can use it to do search and analytics, but you cannot decrypt. Now what we use under the hood after our obfuscation there is actually one-way hashing. We use SHA-256 to do that conversion, but it also gives us an integrity benefit, right? Because as you know, right, hash comparisons is what we use to demonstrate integrity.
Starting point is 00:19:31 So now I have this capability of taking my data set, my valuable data, giving it to a third party, giving them a key, but I can also, you know, assure myself that the data isn't tampered with, that, you know, what I'm getting back actually is working on the actual data that I provided and I can have some integrity assurance as well. So not exactly what you asked, but definitely relevant in this space. Yeah, absolutely. And then, you know, maybe a follow on, not really a follow on, but another aspect here, we've been talking a lot about kind of clear text and keeping that encrypted. Does this work on non-textual data? I mean, we're talking like photos, videos, you know, other types of data that isn't represented in human language. Yeah, that's a great question. So this operates at the bytes in bytes out level. So the algorithms
Starting point is 00:20:21 aren't, you know, specific to formats. However, my team currently aren't like imaging scientists that can look at areas and images and identify. But if we were to have that kind of input or something that we can lean on to make those comparisons, at the end of the day, it's just pattern matching. So when we started Titan titanium, we actually started with a biometric application in mind. We were really worried about all of the breaches that were leaking fingerprints and retina scans and things like that. So we started there. The technology can do that. We've ended up here now, but there's nothing in the core technology that won't allow that as long as it has the benefit of some domain expertise on how to
Starting point is 00:21:06 recognize those patterns. Well, speaking of biometrics, I mean, that immediately brings to mind one of the most popular machine learning applications out there, which is facial recognition in crowds and so on. And if you can obfuscate faces but know, productive processing of that information. I think that's a very valuable and also a very positive development, right? Yeah. So the nice thing about this question is that I know that a face print actually is an equation. It's like a set of measurements behind the scene. So now that I know that, I can tell you that yes, we can match on those things.
Starting point is 00:21:48 Because a face map is essentially measurements of different points in your face that gets recorded as a set of fields, a set of attributes. We can take those, we can protect them, and we can match against them. Definitely. Now the face itself as a photograph, likely not. But the facial recognition, probably yes. And that actually leads back to the point that you made about the valet key. This is another really interesting idea. You know, for those of you who don't know what we're talking about, some
Starting point is 00:22:16 high powered and fancy and expensive cars, and actually, I guess some regular ones now to have a valet key, which is, you know, it'll only operate the ignition. It won't open the trunk. It'll only open the driver's door maybe. And in many cars, high-end cars, it actually limits the ability of the engine. It limits acceleration, all those things. So think about a valet key for data, which would enable basically data use, but only within a, you know, sort of constrained boundary like that. I think that that's a really interesting thing too, because again and again now, especially with machine learning algorithms, we're asked to basically share data from sort of a source data set into either a developer's space or into a customer's side of things. You know, having a valet key that would allow them to access data, but only some data,
Starting point is 00:23:17 it could actually be a very valuable addition for a lot of different applications in machine learning and also collaboration, just regular research and collaboration. But let's talk about businesses. Let's talk about how businesses could use a valet key to data. Yeah, absolutely. So third party data risk is so top of mind these days, if you've been following the news, right? We just have supply chain compromises
Starting point is 00:23:43 that impact larger enterprises. And a lot of times, this is companies that have taken bits of their core data and given it to a third party to do something. It could be payroll processing. It could be analytics. It could be anything like that. So in those cases, sometimes we need PII just to index and identify what's happening. So in those cases, right, can be assured that they cannot decrypt that field that we have protected.
Starting point is 00:24:29 They can index it, they can search it, but they cannot decrypt it. And that gives a ton of comfort to a number of companies that are reeling from the impacts of supply chain-based compromises. So definitely a lot of application there. We're super excited about this. Yeah, that's very interesting. And then the thing that pops to my mind is, you know, there's definitely this ongoing challenge
Starting point is 00:24:50 with a lot of folks who are working in ML and deep learning around just having access to the data sets they need, right? And so you have this issue of incumbency where if you've got the data, if you've already collected the data, you've got a big customer base, you have this advantage in developing AI.
Starting point is 00:25:05 And I wonder if something like this, the valet key specifically, will perhaps lead to data sets being a little bit more open, where I can actually use the data to train my model without having full decryption access to it. Maybe you're a little bit more willing to share with me as a little bit of coopetition there. Yeah, I would imagine so. So I am not an AI research person by training, so I don't definitely want to speak out of turn.
Starting point is 00:25:32 But in terms of what I can understand, it opens up the possibilities of enterprises comfortably sharing data sets with third parties and providing them capability, limited capability on fields that they consider valuable. And I imagine this has to be foundationally super interesting for AI. I've talked to a number of people that have been involved with some compromises on AI systems where the data set got leaked and that created problems on the healthcare side. So I know those guys would
Starting point is 00:26:03 appreciate this type of capability, but I imagine from a broader sense as well. Yeah, I was actually thinking the other direction, which was basically the developer of the AI model having to share some data with their customers. But, you know, Chris, you bring up an interesting point. You know, it could go in the other direction as well, because increasingly customers are being asked to share information and metrics and telemetry back to sort of the mothership. And this way they could protect that data while still allowing a vendor to process it or to work with it in the other direction, not just from a vendor out to customers, but from a customer back to vendors. Yeah. Yeah. No, absolutely. I mean, I don't know if you guys have thought about it, but from a customer back to vendors. Yeah, yeah, no, absolutely.
Starting point is 00:26:45 I mean, I don't know if you guys have thought about it, but, you know, providers such as companies that provide us email, you know, we all use cloud-based email providers. Did we know that those backends, the administrators of those backends can see every message from all of us, right? How then they needed to do their jobs. They have to administer those systems. Our data sits in these super large enterprise search backends, and that's how it's served up to us. So simple applications like that, and they optimize it. They optimize based on our metadata. So it would be nice if we had some protection on our private
Starting point is 00:27:23 data. Yeah, that's a really, really good point. I hadn't even thought about the consumer end of this, right? I was lucky enough to learn about computer science from some security experts. And so I was always taught that anything you put into a computer, assume that someone else can see it. But I think, you know, especially the newer generation, my kids definitely don't think that way, no matter how many times I tell them. So, you know, just knowing that that's back there would be a little bit of peace of mind, for sure.
Starting point is 00:27:47 That's interesting. Yeah. Well, thank you so much. I mean, this has been such an interesting conversation. And I know that our listeners probably learned something at some point in here. So as we normally do at the end of the podcast, I'd like to ask you just a few questions, uh, generic, uh, kind of open-ended questions here, uh, have a little bit of fun with this. Um, I know that you're not an AI expert and I'm not going to approach you as an AI expert, but you are a very intelligent person who deals with data on a daily basis. So I would love your ideas about some of these things.
Starting point is 00:28:24 So let's, let's go ahead with, with a couple of go ahead with a couple of these fun questions. So again, not as an AI expert or anybody in the field of self-driving, when will we see a self-driving car that can go anywhere at any time? If I were to guess, based on the rates of technology development that would be foundational to something like that, notwithstanding legislation and public concerns, I think we're within keep in mind are literally those two, legislation and public concern. And those things can sway outcomes pretty significantly, but that would be my sense. Okay, great. Next point, and maybe you have more insight into this because it's also true of big data and data science and analytics. Is it possible to create a truly unbiased AI model?
Starting point is 00:29:27 I do not believe so. I do not believe so. I think that the nature of the problem lends itself to inherent bias. All models are based on assumptions. Assumptions are coming from humans. And while I think we can remove bias based on learning and outcomes that we feed these models, I think it's somewhat of an asymptotic function.
Starting point is 00:29:49 Like we can get close, but I don't think we will get there just based on the mechanics of how these things work. I tend to agree with you on that answer, and I think that's a very good way to put it as well. Finally, can you think of any jobs that will be completely eliminated or configuration or study the behavior of multiple systems that are under the control of administrators and not having anything against systems administrators. I think they'll be doing other more intelligent things. But I think here's where learning and continuous learning and modeling would really help us. I mean, think about the number of times we put in large systems and enterprises and we tune them once. You know, we do the configuration and then we leave. The environment changes, the world changes, the, you know, attacks change, but our configs stay constant. So I think those sorts of jobs are better suited to algorithms than they are to people.
Starting point is 00:31:07 So I know it was a little narrow answer to your question, but... Well, actually, that was great. Nobody said that yet. And it really cuts me because that was my entire professional career before I started becoming a talking head on videos. Sorry. Well, that's okay. That's okay. That's what we're here for. Well, thank you
Starting point is 00:31:26 so much for joining us. It has been really wonderful talking with you and learning from you about this. I'm wondering, can you give us a moment to tell us where can people connect with you if they'd like to continue this discussion or if they just like to learn more about the titanium solution? Awesome. Yeah. LinkedIn is a great place to connect with us. We have a really strong community and we do lots of events, et cetera. So I am at Arthi Arora Raman. I have my maiden name stuck in there and our company is titanium, titanium spelled with an A, not a U. So T-I-T-A-N-I-A-M. And you'll find us on LinkedIn and it's a great place to just drop us a line and ping us. Yeah. You can find me on Twitter at Chris Grunderman or online,
Starting point is 00:32:13 chrisgrunderman.com. Yeah. I want to give you a shout out too, Chris. You got a great new website there, chrisgrunderman.com. So thanks for pointing that out to me. And of course, you can find me at S Foskett on most major social media channels. I record the Utilizing AI podcast and we publish a new episode every Tuesday morning, Eastern time. So please do tune in for this weekly. Also the next day, Wednesday,
Starting point is 00:32:39 please tune in for the Gestalt IT Rundown where Tom Hollingsworth and I give you a look at the week's news, and you may see a familiar face on the rundown in the future as well, while Tom is busy with other things. So hopefully, we'll be able to bring the rundown to you on a regular basis, even when people are busy. So thank you so much for joining us here for the Utilizing AI podcast. If you enjoyed this discussion, please do subscribe, rate, and review. No kidding, rate and review. It really, really helps. And please share this
Starting point is 00:33:11 show with your friends. This podcast is brought to you by gestaltit.com, your home for IT coverage from across the enterprise. For show notes and more episodes, go to utilizing-ai.com or find us on Twitter at utilizing underscore AI. Thanks, and we'll see you next week.
