Utilizing Tech - Season 7: AI Data Infrastructure Presented by Solidigm - 3x01: A Look Back at Season 2

Episode Date: September 7, 2021

Welcome back to another season of Utilizing AI! In this first episode of season 3, we take a look at some of the most memorable moments of season 2. We started season 2 by talking about AI as a co-pilot in the first few episodes, and this theme continued throughout the season: AI making our jobs easier was a common discussion over the course of the season. Another recurring discussion was how to make implementing AI easier through tools and platforms. We also discussed the duality of working in AI vs. working on AI. Making AI more accessible and easier to use was yet another common theme of season 2. Some of the most memorable guests for our host and co-hosts include Saiph Savage, Sofia Trejo, Ayodele Odubela, and Arti Raman. Speaking of guests, Frederic Van Haren, one of our show's co-hosts, was an early season 2 guest. Our most listened-to episode was the discussion we had with BrainChip.

Three Questions

This season, we are continuing our three questions tradition, but we're throwing in a twist! We are offering our guests and our listeners the opportunity to pose questions that we may use in a future episode. Each guest will be asked to record a question that may be put to a future guest. We also want to offer our listeners the opportunity to become a part of the podcast. If you would like your question asked, send us an email at Host@Utilizing-AI.com and let us know you would like to participate!

Hosts

Chris Grundemann, Gigaom Analyst and Managing Director at Grundemann Technology Solutions. Connect with Chris on ChrisGrundemann.com or on Twitter at @ChrisGrundemann.

Frederic Van Haren, Founder at HighFens Inc., Consultancy & Services. Connect with Frederic on Highfens.com or on Twitter at @FredericVHaren.

Stephen Foskett, Publisher of Gestalt IT and Organizer of Tech Field Day. Find Stephen's writing at GestaltIT.com and on Twitter at @SFoskett.
Date: 9/7/2021 Tags: @SFoskett, @ChrisGrundemann, @FredericVHaren

Transcript
Starting point is 00:00:00 I'm Stephen Foskett. I'm Chris Grundemann. I'm Frederic Van Haren. And this is the Utilizing AI podcast. Welcome to another episode of Utilizing AI, the podcast for enterprise applications of machine learning, deep learning, and other artificial intelligence topics. This is the first episode of season three. That's right. We have been recording Utilizing AI now. We're coming up on
Starting point is 00:00:31 our third year, and we start a new season when we basically have a new concept for the show. So we're going to talk a little bit about that today with the co-hosts here, Chris and Frederic, and we're also going to take a look back at what we've looked at, what we've learned, and what we've heard in season two of Utilizing AI. Before we start, I'll just point out that we have all of these episodes online at utilizing-ai.com, and you can see a list of all of the season two episodes right there on the website. And it's quite a few of them: thirty-one episodes that we recorded last year together.
Starting point is 00:01:13 Chris, Frederic, what do you think about season two? Name one episode that really stands out in your mind. Let's start with Fred. Yeah, I wouldn't say there's one in particular. What I liked about season two is that it was across verticals, different verticals, right? We had some people talking to C-level executives, the C-suite, and we had people talking to the developer and ML community. So I don't have a particular preference, but I do like the spread, and I'm pretty sure the community and the audience also like hearing about different aspects of AI, certainly because there are still a lot of clouds over the definition of AI. And I think the more people understand what's happening across the board, the better idea they have of
Starting point is 00:02:06 what AI is all about. How about you, Chris? What do you think was a memorable episode for you? Yeah, so I'm going to cheat a little bit as well and say that there's really a trifecta of episodes that I had a lot of fun with, in reverse chronological order: the one with Saiph Savage, the one with Sofia Trejo, and the one with Ayodele Odubela. All three were really interesting conversations that were a little bit less about enterprise technology and a little bit more about the implications of AI more broadly on our culture and our society. And while I do enjoy the episodes where we get deep in the weeds of technology, I also think it's really important at this point in the development
Starting point is 00:02:50 of AI to have that kind of zoomed-out view of what's really going on and what the possible implications are, beyond just getting somebody's quarterly revenues higher. Yeah, absolutely. And for me, I have to agree that the ones that really stood out in my mind, when I was thinking back on last season, were the ones that challenged me to think about things in a new way, or to think about something I hadn't even considered. On that score, I'm going to call out Sofia Trejo. She gave me so much to think about that I wish we could have had an hour to talk to her about it. And just as a reminder, her point was essentially the inequality, the global inequality, of AI
Starting point is 00:03:32 applications. Not just applications and computation, though, but access to AI, access to being part of this discussion, being part of this story. I hadn't really thought of that, and I think that kind of betrays my sort of first-world, white-guy mentality, that I hadn't considered the fact that everything we talk about is Silicon Valley, New York, London, Shanghai. It's not happening in the third world for the most part, and that's going to just accelerate this inequality of access. And as you mentioned too, Chris, we heard as well a lot about ethics and bias, and the fact that AI applications aren't necessarily reflecting the face
Starting point is 00:04:25 of the world population. Right. AI is like looking in the rear-view mirror, right? Whatever happened in the past is really what you're going to see. And then it's also segmented, depending on where you look. Like, for example, as you mentioned, Stephen,
Starting point is 00:04:43 New York City or other areas, right? So the real challenge is, how do you bring in a more multicultural and a more interesting view than collecting your data from one particular segment? It's not easy, because you have a tendency to stick with what you know, as opposed to doing the right thing, because that is a lot more difficult. Absolutely. And the reason I picked those three out, too, was because they all interweave in interesting ways. To Stephen's point, the talk with Sofia was really eye-opening, in that she asked: Where is this AI development happening? Where is the research happening? What universities are funding this? How do you even get access to a lab to be able to try things out? I mean, a Jupyter
Starting point is 00:05:34 notebook isn't something that just everybody has access to, especially in the Global South. But then that was played against the conversation we had a little bit later with Saiph Savage, where she was showing some really positive aspects of AI and the new kind of infrastructure that can be built around an individual for the gig economy, and how, if you take it from an empowering point, and people do the right things around government and businesses to support it, we actually have some really interesting ways to work that weren't accessible before AI was on the scene, right? And then wrapping that back around to Ayodele, and the idea that bias can be put in, which I think obviously comes from potentially
Starting point is 00:06:19 those same things, that global inequality, but can also be something else. When people think about bias, they often think about racial issues or class issues or gender issues, and it turns out that you can have bias in almost any data, even if it's just about French fries or milkshakes. And this can cause really interesting effects in, again, your quarterly results, or in the way your life is lived. Right. The best data you collect in AI is from people using the system, right? You can only hope that people might start with an AI model that is slightly biased, or biased completely, but as people start using it, the bias is mathematically erased a little every time people use it. And another observation is that a lot of the AI research
Starting point is 00:07:14 in the past came from colleges and universities, where the environment was a lot more open than the AI models and AI frameworks being generated now by larger organizations, not to name them, where some organizations are really monetizing your data, and who knows what data they put in the models, how they integrate that into products, and who controls that, right? And that's another point: data lineage. When you have a model, how do you understand where the data is coming from?
Starting point is 00:07:49 Can you point at the different data sources where it's coming from? Because that also helps kind of detect the level of bias, right? Can you look at a model and say, is this model biased or not? Because we can talk about bias and not being biased, but how do you measure it?
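An aside for the reader: Fred's question, how do you actually measure bias, does have concrete partial answers. One common group-fairness metric is demographic parity, sketched below in plain Python. The group names and model outputs are entirely hypothetical, and no guest endorsed this particular metric; real audits combine many metrics with domain context.

```python
# Toy demographic parity check: compare a model's positive-prediction
# rate across two groups. A gap near 0 suggests parity on this one
# metric; a large gap is a measurable red flag worth investigating.
# (Illustrative only: real bias audits use many metrics plus context.)

def positive_rate(predictions):
    """Fraction of binary predictions that are positive (1)."""
    return sum(predictions) / len(predictions)

def demographic_parity_gap(preds_a, preds_b):
    """Absolute difference in positive-prediction rates between groups."""
    return abs(positive_rate(preds_a) - positive_rate(preds_b))

# Hypothetical model outputs (1 = approved, 0 = denied) for two groups.
group_a = [1, 1, 1, 0, 1, 1, 0, 1]   # 75% approved
group_b = [1, 0, 0, 0, 1, 0, 0, 0]   # 25% approved

gap = demographic_parity_gap(group_a, group_b)
print(f"demographic parity gap: {gap:.2f}")  # -> demographic parity gap: 0.50
```

A gap like 0.50 would be a strong signal to go dig into the training data, which is exactly where Fred's data-lineage point comes in.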
Starting point is 00:08:04 How do you figure that out? How do you go back and fix it? Can you fix it? So let's actually take a little bit of a stroll through season two and try to pull out some of the themes. One of the things that occurs to me is that we actually got one of the biggest themes of season two in its very first episode, a theme that we kept coming back to again and again: AI is my co-pilot. Way back then, we talked to Steve Salinas about AI-driven information security. And one of the themes of that episode wasn't information security at all. It was that AI is part of everything we do. The AI doesn't replace us, but it augments us. It
Starting point is 00:08:52 augments our work and helps us to do new things. And that came back again and again throughout all of season two; there were a whole bunch of instances where we ended up talking about AI as my co-pilot. Another thing that came out right from the start was the idea of connecting AI into the business. That was episode two with Monte Zweben from Splice Machine, episode three with Per Nyberg from Stradigi AI, and episode four with Ken Grohe of Weka. These were all episodes about connecting AI into enterprise applications. And again, I love the ethics and bias in AI episodes. I really do. And as you just heard,
Starting point is 00:09:57 those were some of my favorites from the season. But the truth is that the majority of our listeners are spending their time trying to figure out: how are we going to implement this stuff? That's why we called it Utilizing AI. How do I implement this stuff? How do I use it? From my perspective, episodes like the Saiph Savage episode are a way for us to put ideas into the minds of the people that are implementing these things, to make sure they're thinking of them, whereas some of the others are ways of helping them understand how things are running. Is that kind of how you see this
Starting point is 00:10:32 sort of duality of episodes? Absolutely. And again, like anything, I think there's a spectrum there, and I think there are some parallels here, so let me walk through one of them. I think you're right that AI as your co-pilot has absolutely been a big theme throughout season two, and even outside of this podcast it has been a big theme, at least in my own coming to terms with AI and what it's actually going to mean in the long term. There are a lot of people out there who are worried that AI is going to take their jobs. And you know what, in specific instances that may be true. But I think in most cases across the board, AI is something that's going to augment
Starting point is 00:11:08 human intelligence. But there's an interesting flip side to that, I think, which came through in some of the episodes, like episode 16 with FogHorn, and a few others, like Snorkel AI in episode 23, and several more we could pick out, that are building things to make AI easier to use and more accessible to more people. Which is almost the flip side, right? Because we're talking about AI making your job easier and making your life easier, but right now, what's going on is that there are a lot of startups out there working to make AI, period, easier.
Starting point is 00:11:42 And so some of this is in the industrial space, where we're saying, hey, there are these operational technologists who have been making sure the machines are running in factories for a long time, and now we're adding AI in. How do we ensure that the people who actually have the domain knowledge to get this stuff done are able to use these systems? And I think that has corollaries with some of the other things we've talked about, like data validity and bias, things that become bigger cultural issues. Those are issues in using AI to create enterprise applications. So again, there's kind of a spectrum here, I think, that goes back and forth. Yeah, I look at it as the sessions being
Starting point is 00:12:19 either educational or providing tools to build your AI models, right? The educational piece is where you explain to people that AI is not voodoo, that it's being the co-pilot, not trying to be the pilot by itself. And then the tools piece is where, as you mentioned with Weka as an example, which is a high-performance file system: if you're in the business of building AI models, then here are the tools you can use. So I think the combination of both is really what makes Utilizing AI strong, in the sense that it not only explains how to get there,
Starting point is 00:12:58 it also explains what you have to look for. And also what it isn't, right? Sometimes people are looking for a positive definition, as opposed to: what is it not? And I'm sure you're going to call this out, Stephen, but another one of these dualities that I saw throughout season two was this level of working in AI versus working on AI. What I mean is, there were episodes, for instance, like episode 15 with NVIDIA, or, I'm kind of looking down the list here, I
Starting point is 00:13:29 know we had a few others who talked about this. Liquid was another one, in episode 17, that talked about the infrastructure needed to build AI on top of. So we're covering all these bases: What is the physical and virtual infrastructure you need to put in place to be able to build AI applications? What are the other applications you need to enable AI applications? And then, what are you going to do with these AI applications, and what does that mean for your business and for the world? We're covering all the pieces there, it seems. Yeah, absolutely.
Starting point is 00:14:02 And I think that's what we were really trying to do: cover all the bases. For what it's worth, when we're scheduling these things and figuring out what the topics and the guests are going to be, we're picturing the typical audience member, probably you who's listening, as somebody who's out there trying to do this thing. And we're trying to help you understand what it is, how it works, how to do it, what to keep in mind, et cetera, and not have it be just a wonky discussion of AI. We want it to be really focused on the practical applications. So the next thing I want to bring up is that we had an absolute bang-up episode this season. Can you guys guess what episode has by far our most listeners? Any ideas? I mean, just because we haven't mentioned it yet, and it is kind of an
Starting point is 00:15:07 exciting area of AI that people think about a lot, I'm going to go with maybe the IBM and B Plus episode, episode 18, on how to teach self-driving cars how to drive. Self-driving cars. Frederic, what do you think? It was the BrainChip session, the BrainChip people, I believe. Yeah, it was the BrainChip people. Yep. And in fact, that one had so many listeners that I actually reached out to BrainChip to find out what happened there.
Starting point is 00:15:37 Like, what did you do? Well, the answer is that they've got a ton of fans. They've got, like, super fans. And if you guys are wondering, maybe you're listening and noticing that any time you mention BrainChip there's a ton of attention. Well, the answer is they're not spamming. They just have these super fans who are absolutely going nuts, and they themselves don't even quite know what's going on with this.
Starting point is 00:16:07 But it's kind of like anytime you talk about Tesla, or SpaceX, or whatever: people come out of the woodwork and they have to fan at you. Well, apparently it's like that with BrainChip. So that was kind of fun. I think the concept of BrainChip is a lot easier to understand, and to see products you can build around, than a Tesla car, for example, right? The Tesla car is complex. It's a single block, and there's not much you can do otherwise with the technology
Starting point is 00:16:34 from Tesla, except maybe the battery or the energy concept out of it. With BrainChip, it's small enough that anybody can come up with an idea that is useful. And I think it's also the fact that they can grasp it, right? It's much easier. There's an easy software development kit, it doesn't cost a lot of money, and so it's really easy to get started and to visualize it. I think that also attracts a lot of people: how do we get started in AI? Those are questions we frequently get, and it's not an easy answer. But if you have a product like BrainChip, it's much easier to point at it and say, try it out, you know. Yeah, and just that idea of metaphor, right, not to go down this rabbit hole on this episode necessarily.
Starting point is 00:17:22 But one of the things I think that humans are still able to do, and possibly will be forever, is this idea of framing and putting things in context. And one of the ways we do that is through metaphor. And I think BrainChip is just this great metaphor: you're essentially putting a brain on a chip. It's right there in their name. And when you dig under the hood, that's literally what they're trying to do, right?
Starting point is 00:17:40 It's putting neural networks into really low-cost, low-energy chips you can put out at the edge anywhere. And to your point, that's just a concept people can wrap their heads around and get right away, and have that light-bulb moment, which definitely makes it fun to hear about. Well, also, frankly, chips are hot. We got a lot of listeners for our Intel episode and our NVIDIA episode. Chips are hot; people love listening to chip talk, and people are interested. So, another thing that comes to mind: we had a guest by the name of Frederic Van Haren talking about transfer learning. That was fun. Frederic, what did you think of being a guest on the show? Well, it's always different to be on the other side, but no, I liked it. It's always good to be able to talk a little bit about the technology you're working on, because it's something you thoroughly understand, and you
Starting point is 00:18:37 have an opportunity to connect people and explain what's happening in the middle layers of an AI model, right? And I think trying to explain that also helped me understand better what people are asking for. When you ask me questions, I have a better understanding of where you are and how you connect the dots than I would from just understanding transfer learning and trying to communicate it to you. I mean, the worst thing you can do is put me in a room,
Starting point is 00:19:11 talking to nobody, and say, write me a five-page document on transfer learning. It would probably be horrible. But I do like the interaction. I think AI is something that is so misunderstood that the more we talk about it, the better. And trying to explain concepts like transfer learning: What is transfer learning? Why is it important? Why are we doing it? Why can you, as a startup, use transfer learning and not have to spend as much energy and money to get something working? So I definitely like it.
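An aside for the reader: the economics Frederic describes, reusing what a pretrained model already knows so a startup spends less energy and money, can be sketched in miniature. Below, a frozen "feature extractor" stands in for a pretrained network, and only a tiny new head is trained on four labeled points. Every function and number here is a toy assumption for illustration, not Frederic's actual work or any production technique.

```python
# Transfer learning in miniature: a frozen "pretrained" feature extractor
# is reused as-is, and only a tiny new head is trained on a handful of
# labeled examples. The extractor below is a toy stand-in for a real
# pretrained network.

def pretrained_features(x):
    """Frozen feature extractor (never updated during training)."""
    return [x, x * x]

def train_head(data, epochs=10000, lr=0.01):
    """Fit only the new head's weights with plain SGD on squared error."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for x, y in data:
            f = pretrained_features(x)
            pred = w[0] * f[0] + w[1] * f[1] + b
            err = pred - y
            w[0] -= lr * err * f[0]
            w[1] -= lr * err * f[1]
            b -= lr * err
    return w, b

# A tiny "downstream task": learn y = x^2 + 1 from four labeled points.
data = [(0, 1), (1, 2), (2, 5), (3, 10)]
w, b = train_head(data)

f = pretrained_features(4)
prediction = w[0] * f[0] + w[1] * f[1] + b
print(round(prediction, 1))  # -> 17.0
```

The point of the pattern is the cost asymmetry: the expensive part (the extractor) was paid for once by someone else, and the cheap part (three numbers here) is all the new task has to learn.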
Starting point is 00:19:50 It's rolling with the punches, I guess. As a guest, you're rolling with the punches, while as a host you're more in the driver's seat, which gives you a little bit more control. But the interesting piece is that it took me out of my comfort zone, which I surprisingly liked. Does that make me a masochist? I don't know, but it definitely helps me understand better what's happening around me. Yeah, it's interesting to hear what it feels like to be on the other side of the table, as it were. As for me, you may know that my background is in storage. So, we mentioned Weka before, and we also had other companies on talking about storage and data management, which was kind of a theme for me. We had Splunk talking about data management, and we also had Scality, DDN, Concentric IO, and Titaniam. And this was an interesting one.
Starting point is 00:20:53 And actually, we keep coming back to this topic too: homomorphic encryption, and the application of homomorphic encryption to AI. I don't want to talk out of school, but I suspect that here in season three we're going to be talking about homomorphic encryption more than once, because this is a topic, I think, that really matters to AI. And for those of you who missed that episode or that topic, essentially the idea is: how can we do data processing on data that is encrypted? Not making it easy to decrypt, but processing data while it is still encrypted. And that is mind-bending if you're a data guy like me, but it's true and it's a thing. And that's what we talked about with Titaniam. Chris, you were the co-host on that.
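An aside for the reader: to see "processing data while it is still encrypted" in action, here is a toy version of the classic Paillier cryptosystem, which is additively homomorphic: multiplying two ciphertexts produces an encryption of the sum of the plaintexts. This is a textbook scheme with deliberately tiny primes, purely for intuition; it says nothing about how Titaniam's product actually works, and real deployments use keys of 2048 bits or more.

```python
import math
import random

# Toy Paillier keypair. Primes this small are trivially breakable;
# they only keep the arithmetic readable.
p, q = 47, 59
n = p * q                      # public modulus
n_sq = n * n
lam = math.lcm(p - 1, q - 1)   # private: Carmichael's lambda(n)
mu = pow(lam, -1, n)           # private: lambda^-1 mod n (valid since g = n + 1)

def encrypt(m):
    """Encrypt m (0 <= m < n) with fresh randomness r."""
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    # With generator g = n + 1, g^m mod n^2 simplifies to 1 + m*n.
    return (1 + m * n) * pow(r, n, n_sq) % n_sq

def decrypt(c):
    """Recover m from ciphertext c using the private key (lam, mu)."""
    u = pow(c, lam, n_sq)
    return (u - 1) // n * mu % n

# The homomorphic trick: multiply ciphertexts to add plaintexts.
# A server holding only c_sum never sees 123, 456, or 579 in the clear.
c_sum = encrypt(123) * encrypt(456) % n_sq
print(decrypt(c_sum))  # -> 579
```

Note the sum stays below n here; Paillier addition is modulo n in general, so real systems size n well above any value they expect to accumulate.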
Starting point is 00:21:42 Do you remember it being as mind-bending as I do? Oh, absolutely. I mean, the concept is 100% mind-bending. My mind is still bent. I'm still trying to wrap my head around it; I don't think I'm a good enough mathematician to ever truly understand it. But even just the concept, I'm coming to terms with it, and it's amazing. And that episode actually stands out in another way, which is always hard to say, because everybody who comes on the show is really, really intelligent and comes from a place of power in this industry. But Arti was one of the smartest people I think I've ever talked to. And there was another episode that made me feel
Starting point is 00:22:18 the same way, which was Dennis Abts from Groq. That was a really deep conversation with a really intelligent human being, and I thought it was great. I mean, like I said, that happens every week, but those two really did stand out as far as the depth of their knowledge and how apparent it was in those conversations. How about you, Fred? Was there a certain person that stood out to you as, wow, this is somebody I really want to learn something from? I wouldn't say a person. My background is technology, so I have a tendency to gravitate toward interesting technology.
Starting point is 00:23:00 But if I had to say somebody, it's the person from the BrainChip presentation, because it's difficult to explain AI and bring it into a product that can be easily consumed. It's very difficult to explain, and I know that from my own experience. So I appreciate anybody who has the ability to understand the technology and also the ability to explain it simply: what it is, what can be done, and what others can do with it, right? And I agree with Chris, there are a lot of very bright people in AI, but I do appreciate somebody who understands it and can explain it simply. Yeah, totally. Absolutely. I'm right there with you. And that actually jogs a thought. At the end of each episode in season two, we did our three questions,
Starting point is 00:23:58 where we asked the guests three surprising questions that they weren't prepared for, kind of open-ended things, to have them give us an answer about the future of AI. And some of those questions were things like, when will we have a full self-driving car? When will we have a movie-style general-purpose artificial intelligence? How small can AI get? How big can AI models get? And we got some really interesting answers there. I thought that I would actually take a moment though, and revisit a few of those three questions, because I don't know if we need to retire them, but there were a few questions that we actually got universal answers to.
Starting point is 00:24:40 So let me throw this to you guys, because you are the co-hosts and you heard the answers to this. What is the correct answer to this question? Okay, so here we go. And this is a prize round, just like our three questions; it's a prize for you guys. Okay, number one. Frederic, when will we see a full self-driving car that can drive anywhere, anytime?
Starting point is 00:25:03 I was going to say four or five years from now. Can I buzz in? Yeah, buzz in, Chris. What was the consensus answer? I think the consensus answer is possibly never. Right, right. If the assumption is that you're driving on a highway and suddenly an airplane needs to make an emergency landing, obviously that's something the AI model wouldn't recognize, right? It's: what kind of a bird is landing here, right in front of us? What do we do?
Starting point is 00:25:36 Yeah, it may be never. I think my experience is that if you ask people who are in a particular AI business about their own business, they will be very pessimistic, not optimistic. But if you ask them about some other AI vertical that they are not working on, they will be more positive. So I think my answer, the four or five years,
Starting point is 00:25:59 is really based on that. I'm not working on self-driving cars, but my experience is that if you collect a lot of data, and people collect exponentially more data today than they used to, then in four or five years there will be a high percentage of situations it can accommodate, right?
Starting point is 00:26:20 I mean, you could even debate: can a person, can everybody, or can anybody, deal with any situation in a car? And I would also debate that; that's also not a hundred percent. You know, it's a strange side effect of these questions around self-driving cars, and of what the things are that get in the way, which I didn't realize. I was talking to one of my kids the other day, and these conversations have thrust the idea of trolley problems into common nomenclature. This was something that was a philosophical niche, and now, because of AI and the conversations around it, I think a lot of people understand what a trolley problem is and what it means. Just as a side note, it's kind of interesting.
Starting point is 00:27:03 Yeah, it really is. And I think that's one of the things I was going for when I wrote these questions originally: to bring up some things that would be a little challenging, but also a little general-purpose. And I think the takeaway from that question was essentially the thing we've just been discussing: who can deal with everything all the time? Nobody. Nobody. In fact, I just read an article saying that Tesla actually has two different models, a highway model and a local model, and that they've been trying to unify them. I'm not sure if that's true. Tesla, you're welcome on our show anytime you're ready.
Starting point is 00:27:40 But that shows they don't even have a general-purpose self-driving model; they've got two models. I don't know. Another one that came up very frequently, and had a kind of different answer, was: how long will it take for conversational AI to pass the Turing test and fool the average person? Fred, your background is in speech. Do you remember the consensus answer for that one? So, verbal conversational AI. Yeah, I might have split it in two, into text-to-speech and speech recognition. Text-to-speech is almost perfect today.
Starting point is 00:28:19 That's basically starting out with text and pronouncing it correctly. That is very good, very advanced technology. Speech recognition is a little bit more difficult. I don't know what my answer was, but the problem is that when you throw, say, an American English model at a non-native speaker, everything goes out of the window. But for a native speaker, I don't know, maybe I said three, four, five years as well. Maybe that's also kind of an optimistic view, but honestly, I think speech recognition today is pretty good. And you might be surprised that today they don't optimize speech recognition engines anymore.
Starting point is 00:29:05 Instead, they optimize the NLP. So what happens is that the speech recognition might not be as accurate as you want, but they use NLP to fill in the gaps. And so I presume that with NLP technology advancing, three, four, five years from now they will be a lot better at speech recognition with non-native speakers, because the key here is really non-native speakers. Yeah, absolutely. And that's roughly the answer that we got
Starting point is 00:29:34 was basically that it's kind of not a problem, really. It's first a question of: okay, a Turing test in what kind of situation, and with what kind of speakers, and so on. If you're talking native English speakers, you can already do it, right? Especially in a constrained environment or on a constrained subject, you know.
Starting point is 00:29:55 Right. And so in the early days of speech recognition, we tried to boil the ocean, meaning we're trying to make it work for everybody. Today, what's happening is that speech recognition is being personalized. So the systems recognize you as an individual and they optimize the AI model just for you. Chris? Well, I know that was the answer.
Starting point is 00:30:15 And I agree, that was definitely the consensus answer. It was kind of like, okay, it's here now. And there's obviously caveats. But I don't know about you all, but when I try to talk to my devices, I still have a lot of challenges. There's a lot of predefined things that I'm supposed to say, and if I say things other than that, or even if I say those things, they don't actually work for me. So just a caveat to that. I don't know. Scientifically, I think it works. In the real world, my experience has been different.
Starting point is 00:30:45 Yeah, that's true. I think if you ask most people how their phone's personal digital assistant, and I'm not going to say the wake word, works, I think most of them would say it's crap. But it's really not. It's really amazingly, amazingly good. And so we're going to continue the three questions for season three, but we're going to throw a twist in here. And that is that we're going to offer the opportunity for our guests and even our listeners to pose questions that we will use
Starting point is 00:31:15 from our three questions library. So at the end of each episode, we're going to ask our guests off camera to record one or two questions, and then we're going to use their questions for a future guest. And if you who's listening want to be part of this, you can as well. Just send an email to host at utilizing-ai.com and let us know that you want to be part of this. And we would love to have you come on. You can meet Abby.
Starting point is 00:31:39 You can record a couple of questions and maybe we'll use your question on the show as well. So please do send a message for that. So before we wrap up here, I want to talk about season three. I want to talk about what we're expecting from season three. Now, obviously, some of these things that we've talked about are going to continue. The whole idea of AI as my co-pilot, the idea of ethics and bias and making sure that AI is universal and universally available.
Starting point is 00:32:05 The idea of implementing and use cases and figuring out how to have AI work for us and also how to have AI work better. What other themes are you guys looking at here in season three? I already mentioned homomorphic encryption is one of those things I think we're going to hear more about. I'll just tell you right now, I think we're going to hear a lot more about co-locating data and processing. So Samsung, for example, just talked about having processors in memory. Obviously, we've talked with NGD and so on about, you know, processing on storage.
Starting point is 00:32:41 We've talked about processing on network cards with NVIDIA. What do you guys think? Chris, what are the big topics we're going to hear? Yeah, I agree with yours for sure. I think that idea of what the edge means for AI applications, which to your point is bringing the data to the processing or bringing the processing to the data, is going to be a big one. I think even though we just, you know, kind of confirmed that maybe self-driving cars are further away than most people think, I think that's going to be a topic again. And especially, you know, not just self-driving cars, but maybe more along the lines of, okay, we're putting AI into enterprise applications. What does that mean for the world? And I'm not talking about the bias perspective this time.
Starting point is 00:33:21 I'm talking more about how are we going to reinvent business models and reinvent products, right? Because to me, one of the most exciting things about self-driving cars is this idea of, you know, potentially an ownerless car future where cars just kind of zip by and pick you up and take you away. And where can we see that happen in other industries is what I'm curious about. And I think we're going to start seeing that in the next season. Yeah, I would like to see more applications like BrainChip that are closer to the end user. I know we're looking at the enterprise, but if consumers start consuming AI, that means that the enterprises do have to get their act together. And it also means that a lot of the enterprises
Starting point is 00:34:05 are making breakthroughs in AI, right? So although a year might not be a long time, from an AI perspective people make significant advances. I'm also, me personally, I would also kind of like to see and hear from the financial world, and not necessarily financial people using AI, but how does the financial world, like VCs and
Starting point is 00:34:33 startup investments, look at AI? You know, what are they doing to help the market, as opposed to investing money in another storage company or another server company? So that would be nice to see. I mean, I would be super interested to hear both sides of that, right? How AI has impacted the financial markets. And that's one of those areas of kind of what I like to call trickle-down technology, right? Where the folks with the most money deploy these things first, and then you kind of see how it shakes out for the rest. And I think AI definitely, in a lot of ways, was born in some of the high-frequency trading and quantitative shops and things like that, and seeing where that's come along. I don't know
Starting point is 00:35:16 if we'll get anybody who wants to talk about that. And the other side is, you know, how financiers are looking at AI, and how do you price out an AI project? I mean, both those things would be super interesting. I don't know if we'll get folks to come out and talk about it publicly, though. Yeah. Another area I hope to get to: I've been following Hot Chips, the conference, and of course, Supercomputing is coming up. And at both of those conferences, we've often seen some of the latest and greatest stuff. I really would like to get some of these chip companies on here. You know, Esperanto and Cerebras and so on. I mean, I know that we're going to have Intel on. I'm already talking to NVIDIA to come on, already talking to Marvell coming on in season three.
Starting point is 00:36:06 I'm really hoping that we'll see them. But I would love to get some of these other companies on as well. And sure, I'd love to get BrainChip back on, because remember, BrainChip promised that this thing was going to be real this fall. And so I really want to hear how the real thing works. Also, it's great to have BrainChip on because, of course, then we get a gazillion listeners and they subscribe and they get to hear the rest of
Starting point is 00:36:31 the stuff we're talking about, which is always fun too. Another area I think that we really haven't covered at all, which is a widespread application of AI, is applying AI to multimedia. We're starting to see a lot of systems that, you know, use AI specifically to extrapolate audio and video data, whether it's as simple as sharpening photos or upscaling graphics, or as complicated as, you know, deepfakes and stuff like that. I would love to see where that goes and what that means to the enterprise, to the practical applications of this. Because just like how AI has affected security by allowing us to collect and process more data, which is something we've talked about again and again, back to that Splunk episode, for example, that was a big topic there. I do think that AI processing more data
Starting point is 00:37:31 and this whole concept, I mean, just take that concept of sharpening and just apply it to anything in the industry. That could be huge. And so it'll be interesting to see how AI can, for example, make fuzzy decisions sharper, which is something we hit on just in one of our last episodes of the season when we were talking about using AI in the corner office, essentially, to make decisions with Josh Epstein. That was an interesting thing. We can do things like that too. And so
Starting point is 00:38:06 we'll see, you know, where it goes. Also, you know, we're starting to see hints of massive, massive models. So one of the questions, one of the three questions we started asking was how big can AI get? And the answer is we ain't there yet. So I have a suspicion that in season three, we're going to hear about something that makes GPT-3 look like a little use case, you know, a little corner case. You know, we're going to see a, you know, zillion parameter model or something. So we'll see what happens there. So thanks, guys, for being part of this. Thank you so much for your contributions week after week.
Starting point is 00:38:42 And again, I'll just call out to the listeners. We would love to have you get involved. Please, please do find us. Utilizing-AI.com is our website. You can also find us on Twitter at utilizing underscore AI. As I said, if you want to be part of our three question series, just send an email to host at utilizing-AI.com. We would love to hear from you.
Starting point is 00:39:03 We'd love to have you be involved in that. Another thing I'd like to call out, shout out, is to our friends in the MLOps community. A few episodes ago, we talked to Demetrios Brinkmann and David Aponte from MLOps. And of course, they were part of the podcast from the very beginning in season one as well. And we're going to try to connect this into that community as well. So we're going to ask them to join us again, of course, but we're also going to try to get some more of the MLOps people involved. And if you're interested, please do check out that community. Before we go, Chris and Frederic, where can we connect with you? What are you proud of other than Utilizing AI
Starting point is 00:39:42 in the coming year? Chris? Yeah, I'm continuing to work a ton with GigaOm and doing some really cool practitioner-based industry analysis. And you can find everything about that and all the other consulting work and coaching and mentoring work I'm doing at chrisgrundemann.com or on Twitter at ChrisGrundemann. And for me, it's a lot of HPC and AI with a heavy focus nowadays on data management. Many more people and enterprises have petabytes of data compared to the past. And getting to work with AI is becoming more and more of a, I have all that data, what do I do with it? Do I have the right data? Where do I get it from?
Starting point is 00:40:29 What do I do with it from a research perspective? What do I do with it from an inference perspective, which is now the new thing? I can be found on Twitter as FredericVHaren, and my consultancy company is called HighFens. It's highfens.com. And as for me, you'll find me every week on the Gestalt IT Rundown every Wednesday. We do a tech news show where we cover things like the news from Hot Chips and so on. That's one of the things we just recently covered. Go to gestaltit.com to find that.
Starting point is 00:40:57 And of course, you can find me on social media at SFoskett. Also, we are going to have another AI Field Day event. It's not for a little while, though. It's not one of our biggest topics simply because we just don't have as many companies involved in that. But AI Field Day 3 will be April 20th through 22nd. So season three will kind of lead us into AI Field Day 3. And I would love for you to check that out. Also, please do subscribe to the podcast. It's a free podcast, available everywhere. You can find it on your favorite podcast apps. Thankfully, Utilizing AI is now available basically everywhere. If you are on one of the apps that allows reviews, please do write a review or give us some stars. That does
Starting point is 00:41:44 help. I know that everybody says that, but I'm not kidding. It really, really does help. And also, please drop us a line. Let us know that you're listening. There are thousands of people listening to this podcast every week. And it's really helpful to us as hosts. I'll just speak for you guys too. We love hearing from people. We love having somebody say, I listened to that episode. I really enjoyed it. It really opened my eyes. That's something we've certainly heard in the past. So, you know, help our egos and reach out and tell us you're listening, because we always love to hear that. So for now, for me and for our producer, Abby here at the Gestalt IT studios, thank you for listening to Utilizing AI, season one, season two, and now season three.
Starting point is 00:42:30 And for the first time, I'm going to say this for season three, we'll be back next week with another episode of Utilizing AI. Thank you.
