Utilizing Tech - Season 7: AI Data Infrastructure Presented by Solidigm - 06x12: Reflecting on a Half-Year of AI Innovation

Episode Date: May 6, 2024

We knew that 2024 would be the year of AI right from the start, but this season of the podcast has seen incredible development and change. This final episode of Utilizing Tech Season 6 features hosts Frederic Van Haren, Allyson Klein, and Stephen Foskett discussing the current state of AI infrastructure halfway through 2024. In addition to AI Field Day, we experienced NVIDIA GTC and numerous product introductions over the last few months. It's truly an ecosystem play now, with every company showing how well they can partner to build AI infrastructure. At the same time, a few superusers of AI are responsible for the base models, including Google, Amazon, and Microsoft, and of course OpenAI and the other dedicated generative AI firms. The key to bringing this to the enterprise market is transfer learning, which will see a few base models tuned and trained for specific use cases. This season saw a range of guests discussing storage, data platforms, connectivity, and application development, and every one of them is focused on delivering practical AI solutions in the enterprise.

Hosts:
Stephen Foskett: https://www.linkedin.com/in/sfoskett/
Allyson Klein: https://www.linkedin.com/in/allysonklein/
Frederic Van Haren: https://www.linkedin.com/in/fredericvharen/

Follow Utilizing Tech
Website: https://www.UtilizingTech.com/
X/Twitter: https://www.twitter.com/UtilizingTech

Tech Field Day
Website: https://www.TechFieldDay.com
LinkedIn: https://www.LinkedIn.com/company/Tech-Field-Day
X/Twitter: https://www.Twitter.com/TechFieldDay

Tags: #UtilizingAI, #UtilizingTech, #YearofAI, @UtilizingTech, @TechFieldDay, @TheFuturumGroup, @SFoskett, @FredericVHaren, @TechAllyson

Transcript
Starting point is 00:00:00 We knew that 2024 would be the year of AI right from the start, but this season of the podcast has seen incredible development and change. This final episode of Utilizing Tech Season 6 features hosts Frederic Van Haren and Allyson Klein, as well as myself, discussing the current state of AI infrastructure halfway through 2024. Who knows where it's going to be at the end of the year? Welcome to Utilizing Tech, the podcast about emerging technology from
Starting point is 00:00:28 Tech Field Day, part of the Futurum Group. This season of Utilizing Tech is returning to the topic of artificial intelligence, and we've been exploring the practical applications, the practical infrastructure, and the impact of AI on technological innovations in enterprise IT. I'm your host, Stephen Foskett, organizer of the Tech Field Day event series, including AI Field Day, which happened at the beginning of this season. And joining me for this season finale are both of the co-hosts from the season, Mr. Frederick Van Haren and the fabulous Alison Klein. Welcome to the show.
Starting point is 00:01:02 Thank you so much, Stephen. It's been a pleasure to be on all season long. It's great to be here and looking forward to the next season. So thanks for joining us all season long. I mean, I actually would want to start with just saying that to you. It is a real pleasure to have you guys on the show. And I cannot say how much I appreciate giving your time and your energy and your focus to this. So thank you so much for being part of this.
Starting point is 00:01:32 You know, one thing that I know for certain is that with the rate of change of technology right now, having conversations on your podcast even with some of the brightest in the industry is a fantastic way to keep up with what's happening across this vast landscape of AI. It's wonderful. I think that's one of the things that's, for me, I don't want it to sound all hypefalutin, but as, you know, being a content creator has always been a way for me for learning more than sharing. You know what I mean? I don't want to make it sound like I'm not doing it so that people can look at it or read it or whatever, but I'm really not. I'm really doing it because it's a great way to force myself to keep learning.
Starting point is 00:02:14 So during this season, which has lasted basically the first half of 2024, we have obviously seen a lot of stuff happen in the enterprise AI space and the AI space generally. We kicked things off with AI Field Day, which was probably the biggest tech field day event of the last few years. And we had Intel presenting. We had all these other companies joining us, talking about all sorts of aspects of AI. And then I know that the two of you also went to NVIDIA GTC, which I guess somebody called AI Woodstock. Tell us a little bit about, we didn't get a chance to discuss that. Tell us a little bit about what you saw and what you thought about GTC. Fred, why don't we start
Starting point is 00:02:59 with you? Yeah, I think, well, first of all, it was great to be GTC face-to-face again since COVID. I did find a lot more conversations around new technologies like generative AI. Also, a more interesting view on AI software, meaning not necessarily always about the GPUs and the hardware, but also the software components to make everything work nicely with the hardware. And then secondly, use cases that I didn't think of before and that have now been pushed forward at GTC. So I think overall, very good, very busy. I always learn a lot. And I definitely like your reference to Woodstock
Starting point is 00:03:50 because it did feel indeed a little bit like the Woodstock of AI. I think that I had the fortune to go as well. And I think that what was interesting is the dynamism at that show and how companies who were participating in GTC were so keen to position themselves as close to NVIDIA. It reminded me of circa 2014 Intel Developer Forum, which I was a part of at the time, where you could tell that the foundational technology that was propelling this industry forward, that case was the CPU for this particular technology and this particular era. It's the GPU and what Jensen and the team at NVIDIA are driving. something about not just how important GPUs are to this technology transition, but also
Starting point is 00:04:50 in all of the stories that were coming out, how so many layers of the stack, so many parts of that data pipeline are being reimagined. And it's just a foundational change to the way computing will be done in the future. I agree with you so much on that. It is absolutely foundational. I think everybody can see it, even normal people, like not people in the industry, even like, you know, the average person on the street can see that AI is changing so much. I think that as we talked about in the first episode, the power of generative AI really hit last year, and it really hit home with a lot of people when they realized that this tool could convincingly emulate human writing and speech and so on. But those of us in the industry, I think, are equally amazed by the incredible advances that we've seen to the technology that supports this, the foundations of
Starting point is 00:05:54 this. I mean, as you say, in the fall, we saw announcements from Amazon, from Intel, from AMD, from all these companies with new hardware. And then along comes NVIDIA at GTC and they say, boom, you know, here's this massive cluster. You know, here's, you know, our next generation Grace super chips. And you can put instead of, you know, a few of them together, you can put a ton of them together. And like you said, it seems like every company in the industry is bending over backwards to show just how well they work in this NVIDIA dominated ecosystem. We saw that on the podcast. I mean, we had so many of these companies come on. And really, I think the recurring theme was we deserve a place at the table alongside NVIDIA, you know, and alongside companies like Supermicro and Dell and Lenovo and so on that are also out there working with lots and lots of partners.
Starting point is 00:07:00 Did you see that at the show? Yeah, I definitely saw a lot of innovation, but also more and more push away from hardware vendors. Maybe a better way to explain it is if you look a little bit at generative AI, it's a lot easier for people to understand and to start to consume, which means that people think more about use cases than they used to. And they think it's easier to achieve. And I think the second piece is AI innovation used to be driven mostly by organizations buying hardware, right? Think self-driving cars, financials, pharma, and so on. Well, today, a lot of the innovation regarding to generative AI is actually done by Google, Microsoft, OpenAI,
Starting point is 00:07:57 because they are the ones delivering the base models, right? The base models that take 90 days, 10,000 GPUs, not everybody can afford that. And so you see there's a lot of competition now, not just at the end user level, but also at the level of delivering those base models. And I think that's another way to see where the acceleration comes into point. And when we talk about acceleration, that means hardware. The software can only accelerate as much as possible. You need hardware to deliver that. And also, we believe that the hardware is fast today. The demand for hardware is a lot higher than the GPUs can deliver today. And that's
Starting point is 00:08:41 why we still need 10,000, 20,000 GPUs to build those base models. So I think overall, it's important that the technology keeps on innovating. I mean, if you look at the history, you know, once upon a time, we considered the CPU at the heart of a server and being the fastest component. Today, from an AI perspective, the CPU is almost in the way of the rest of the hardware because everything around it has accelerated network, storage, compute, all of it. And so we live now in a world where we can do a lot more, where the hardware is actually not really behind, but isn't as powerful as we want it. And that's why we need the cluster. It's a long story to say that, of course, the GPUs
Starting point is 00:09:33 become really important because once upon a time, running a workload on a single GPU was good enough. Now running a workload on 10,000 GPUs seems to be the normal behavior for people that build base models. Well, and I think that this is one of the reasons why I think between AI Field Day and GTC, I'm left with this feeling of a tale of two cities. We've got our leading innovators of technology, the Microsofts, the Googles, the Amazons of the world, out pushing the bounds of what the hardware ecosystem can deliver to get to that next large model. And then at AI Field Day, I thought there was such a great juxtaposition with one of our episodes as well, with a very typical use case of AI around farming.
Starting point is 00:10:47 And, you know, the translation of how do I actually integrate this technology into my business? And how do I do something that will be transformational and disruptive to my industry, but only may take a few systems? And this is the reality that we sit in. We exist across those two worlds, one which, to Frederick's point, needs 10,000 plus GPUs to go train the next model, or even more, I think is the case. And the other, which is trying to figure out how do I get the talent, the simplicity in the infrastructure, because I can't innovate at that rate of change. But I know that I need to stay ahead of this technology transition to avoid disruption by my competitors. And that's why this moment is so interesting to me.
Starting point is 00:11:52 It makes me wonder if what we're going to see is more of a divergence between those leading companies that you mentioned. I mean, you know, the hyperscalers, OpenAI, Anthropic, you know, these, you know, these big companies, Mistral, these companies that you keep hearing about on the one hand, and the vast majority of I'm going to call them, I guess, AI users, instead of AI developers, sort of, I mean, obviously, there's developers involved, and so on. But I wonder if we're really going to see a split in the market here, because we've traditionally seen that in the in the modern applications and web application space, where the hyperscalers were so far out and so different in what they were building and developing and putting into production that it really had almost no relevance to the enterprise. trickle down, but it's taken literally half a decade or some cases a decade in order to build, scale out microservices-based applications for things that aren't web apps and so on.
Starting point is 00:12:52 I wonder if we're really going to see that in AI as well, where basically a lot of these companies are just going to throw up their hands and say, you know what? I am not going to build out a data center full of GPUs. I'm going to build a little, you know, LLM, or I'm going to use a pre-existing one, or I'm going to build my own little generative AI model or something like that for this one little use case right here, and I'm going to run it on this little bitty infrastructure, and I'm just not even going to try.
Starting point is 00:13:22 Do we think that? Do we think that there's going to be sort of dominant gen AI models and just nothing else? Well, I do think that the key of generative AI is to rely on tools. And the number one tool in generative AI is transfer learning. And transfer learning is nothing else than basically saying, rely on somebody else's work and build on top of that, right? That's really what transfer learning is. And we always oversimplified the concept of transfer learning saying there's a model, we use transfer learning, and then we have something new. I think the reality in the future with tools, with additional tools, we'll be able to do transfer learning across multiple base models, you know, being text, audio, video, and build applications. So I think overall, what has prevented organizations from moving quickly with AI with a reasonable amount of investment, was really tools.
Starting point is 00:14:25 And so the tools for transfer learning today are extremely efficient. Look at Hugging Face. I mean, if you want to build a rack system today, you don't even need to understand it. You just need to be able to write a little bit of Python code and get off the ground. And I think that is really key because it makes it a lot easier for people, one, to start, and two, to build use cases that they never thought of.
Starting point is 00:14:55 I mean, as Alison was talking about farming, I mean, they will be able to reuse and use a lot of base models and technologies that are in open source and come up with a use case that is very, very effective. So I do think in short, generative AI, heavy focus on base models. I don't think we're going to see a lot of industries in the world building their own base model. I think we're going to see a lot of people using base models, followed by fine tuning, a lot of rank. I think that's, in my book, that's kind of the number one generative AI application in rank. And there will be more and more tools to the point where maybe it will be easy to create our own avatar, right, with tools without any kind of additional knowledge. Well, and I think that to Frederick's point, this is why it's not just a fait accompli that NVIDIA is going to own every single socket in the universe, because I think that
Starting point is 00:15:57 when you look at that application, you look at broad inference, there are a lot of use cases where other platforms and other logic solutions make sense. And one of the things that we haven't really talked about a lot, but I think we will be in the coming months, is the amount of energy that is being consumed training all of this stuff. And is it sustainable to continue to keep moving forward with the cost, you know, the insight per watt, let's just put it that way, that is being yielded from the training of these larger and larger models? I don't think that you can extrapolate that to Frederick's point to large enterprises investing in their own supercomputing clusters. Well, that's a real good point. And we actually covered some news.
Starting point is 00:16:57 There's been rumors about the Stargate project that where Microsoft is building basically a training cluster too big for any region. They couldn't put it in one state because it takes five gigawatts. That's four time-traveling DeLoreans full of power to power this one cluster. That's the future, I think. And that's the sad and strange and puzzling future that we're looking at here with these massive... I don't think that it's practical for anyone to build that except a Microsoft or a Meta or a Google or an OpenAI or somebody like that. I mean, it doesn't make any sense for them to build out these things. And you're right. Maybe that says that there specifically optimized for what you're talking about, Allison, efficiency, efficiency of processing. And they're basically putting only the instructions in there that they need for their use, for their models, for their use case.
Starting point is 00:18:22 That's smart. And I think that we're going to see a lot more of that. Right. I think one of the challenges with power is that when you look at a data pipeline of a workflow, there's a little bit of everything in it, right? There's a need for CPUs, for GPUs, FPGAs. I think the challenge today, and we can think marketing for that,
Starting point is 00:18:44 is that you can't go wrong with buying the fastest, the largest GPU you can afford, right? And buy as many as you can, while in reality, you can get away with something a lot less. Meaning that even though the power footprint is going up, I highly debate if that footprint has to go that much higher, considering people are buying a lot of hardware that maybe they don't need to consume, right? So we can see even NVIDIA pushing that message, where NVIDIA at some point only had multimedia cards, and they had enterprise cards aimed at training, Then they had inference cards. And now they're promoting these hybrid cards that can do inference and can do some training at the same time.
Starting point is 00:19:31 So power has always been a concern. If you look at Amazon, they bought a nuclear power plant. So that gives you an idea on where all of this is going. Luckily, they don't maintain it themselves. They will have people that are knowledgeable about nuclear plants doing it. But it is definitely a sign of the times. You know, I'm going to OCP Europe,
Starting point is 00:19:57 which is a great place to check in with what the largest players are demanding. And one of the topic streams at that event is fusion-powered delivered data center computing, which is interesting, that they're betting on technology, power technology that does not exist yet, to power the future of the data center. But I would like to just talk a little bit, Frederick, about that data pipeline, because we heard from so many interesting companies that are innovating, not in the logic space directly, but in how you manipulate that data pipeline to make things more accessible
Starting point is 00:20:40 and more performant. And, you know, one of the episodes that I really loved this season was our conversation with Vast Data. I think everybody knows that I'm a fan of Vast Solutions. I've been following them for years. But I thought that what they talked about was just so fascinating in terms of what they've been able to deal, deliver in terms of scale of data and just really understanding how enterprises would take in data from multiple data sets and distributed data sets and actually do something meaningful with them. I'm wondering if we think that this really disrupts the entire concept of how data storage has been viewed and how it's becoming much more than storage, it's becoming data management. I mean, storage, the storage devices by themselves are not efficient, right? You can have a very
Starting point is 00:21:41 fast low latency storage device, but it's how you use it in your data pipeline. Because one of the big problems today in data pipelines is that data is being copied or moved all the time, right? And this has nothing to do with GPUs or CPUs. It has all to do with how people put data pipelines together. And it's easy enough to say, oh, I'm just going to copy the data because it's on a different system. People want to keep things separate. So the efficiency kind of goes out the door. But I totally agree with you. This is not about storage. It's about data management, how you manage data and keep on managing data efficiently. I mean, it used to be where data was something very static or relatively static.
Starting point is 00:22:28 Today, we're talking about streaming data. How do you deal with streaming data? The problem with streaming data is that in some cases, there is some intellectual property assigned to it or some kind of regulation. So that also means that not only the data goes in, but also the data has to disappear at some point, which if you copy data and move data all the time, that is very, very challenging.
Starting point is 00:22:52 So I do think that the storage vendors out there that are looking at their solutions more as a data management concept, that those companies will be very successful. And by the way, it's not easy, right? Just like there are GPUs and CPUs and FPGAs, there is objects, there is POSIX, there is on premises, there is in the public cloud. So there's a similar amount of choices that you have to take.
Starting point is 00:23:20 And then let alone, there's also a cost perspective associated to it, right? If you're in the public cloud and you move a lot of data i mean surprisingly um a lot of organizations don't pay or pay more by moving data than actually the cost of storing that data so all of these concerns all together is data management i mean the way i look at it is that organizations that begin or start will heavily focus on the compute side of the house because that's their number one issue. But if you talk to organizations that are more mature around AI and generative AI, they will actually talk about data management.
Starting point is 00:24:02 They also understand that it's very difficult to get GPUs or FPGAs or any other accelerator, but they also understand that their efficiency and the likelihood to be successful and to be dynamic in their process, meaning efficient data pipelines, all resorts into data management. And unfortunately, it's not data management as we know it, right? It's really, it's a lot more complex than saying, I'm going to do this on POSIX and I'm going to do this in object, for example. And that reminds me of the conversation, Frederick, that we had with Hammerspace, because that's what they talked about as well, is the importance of data management and that whole data movement question that
Starting point is 00:24:46 you bring up. And then, of course, for me, Frederick, you talked about RAG. I completely agree. I think that RAG is the key to going from a toy to a tool. And again, that reminds me, it's the first episode of this that we talked to Click. We talked about the ways of integrating traditional structured data sets into, as well as unstructured data, into the AI data pipeline and the value of that. And that kept coming back too. I mean, we kept talking about retrieval augmented generation all season long.
Starting point is 00:25:28 When you think about the fact that we've recognized that the storage industry and the, let's just call it the data industry, for this sake, is absolutely transforming. We've talked about how logic is absolutely transforming and getting pushed in ways that it never has been before um the one topic that we haven't discussed but we did discuss during the series and i really enjoyed it was the ultra ethernet conversation and how we manage the interconnection of these machines in a way that gives the speed and scale and unlocks these large clusters that need technology alternatives to what NVIDIA is offering. And I think that this is another way of saying that every area of compute infrastructure is being pushed to innovate in uncomfortable paces. Ultra-Ethernet, to me, is the other example of,
Starting point is 00:26:28 yes, companies want options of how they're delivering this core fabric capability in the data center. Yeah, one of the big eye-opener moments of that discussion was having Dr. J. Metz point out that the Ethernet certies are way ahead of the PCI Express certies in terms of throughput and literal space, like die space. And if you look at what NVIDIA has been doing, everything they do uses Ethernet style instead of PCI Express style. And yet, you know, as we did, we did a whole season on CXL and PCI Express. You know,
Starting point is 00:27:14 I think if you had asked me without having talked to him, I would have been like, oh, PCI Express all the way, but I'm not so sure. And then, and as you mentioned too, both of you talked about these massive clusters. I mean, you need a networking paradigm to connect those clusters and that's Ethernet, right? Yeah, I think Alison is right on. It's people want options, right? And so if you don't know much about AI
Starting point is 00:27:38 and you ask NVIDIA about what we do for networking, I mean, the dominant message is InfiniBand externally and NVLink internally between GPUs. And I do think that Ethernet has value, just like the data pipeline, you need to have options. You don't always need the lowest latency with InfiniBand. And, you know, it's revenge of Ethernet, I guess, where you could use Ethernet in a lot of cases where InfiniBand is not necessarily needed. And surprisingly, a lot of organizations I talk to are interested in Ethernet and are willing to deal with the whole latency issue for the simple reason that they have a lot of network engineers from their traditional side of the business who know the Ethernet tools and so they don't really want to re-educate those network people on in InfiniBand tools and I I do think that Ethernet has one advantage there is
Starting point is 00:28:41 that all of the tools out there uh based on ethernet are still valid today even with ultra ethernet and and it's great to have options right i'm all for options and and competition and i think ethernet is a is a great uh counter for infinity band i think this is just such an interesting area of tech because it becomes very religious really quickly. I had the opportunity of working on InfiniBand in the previous millennia to give you a sense of how long I've been in the industry. where I think we will see a lot more headed into the second half of 2024. And when they made the announcement of UltraEthernet and you saw the companies behind it, all of the major cloud players, all of the major infrastructure players, there is just a desire to brute force make this work.
Starting point is 00:29:42 And I think, Frederick, it's to your point. We understand Ethernet. We have engineers that can work with it. And Stephen, to your point, it has a fantastic eye chart. So I'm really looking forward to seeing what solutions start coming out featuring this spec and how it starts competing with InfiniBand for those largest of clusters. And even if, that's the thing, is that even if Ethernet isn't what's being used, NVLink and InfiniBand both use Ethernet CIRDES. And so, you know, Ethernet wins for losing, you know, the only, you know, it's nuts that we're kind of at this stage where all these protocols are combining too.
Starting point is 00:30:32 So we also had some conversations this season with folks who are talking about basically using AI and how AI is changing the way that they work. We talked to Kami Waza about basically making AI possible in the enterprise. We talked to Paul Nashawati about the importance of AI to application development. We talked to Chris Grundemann, our own friend Chris Grundemann, about how AI is a co-pilot, essentially, and AI helps people to do more better. Ultimately, I think that's the question. I mean, this season focused more on the infrastructure, but ultimately, it's all about making it practical, right? Right. Yes, I agree. I mean, there's no lack of people using the word co-pilot today. I think co-pilots today are easy ways to consume generative AI. I think from a code generation perspective, it's very effective for code bases
Starting point is 00:31:39 or code needs for which there was already a lot of code written. For example, if you want to create a website or anything like that, it's very, very easy to consume and to use. I'm not sure how reliable the code is. If you do larger projects, I definitely know that it would be very useful if you could have generative AI help with debugging and documenting people's code. One of the difficulties still today for us humans is to read code written by a human or by a machine.
Starting point is 00:32:18 Unfortunately, I mean, I've had so many times developers tell me my code is my documentation, which, of course, you know, not really very helpful. But I do think we will get to a point where a lot of coding or at least basic coding can be done by generative AI. That's a really interesting area. And you consider that and you think, what does that free up for IT in terms of their investment in human talent? And when we've seen these opportunities in the past, it's not like these jobs will go away. They will be reapplied to different areas. And so how does that evolve is going to be something that's really interesting to me. How do they use the opportunity to refocus developers into adding more value?
Starting point is 00:33:13 Because as you said, it's just a tool. You know, we've just got new tools to put on the development of applications and the evolution of applications, it means that our developer folks can actually be much more effective at driving business value in my mind with these LLM tools at their side. Yeah, so we also had a conversation with David Cantor from ML Commons and I think that's also an interesting way to see where there's demand for performance and tuning is traditionally ML Perth was always on the training sides now we do see a lot more demand for the inference sides which is really a market perspective where we see people not only focused on building the models, but to consume them, which is really very, very important for the adoption of AI in general.
Starting point is 00:34:16 And David also mentioned a little bit about IoT and Edge. So I think that also kind of shows a little bit of maturity of where the technology is being used and being consumed. Yeah, I do love what ML Commons is doing because their whole theory is that, you know, benchmarks, you know, you shouldn't have just synthetic benchmarks. You should have practical benchmarks that show what these things are useful for. And you should do them in a way, you should organize your benchmarks in a way that makes the companies want to participate. And I think that that's really what's happening there. So we're seeing so many companies participating
Starting point is 00:34:52 in ML Commons benchmarks. We're seeing so many more practical benchmarks, like you mentioned, Frederick. I mean, they've got a storage benchmark, a storage for AI benchmark. They've got edge and heck, they've got benchmarks for mobile devices. I mean, it's really cool to see where they're headed there. So where do we go next, Stephen?
Starting point is 00:35:11 Well, as we've talked about, you know, we talked with Dr. Bob Suter about quantum computing and the impact of that on AI. That's going to be real interesting. One of the things I'm going to be keeping AI out on are alternative processors, alternative, you know, I talked recently with, not on the show, but with companies that are making analog AI chips. That's very cool. We've talked a lot about tensor processors that are being integrated all over the place, all up and down the stack. You know, we heard from Intel about doing AI inferencing on regular CPUs. So I think there's a lot of cool stuff happening there. And then the other thing is this season we talked with Solidigm about the importance of storage and AI data infrastructure.
Starting point is 00:36:08 Well, Solidigm was so impressed by what we're doing here with Utilizing Tech that they're actually going to be sponsoring a season of Utilizing Tech. So we're going to be launching an AI data infrastructure-focused season of this show next. So in a few weeks from now, we're going to be premiering that. We're going to be talking to companies. And there again, they know what's going on here. It's all going to be basically brought to you with different partners in different spaces, talking about different ways that the whole stack can be affected by this one component.
Starting point is 00:36:42 So instead of just basically spending eight episodes talking storage, we're going to spend eight episodes talking about all the ways that the stack is affected and talking to a lot of different companies. So I'm pretty excited with that. As you guys know, utilizing tech, we talk about all sorts of different technologies. It's the year of AI. It's the decade of AI, maybe. I don't know. But theoretically, every season could be utilizing AI. But maybe not.
Starting point is 00:37:10 And so it's going to be interesting to see where we go next. I'm kind of thinking maybe we want to go back to edge. Heck, maybe we want to focus on quantum computing. Who knows? Maybe we're going to have a breakthrough there. Maybe we want to talk communications networks or something. So I can't say enough. Thank you so much, Allison and Frederick.
Starting point is 00:37:29 Your time, your thoughts, your input in these is just so valuable. I really appreciate having you here. And we will be revisiting this, and I'm sure that we will do another AI season and that you guys will be part of it. And we'll go in a new direction. We'll see where things are. I don't know about you, but I guarantee that things are going to be real different next year at this time. And AI is going to be even more influential. What do you think?
Starting point is 00:37:58 What's your, I guess, summary of the season? Allison? I used to do thoughts on what's going to happen in a year's time. I now have condensed that to three to six months because there's so much disruptive innovation coming out that it reshapes. Every time I have a conversation, I'm reshaping my view of where the industry is going and where technology is going. You know, there's a few key things that I'm watching. I want to see how the semiconductor landscape shapes up. There's so many interesting startups that are inventing technologies right now, both for the data center and for edge, both for training and for inference, that I think that
Starting point is 00:38:40 that landscape will change and morph over the next six to 12 months. And we're going to see some interesting platforms coming into the market. And, you know, you alluded to it earlier in the episode, all of the cloud players with their own silicon and how that shapes and forms the next things that they do. The second thing is, you know, I am fascinated by data management right now. I want to understand how these new platforms change, how enterprises look at the way that they manage their distributed data. You know, we've talked for years of just data organization is a challenge for these large multinational corporations. What happens with these new tools? And what does that mean in terms of the monetization of data, which I think has always been something that a lot of organizations
Starting point is 00:39:30 have struggled with. And I think that these new capabilities will give them new paths to monetization. I'm really interested in that. So yes, thank you so much for the opportunity to be part of this conversation this season. And I can't wait for the future. Yeah, from my perspective, I mean, I'm very interested in what's happening on the hardware side and the software side. I think what I learned this season, again, is there is always something faster, something better, something more expensive, I guess. And that the technology leapfrogs each other, right? So we talk about storage, compute, and network. You know, once somebody has something faster on the compute, then the network people and the storage people have to kind of get their act together
Starting point is 00:40:16 to keep up, so to speak. The second piece is the ecosystem. And so why do I bring up ecosystem? It's because when you look at an end user, they don't necessarily look at hardware as their main, main requirement, they start from a software perspective, you know, consider a workflow or data pipeline, and the hardware just flows into this. And, and so ecosystem is what I what I expect more organizations to ask for
Starting point is 00:40:45 than pure hardware being network or being specific about the ethernet or InfiniBand, but more about a template. And then finally data management, right? We spent a lot of time with organizations with data management. A lot of storage vendors are still heavily focused on feeds and speeds, if you wish, as opposed to really helping customers.
Starting point is 00:41:12 Some vendors are doing a much better job. The reality is that end users and enterprises have different types of storage. So it means that they have to have a solution that works across the board, which also means that the storage vendors have to get together and agree on how to do certain things. Nobody really wants to reinvent the wheel.
Starting point is 00:41:36 So I think overall that was a great season, learned a lot as always. And I'm also very grateful to be able to participate in having these great conversations. Well, thank you so much. And people are going to miss you guys. So before we go, Frederik, where can we continue this conversation? Where can people find you? Yeah, you can find me on LinkedIn as Frederik V. Heron or on my website, highfence.com, which is H-I-G-H-F-E-N-S.com. You can always find me at Alison Klein at LinkedIn, but please check out all of the publications and podcasts that I'm publishing myself on the Tech Arena, which is thetecharena.net. Absolutely. I love what you're doing on the tech arena which is the tech arena dot net absolutely i love what you're doing on the tech arena 100 you check it out check out check out hyphens check out tech arena
Starting point is 00:42:33 uh also as i said check out uh utilizing tech so we will be back uh with season seven we're actually going to have a teaser episode coming up after this, kind of an episode zero of the next season where we'll talk about what we're going to talk about next. And who knows where we'll go after that. So thank you very much for that. Thank you for joining us this whole season. It's been a lot of fun to do another season of Utilizing AI. This is Utilizing Tech is the podcast series overall. You can find us in your favorite podcast application.
Starting point is 00:43:06 You'll also find us on YouTube. If you enjoyed this conversation, please do consider leading us a rating, give us a review. Also, we would love to hear from you. We would love to hear what you thought of this season. This podcast is brought to you by Tech Field Day, a home of IT experts from across the enterprise,
Starting point is 00:43:22 which is now part of the Futurum Group. For show notes and more episodes, head over to our dedicated enterprise, which is now part of the Futurum Group. For show notes and more episodes, head over to our dedicated website, which is utilizingtech.com, or find us on X Twitter and Mastodon at Utilizing Tech. We'll be announcing the new season. We'll be announcing the episodes. You can tune in there and interact with us there too.
Starting point is 00:43:39 Tune in next week. There's going to be a tease of season seven, and then we're going to launch the season just a few weeks after that. Thanks for listening, and we will catch you next week. There's going to be a tease of Season 7, and then we're going to launch the season just a few weeks after that. Thanks for listening, and we will catch you next time.
