Screaming in the Cloud - Solving the 20-Year S3 File System Problem with Hunter Leath

Episode Date: January 20, 2026

Hunter Leath, CEO of Archil, spent 8 years building Amazon's EFS file storage system, learning exactly why making cloud storage act like a hard drive always fails. Old programs need hard drives, but cloud storage doesn't work like hard drives, a problem that's existed for 20 years. Now Hunter's building Archil, which puts super-fast storage between programs and S3 so they can finally work together. Your programs think they're talking to a regular disk while your data lives safely in the cloud. Hunter explains how they're doing what others couldn't, why it costs less than Amazon's own solutions, and why file systems suddenly matter again in the AI era.

Show Highlights:
(01:37) What Archil Does and Why It Exists
(02:26) Why Mounting S3 as a File System Has Always Failed
(03:07) What Building EFS Taught Hunter
(06:55) Using Fast SSDs as a Cache Layer for S3
(09:45) Attaching Archil to Your Existing S3 Buckets
(15:08) Why Archil Costs Less Than EBS When You Do the Math
(17:56) What Happens If Amazon Builds This Feature
(19:20) Competing With EBS Performance on GP3 Volumes
(21:43) Raising $6.7 Million Without an AI Pitch
(23:46) What Customers Get Wrong About Archil
(28:07) Accessing Data Stored in Glacier Deep Archive
(29:24) The Plan to Get Into the Linux Kernel
(30:51) Where to Find Hunter

About Hunter Leath: Hunter is the founder and CEO of Archil, which transforms S3 buckets into infinite, local file systems that provide instant access to massive data sets. Prior to Archil, Hunter spent the last ten years in the cloud storage industry, including 8 years building Amazon's Elastic File System product and one year on Netflix's core storage team.

Links:
Hunter Leath on LinkedIn: https://www.linkedin.com/in/hleath/
Hunter Leath on X: https://x.com/jhleath/
Archil's Website: https://archil.com

Sponsored by: duckbillhq.com

Transcript
Starting point is 00:00:00 That was the goal of ours, was to be in a position to offer this experience like EFS without having to charge the prices that I know so many customers are frustrated with from a thing like EFS. Welcome to Screaming in the Cloud. I'm Corey Quinn. I'm joined today by Hunter Leath, the CEO over at Archil. I get a bunch of questions periodically from folks who are building something new in the AWS ecosystem, and they want me to take a look at it. And I generally turn most of them down because I'm a skeptic. But for whatever reason, Hunter caught me on a good day, and I looked at it,
Starting point is 00:00:41 and I really liked what it was he was building. Hunter, thank you for joining me and suffering my slings and arrows, now in public version. Thanks, Corey. Excited to be here. This episode is sponsored in part by my day job, Duckbill. Do you have a horrifying AWS bill? That can mean a lot of things.
Starting point is 00:01:00 Predicting what it's going to be, determining what it should be, negotiating your next long-term contract with AWS, or just figuring out why it increasingly resembles a phone number, but nobody seems to quite know why that is. To learn more, visit duckbillhq.com. Remember, you can't duck the Duckbill bill, which my CEO reliably informs me is absolutely not our slogan. So I can do a piss-weak job of explaining and mispronouncing almost anything,
Starting point is 00:01:33 but in the interest of time, why don't you describe what it is you're building? Sure, so our company, Archil, is transforming S3 buckets into infinite, POSIX-compatible file systems that provide instant access to massive data sets. So you can run anything directly on S3. Well, joke's on you. If you improperly mount things and start using it as a file system, well, people have been doing that since S3 launched. And that led to tears before bedtime
Starting point is 00:01:59 when request charges were levied, starting in late data around a lot of that, just because it caused a lot of problems for the front-end stuff. AWS has also come out with their Mountpoint open-source nonsense. So, you know, after 15 years of, sorry, 20 years of telling people, oh, S3 is not a file system, don't use it that way, they trot out, and here's how to use it as a file system, because someone is trying to see if some customer somewhere is going to snap
Starting point is 00:02:24 and just go after them at re:Invent. Yeah, I mean, I don't know. You may have seen a year ago we posted on the AWS subreddit talking about the solution, and almost all of the top comments were, S3 is not a file system, this is a terrible idea, I would never recommend this to my customers. Which I love. I love that kind of feedback,
Starting point is 00:02:42 because I think what we've done is we found a really unique approach to solve this problem that hasn't been done before by any of these FUSE solutions in the past. I want to point out one other thing, because this does sound, to the uninitiated, given things I've said before, like I've somehow pivoted into becoming a credulous moron, because a lot of people have tried to take a bite at this apple. But one thing that makes you a little bit different is you spent eight years at Amazon working on the Elastic File System, or EFS, product. And generally you can say a lot about Amazon, but they don't tend to hire stupid, and they certainly don't suffer it for the better part of a decade. You clearly understand the space.
Starting point is 00:03:24 You've worked in it. You know where the bodies are more or less buried here, and you think you see something here. That alone gets my interest and my attention more so than anyone starting it like, well, we're going to build this thing in a product space we've never actually worked in, how hard could it really be? And for some reason that pitch, if it uses the word AI in it somewhere, raises a $30 million seed round. Yeah, I mean, I think that's right.
Starting point is 00:03:48 And I really enjoyed my time at AWS on EFS, and it taught me all kinds of things about running at-scale storage solutions. I think I went home every day for eight years thinking about the problems associated with EFS and performance and cost and how to optimize storage placement, and eventually came to this realization that what EFS does as this scale-out file system that you can dump things in is probably the right shape for a lot of customers, if we could add that synchronization to S3 and the performance that they expect, which we think that we can bring to the table. And so rather than being just a FUSE library, like you might see with s3fs or Goofys from the past. You're going through user space in those cases and the latency hit is wildly unacceptable for an awful lot of things. That's right. We are
Starting point is 00:04:40 running this middle layer of NVMe devices, which allows us to provide SSD-like, EBS-like access to the data that's already there for our customers, so they get a file system-like experience. This can be done, I want to be clear: back in 2008, at ReachLocal, we ran the entire site on a MySQL database that lived on a NetApp volume. This can be done over NFS. There's a lot of dialing in and tweaking and care you have to take. But if we were able to do it back then, this isn't knowledge of the ancients that has somehow been lost. You can run these highly critical transactional workloads on something over a wire. That is no longer in doubt.
Starting point is 00:05:22 I found that when EFS launched, it had some challenges. I was very vocal about them. Talked to several people on the team. As I recall, they were very large at re:Invent, had no neck. And I thought, well, this is the end of me. But the question was, tell us more about your
Starting point is 00:05:43 use case, and over time, it's become one of my favorite services. For a while, I was running my home directory on my EC2 box out of EFS. I had to stop doing that because it took forever to instantiate a new shell session. It turns out that every time it read my zsh profile, it was making 1,500 discrete read operations against disk, and maybe that's not the best use case over the wire, but that's also 20 years of legacy cruft in my case that I had to worry about. So is this in AWS? Is this living on-prem? Is this in other cloud environments? Where is this thing based?
Starting point is 00:06:10 You said S3, so I have some guesses. We today have shared clusters that we sell to customers, similar to how S3 or EFS or any of these services work, in a couple of regions in AWS and a single region in GCP. We're also working with customers for some on-premises deployments. But we want to be where the compute is that uses the data,
Starting point is 00:06:39 to reduce that time on the wire. The latency issue for anything that is being used by this, as you said, presents as an NVMe device. That means you've got to be hooking the kernel somehow locally on the system. And there has to be a cache. You can't be going to S3 for every read and every write. The request charges alone would make this a complete financial non-starter. What's the overall architecture look like? Yeah. So what we do is we're running a fleet of servers that have the SSD instances attached to them,
Starting point is 00:07:05 similar to, there's like a lot in the news about PlanetScale, PlanetScale Metal, a very similar thing that we're doing here. And those servers provide this distributed durable cache for data that's going into or out of S3. And what that allows us to do is provide a very easy on-ramp for people who want to use S3 as a file system or use it as their backing store, but then also give customers this ability to tell us, well, maybe we actually only want this data to be stored on SSDs for performance or cost reasons. And we give that flexibility to our users.
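To make the architecture Hunter just described a little more concrete, here is a minimal single-node sketch of the same general pattern: a local SSD directory acting as a read-through, write-back cache in front of a bucket, via boto3. The bucket name and cache path are hypothetical, and this is an illustration of the idea, not Archil's implementation, which is a distributed, durable cache shared across instances.

```python
# Illustrative sketch only: a single-node read-through / write-back cache in front
# of S3. Assumes boto3 credentials are configured; bucket and cache path are
# hypothetical placeholders.

import os
import boto3

CACHE_DIR = "/mnt/nvme-cache"      # pretend this directory sits on a local NVMe SSD
BUCKET = "my-example-bucket"       # hypothetical bucket name

s3 = boto3.client("s3")

def _cache_path(key: str) -> str:
    path = os.path.join(CACHE_DIR, key)
    os.makedirs(os.path.dirname(path), exist_ok=True)
    return path

def read(key: str) -> bytes:
    """Serve from the local SSD if present; otherwise fetch from S3 and cache it."""
    path = _cache_path(key)
    if not os.path.exists(path):
        s3.download_file(BUCKET, key, path)   # cache miss: one GET, then local reads
    with open(path, "rb") as f:
        return f.read()

def write(key: str, data: bytes) -> None:
    """Write locally first, then push the object back to S3 in its original format."""
    path = _cache_path(key)
    with open(path, "wb") as f:
        f.write(data)
    s3.upload_file(path, BUCKET, key)         # write-back: the bucket stays the source of truth

if __name__ == "__main__":
    write("datasets/example.csv", b"id,value\n1,42\n")
    print(read("datasets/example.csv"))
```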
Starting point is 00:07:48 Now, the right way a lot of these workloads have been done is you teach your workloads how to use object storage and teach them those semantics, and it grabs whatever it needs and life goes on. That is a wonderful end state, but there's a universe of applications out there that flat out are not built that way, where, oh, just teach your 30-year-old application how object storage works. Oh, terrific. After you, please. If it's not that hard, I'd like you to meet my mainframe. There are a bunch of legacy workloads I can see where this is incredibly helpful. Do you see modern workloads taking advantage as well?
Starting point is 00:08:10 I agree. I think it's a combination. And I've had conversations with people in the past where I kind of like correct the legacy term because in many of these fields, it's not even that these applications are necessarily old. Like if you look at the HPC and academic fields, these are new things that are being written that are the best in class for what they do, but are built against file systems because most universities and labs are running on some kind of shared cluster mainframe thing with a file system sitting on the side.
Starting point is 00:08:43 And so I agree that we provide an opportunity to take this vast library of software that exists today and actually connect it to the cloud in a cloud-native way, so that the data could be used by a Lambda from someone on a different team. But I also think that we are going to see a resurgence in applications that are built in a modern way for file systems. And my belief is that people have shifted development to using object storage natively because nothing like Archil has existed. And traditionally, the limits of the file system have been so painful for people. If you've run an NFS share and run out of space or run out of IOPS or gotten paged at 4 a.m., it leaves this bad taste in your mouth about the entire technology, where it turns out we can
Starting point is 00:09:39 build file systems that scale in a very similar way to S3 and provide a very similar experience. The S3 bucket that's backing this, does it have to have the data laid down in it in a particular format that you folks work with, or can it be hooked up to existing large data sets? It's the latter, and that's what we're super excited about, is that if you are a user who has an immense quantity of geospatial data or video data or any of these data sets that you might need to process, we work with that data directly, and we synchronize in both directions. So anything that you put into our file system then gets replicated into S3 in its original
Starting point is 00:10:16 format, too, and you get to own that data. You talk about shareable across multiple instances simultaneously for data sets. I have to imagine that there are some race conditions and concerning limits here. To use my insane example, you can't necessarily, I would think, have a MySQL database living in S3 that you then wind up pulling out, having multiple things commit transactions to that thing, and then put it back and hope for the best. But how does the fencing work? That's also a very good question, and something that stumped us early on, around how we actually extend mutability to multiple instances. But we started from this premise that many customers who are running workloads in the cloud will pick something like EBS as an easy starting point to store their data and then only move to shared storage when they find out that, for high availability reasons,
Starting point is 00:10:45 And something that stumped us early on around how we actually actually. extend mutability to multiple instances. But we started from this premise that many customers who are running workloads in the cloud will pick something like EBS as an easy starting point to store their data and then only move to shared storage when they find out that for high availability reasons, all the data can't be attached to a single instance, for example. And we want to build almost a version of EBS that doesn't come with any performance drawbacks, but allows you to have this sharing across instances in a safe way, has the synchronization
Starting point is 00:11:25 to S3, and does the auto-scaling and the pay-per-usage that people like from services like S3 and EFS. But ultimately, the application ends up being the one responsible for coordinating writes to that data. So we would not recommend that if you launch a MySQL database, you then attach multiple instances to that database and write to it simultaneously, because MySQL isn't designed for that. Well, not with that attitude. Sorry, please continue. I misuse databases because I have problems that I make everyone else's. You can do it once, right? Like many things, you can do it one time. It worked on my laptop, so off to production with it. Why not?
Starting point is 00:12:07 That's why I call my staging environment theory. Works in theory, not in production. But for many workloads that are maybe producing files or like transcoding videos and have a multi-stage pipeline where different instances are handing off that video file, being able to share that data across instances is very helpful. Oh, I can absolutely see where that makes sense.
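For the multi-stage pipeline case Hunter mentions, the handoff often looks something like the sketch below: one instance writes to a shared mount and finishes with an atomic rename, so the next stage only ever sees complete files. The paths are hypothetical, and the coordination lives in the application, exactly as he describes.

```python
# Minimal sketch of a producer/consumer handoff over a shared POSIX mount.
# The shared path is hypothetical; a real pipeline would use a queue or events
# rather than polling. Rename within a directory is atomic on local POSIX file
# systems; on shared storage, confirm the mount's semantics before relying on it.

import os
import time

SHARED = "/mnt/shared/transcode"   # hypothetical shared mount

def producer(job_id: str, data: bytes) -> None:
    tmp = os.path.join(SHARED, f".{job_id}.partial")
    final = os.path.join(SHARED, f"{job_id}.mp4")
    with open(tmp, "wb") as f:
        f.write(data)
        f.flush()
        os.fsync(f.fileno())       # make sure bytes are durable before publishing
    os.rename(tmp, final)          # publish: readers see the whole file or nothing

def consumer(job_id: str) -> bytes:
    final = os.path.join(SHARED, f"{job_id}.mp4")
    while not os.path.exists(final):
        time.sleep(1)              # wait for the upstream stage to publish
    with open(final, "rb") as f:
        return f.read()
```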
Starting point is 00:12:26 Oh, I can absolutely see where that makes sense. The pricing, at least as listed on your website, is also compelling. Does the data live in the user's S3 bucket? Does it live in yours? Where does that live, for lack of a better term? Yeah, so we attach to customers' S3 buckets directly. So we use the data that's already in their bucket. We synchronize any changes back to their bucket and they get to keep that
Starting point is 00:12:48 data. We're working to simplify onboarding so customers can try us without maybe making an S3 bucket or attaching us to their production S3 bucket. And so we're actually launching the ability to use a bucket that we manage, so that you don't have to worry about any of that if you're not coming in with data. Something that strikes me about this is your pricing. The first thing I always look for on a website is the pricing page, because that tells me, who is this actually for? And you generally want two things. One is the get-started-right-now, because it's two in the morning and I have a problem, and if it's call-here-to-contact-us, you're not going to space today. And the other is the enterprise end of it, which is contact us. We don't
Starting point is 00:13:29 put our data with random websites because we are a Fortune 500 and strive to act like one. And whatever's in between them almost doesn't matter. And you have two options right now. One is the enterprise option, and the other is the developer plan at 20 cents a gigabyte-month. And that is a very resonant figure for me, because 10 cents a gigabyte-month gets you single availability zone EFS at on-demand prices, and 30 cents a gigabyte-month gets you multi-AZ EFS. You're smack dab in the middle of them with a better durability story than EFS itself offers. And even with having to store the original data in S3 Standard as well, which is another two and a half cents, we're still talking that this is less expensive for active data than
Starting point is 00:14:15 using EFS off the shelf. Yeah, and that was the goal of ours, was to be in a position to offer this experience like EFS without having to charge the prices that I know so many customers are frustrated with from a thing like EFS. And I think, too, if you zoom out further and you talk to users of AWS who are not familiar with EFS, what they compare it to is EBS pricing. And when they see 20 cents a gigabyte, it looks extremely large compared to the eight cents a gigabyte that you get for a provisioned GP3 volume. But we've actually run the math. And if you do an apples-to-apples comparison of taking data that was previously on EBS and moving it to our service, the pricing actually clocks in at something like 1.95 cents
Starting point is 00:15:03 per provisioned gigabyte that you would have had on EBS. Right. Because you have to overprovision an EBS volume, because only a lunatic runs at 100% utilization on it. You have to monitor that and care about it. And people from the EFS side will say, well, if you're not using the data, we have intelligent tiering, which makes it a lot less per gigabyte. Yeah, you only charge per active gigabyte in the course of a month. It's, I believe, a 30-day cycle. So, yeah, you drop to zero where they drop down to, I forget the exact number.
Starting point is 00:15:34 But it's, it is compelling. The economics alone are fascinating. All you have to worry about after that is, okay, does the thing actually work for a given use case? Yeah, that's right. And it is rare, I think, to find these kinds of optimizations in infrastructure where you can both save people money and make their software faster. And we think that this is one of those examples where that's possible, where if you're moving from something like an all-SSD, an EBS, or an EFS deployment, we can save you significant money on top of that existing deployment. And if you're moving from something like using S3 directly or downloading a zip file and unzipping it, we can make that workload faster because we're actually keeping some of the data on SSDs. But yes, I think we plan to do a better job over the coming months of publishing more benchmarks and more customers who are using the service to highlight the applications that work well with Archil, so that people know that it's an easy thing to adopt.
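The back-of-the-envelope version of the comparison they are walking through, using only the per-gigabyte figures quoted on air, looks roughly like this. Treat the utilization number as a hypothetical assumption and check current pricing pages before relying on any of it.

```python
# Illustrative arithmetic only, using the per-gigabyte-month figures quoted in this
# conversation: EFS ~$0.10 single-AZ / ~$0.30 multi-AZ, the developer plan at ~$0.20
# per active GB plus ~$0.025 for the backing S3 Standard copy, and GP3 at ~$0.08 per
# provisioned GB. Not a quote; numbers change.

ACTIVE_GB = 1_000                      # hypothetical 1 TB working set

efs_multi_az   = 0.30 * ACTIVE_GB
archil_plus_s3 = (0.20 + 0.025) * ACTIVE_GB

print(f"EFS multi-AZ:              ${efs_multi_az:,.0f}/month")
print(f"Archil + S3 Standard copy: ${archil_plus_s3:,.0f}/month")

# The EBS comparison hinges on utilization: GP3 bills every provisioned gigabyte,
# and volumes are rarely run near full, while pay-per-use storage bills only what
# is actually stored. 25% utilization is an assumed, hypothetical figure.
utilization    = 0.25
provisioned_gb = ACTIVE_GB / utilization
gp3_cost       = 0.08 * provisioned_gb
print(f"GP3 at 25% utilization:    ${gp3_cost:,.0f}/month")
```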
Starting point is 00:16:34 It's a pull-through cache, more or less, with really great performance characteristics. So a concern I have: I did a piece back in 2019, 2018, about how to compete with Amazon. And one of them obviously is good developer experience, because that is something that has not gone super well for them in recent years compared to a number of other folks, but also to find things that aren't on the same rails that they're innovating on. This feels like it could be a story of a feature that AWS puts out at some point. And at that point, if they wind up effectively fronting S3 with an EFS-like cache or something like that, how does that change your competitive positioning? I mean, I have to imagine this has come up as an idea. This episode is sponsored by my own company, Duckbill. Having trouble with
Starting point is 00:17:25 your AWS bill? Perhaps it's time to renegotiate a contract with them. Maybe you're just wondering how to predict what's going on in the wide world of AWS. Well, that's where Duckbill comes in to help. Remember, you can't duck the Duckbill bill, which I am reliably informed by my business partner is absolutely not our motto. To learn more, visit duckbillhq.com.
Starting point is 00:17:53 I have many friends still who are on the EFS service, and I would be ecstatic if they were able to build something kind of as exciting as synchronizing to S3, because I think it's important for customers. But I think that what we're building goes beyond that, which is that AWS, at least from my experience, is very focused on how to build building blocks and give them to customers. And I think even Peter DeSantis at ReInvent several years ago said something to the effect that we will not build frameworks. We want customers to build frameworks.
Starting point is 00:18:27 And so our ability to help our customers is going to be based around how we tie things tightly together so that they don't have to cobble a bunch of AWS services in order to get the same solution. And the way that we do that is through a combination of performance work. Our goal is actually not to be as fast as EFS. It's to be as fast as EBS so that customers can replace EBS volumes with us, which we think is an entirely new market segment. that Amazon won't be able to capture. When you say, I compete with EPS,
Starting point is 00:19:04 that Amazon won't be able to capture. When you say, I compete with EBS, does that compete with the bring-money io2 volumes as well, performance-wise? Because every time someone talks to me about those, it's like, oh, great. And everything they say after that turns into, and here's how you build your own SAN from popsicle sticks in the cloud, which, great, not really what I want to do. Yeah, I think in time, we will. For now, we see us as like a very good alternative to GP3, which is where we expect the majority of workloads
Starting point is 00:19:31 that are not transactional database workloads to be running, where if you're doing video transcoding, you're probably using a GP3 volume to store that data and not like a finely tuned io2 volume.
Starting point is 00:19:50 Oh, yeah, every time I see an io2 volume, I have many questions, most of the answers to which become, maybe you shouldn't be using this volume for this use case. I also, normally I tend to be relatively down on the idea of AWS trying to compete with someone else who's making money somewhere because their developer experience is crappy. But the lock-in halo effect of the ecosystem does become challenging, especially if we take a look at something on the storage side, as an easy example, where staying first party, even if it's a crappier experience, is a lot more justifiable for folks than bringing something into the critical path with something as fundamental as disk access.
Starting point is 00:20:21 You had mentioned to me at some point previously that the file system is not the place where you want to be betting on new technology. And I think that's absolutely true. You know, these things are not developed frequently because there is so much thought and care and safety that needs to go into these critical components. And so I think that as a business, we're going to have to build our way up to some of the enterprises that would otherwise be happy to pick an AWS solution. But if you look at just the landscape of storage workloads that are out there,
Starting point is 00:20:59 there are lots of startups who are interested or need to capture these marginal cost savings or marginal performance savings in order to win in their product category, where we can start. And there are also lots of enterprises who have enormous shared caches for things like Docker images or CI/CD artifacts where, if things did go wrong, it's not a huge loss for them. And so being able to build our customer base from these easier-to-win people who are more open to new technology in this space, and work our way up to being an established member of the storage community like a ZFS or an XFS, is the path that I think we're going to take.
Starting point is 00:21:55 One question I have, because you have announced at the top of your website that you've raised a $6.7 million seed round. Congratulations. But what I don't see on this, at least above the fold, and as I scroll down, I don't see it here either: I see no mention of AI. So first, is that legal toward the end of 2025, to raise money without an AI story front and center, sucking all the oxygen out of the room? So what is your AI story? Yeah, Corey, I can't believe you've brought this up, and my investors are going to find out that we've not placed AI above the fold. I think that what is exciting about our technology is that it is so horizontal and broad that it's applicable to all of these workloads that exist today.
Starting point is 00:22:27 Like we talked about video transcoding, CI/CD, geospatial, these things that were in the world prior to 2025. But I also think that there's a lot of interesting things happening in the AI space that's going to intersect with us. And what I hear most frequently from customers is that these models are being trained on data sets that include an immense amount of Unix tool usage and file system usage. And so there are many companies out there that are trying to connect LLMs to, you know, Slack or Salesforce through these purpose-built tools. But the models actually perform better if you're able to expose data in a file system
Starting point is 00:23:10 that the model can just grep. Oh, yeah. The command-line tools are the best MCP out there. It can do basically anything you need it to do, and that knowledge of how to use these things is baked into the foundation model itself. And so I think it's my hope that we can become, as the file system was originally intended to be, this universal access layer for data, no matter where it lives, and then allowing AI models, applications, agents, what have you, to use our system to access that data is going to allow them to be more efficient and more productive
Starting point is 00:23:44 than they otherwise would be. When you tell the story of what you're building to folks, what are some of those common misunderstandings, you see? Our product is complex, as you can probably tell from the conversation that we've had. It's one of those things that feels extraordinarily simple and never is. So we have folks who come in and aren't sure if we're here to save them money on their S3 bill,
Starting point is 00:24:08 Well, technically, if you're treating it like a file system and just hammering the crap out of requests, yes, yes, we will. But you're doing something wrong. That's right up there with, help cut your AWS bill by 98% by no longer checking your credentials into GitHub. Yeah, exactly. And so I think that we have this narrow focus today on how we can help people who are either working in the AI space, like we talked about, or have these applications which are built around file system semantics, and connecting those
Starting point is 00:24:44 applications to a data lake or downstream object-storage-native systems where we will win ultimately. I think that that is probably a user education problem, which is kind of a testament to how reliable storage has gotten across the board. This is something where it takes a relatively narrow subset of the industry to actually even think or care about these things, but those who do really care a lot about it. It's also kind of wild that we watched EC2 alone get to a point where we can trust it now for things of this level of latency requirement as well as durability. The world has changed.
Starting point is 00:25:24 Yeah, and there's like a tremendous amount of engineering effort and work that goes into making EC2 a durable service, for sure. But it is funny that you say that about people caring. Because a week or two ago, I think I reached out to the CTO of a company doing $200, $300 million in annual revenue. And he picked up the phone immediately and told me that this is a problem that he's been thinking about for six years. Because for some segment of storage workloads, they are just woefully underserved by the offerings today, whether it's an Amazon offering or a Pure, Weka, VAST, NetApp. None of these things have the right latency and throughput characteristics for all workloads.
Starting point is 00:26:10 And that's why we see such a vast array of storage options that you can buy. I can't find it at the moment, but the EFS site used to say, with their intelligent tiering offering, what percentage of data was there but not accessed in the course of the calendar month. And it was something north of 80% in the common case, which is effectively some of the best marketing you could do for this. The challenge is, it's great, but I do need to access some of it. I never really know which part, and it has to be capable of being reached via file system semantics. Well, this sounds like a relatively great way to get started.
Starting point is 00:26:45 It does not sound, from the try-here-to-get-started documentation you have, that it's a particularly heavy lift to integrate into a test environment. This is, it's weird. I didn't expect to see a lot of innovation in the block storage world in the year of our Lord 2025. And yet, here we are. Yeah, and what I go around telling people is funny is that we're in San Francisco doing a startup in 2025 and we're building a file system. Sorry, it's file-level semantics and not block-level semantics. It feels like it's right on the cusp. Yes.
Starting point is 00:27:18 I think that you are correct in that one of these learnings that I had, being focused on EFS and talking to customers, is that many people don't view the file system as a place where they want their data to live long term, or they don't like the idea of having to pick if I'm going to store this data on a NetApp file system forever and lock it away from the rest of the ecosystem that's evolving around object storage and the cloud. And really, what we believe people want is this ability to use the file system to access that data. And whenever they're using the data, it makes sense to use it through a file system. But when they're not using it, they want it to just live in S3, so it can be like any other piece of data that they're keeping track of.
Starting point is 00:28:07 I do have to ask, what is the response if a piece of data lives in S3 via intelligent tiering, for example, and it's been aged out into some of the archive tiers, where it will take some amount of time, measured with a calendar, in order to come back? Does it just blow chunks? Does it just return a file read error in some way that gets surfaced to the file system, that this is not a catastrophic event, but try again later?
Starting point is 00:28:39 but I think this is one of those give and takes with file systems that we mentioned earlier. There's not an easy way for us to signal via Linux that some piece of data is maybe there, but not immediately accessible. And obviously, I think we're going to spend time with our customers to understand as we hit people who are using these Glacier Deep Archive kind of storage classes,
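While the file system itself just waits, an application that knows its data may be sitting in a Glacier-class tier can check and kick off the thaw out of band through the S3 API. Here is a generic boto3 sketch of that pattern; the bucket and key are hypothetical, and this is not a description of Archil's behavior.

```python
# Generic boto3 sketch: detect an archived object, request a restore, and tell the
# caller to retry the file read later. Bucket and key names are hypothetical.

import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")
BUCKET, KEY = "my-archive-bucket", "cold/2019/report.parquet"

head = s3.head_object(Bucket=BUCKET, Key=KEY)
storage_class = head.get("StorageClass", "STANDARD")
restore_state = head.get("Restore")            # e.g. 'ongoing-request="true"' while thawing

if storage_class in ("GLACIER", "DEEP_ARCHIVE") and restore_state is None:
    try:
        s3.restore_object(
            Bucket=BUCKET,
            Key=KEY,
            RestoreRequest={"Days": 1, "GlacierJobParameters": {"Tier": "Bulk"}},
        )
        print("Restore requested; retry the read in hours (or days for Deep Archive).")
    except ClientError as err:
        if err.response["Error"]["Code"] != "RestoreAlreadyInProgress":
            raise
elif restore_state and 'ongoing-request="false"' in restore_state:
    print("Object is thawed; a read through the mount should now succeed.")
```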
Starting point is 00:29:01 Do you present as a custom file system? Are you presenting as ext4, something else? We're a custom file system. We run in FUSE right now, but we're working our way into the kernel over time. Excellent.
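For readers who have not written one, a FUSE file system is just an ordinary user-space process that the kernel forwards VFS calls to, which is what makes it quick to iterate on and also where the extra latency comes from. Below is a toy read-only example using the third-party fusepy package; it is purely illustrative and unrelated to Archil's client.

```python
# Toy user-space file system via the fusepy package (pip install fusepy). After
# mounting, `cat /mnt/hello/hello.txt` routes every call through this process.
# The mount point is a hypothetical example path.

import errno
import stat
from fuse import FUSE, FuseOSError, Operations  # fusepy

class HelloFS(Operations):
    """Read-only file system exposing a single file, /hello.txt."""

    DATA = b"hello from user space\n"

    def getattr(self, path, fh=None):
        if path == "/":
            return dict(st_mode=(stat.S_IFDIR | 0o755), st_nlink=2)
        if path == "/hello.txt":
            return dict(st_mode=(stat.S_IFREG | 0o444), st_nlink=1,
                        st_size=len(self.DATA))
        raise FuseOSError(errno.ENOENT)

    def readdir(self, path, fh):
        return [".", "..", "hello.txt"]

    def read(self, path, size, offset, fh):
        return self.DATA[offset:offset + size]

if __name__ == "__main__":
    # Each read of /mnt/hello/hello.txt round-trips kernel -> this process -> kernel,
    # which is the user-space latency cost discussed earlier.
    FUSE(HelloFS(), "/mnt/hello", foreground=True, ro=True)
```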
Starting point is 00:29:17 Via a module, or do you have dreams of one day being mainlined? Absolutely. I think that we should be mainlined. I think that we see development as this iterative step towards stability. And what FUSE offers us is an ability to develop very rapidly and try different things, because we're not messing with the kernel. That's going to eventually stabilize and we'll move to a module. And then hopefully, as that stabilizes further and the Linux community becomes used to us,
Starting point is 00:29:47 then we would love to be mainlined. I imagine you would have to open source a fair bit of that. It's like, well, except for this file system thing, that's a giant binary blob. Yeah, I see people having apoplectic fits already just at the notion. Yeah, and we'll absolutely, like, when we get to the point where it makes sense for us to be in the mainline kernel, we will absolutely open source the client.
Starting point is 00:30:12 I look forward to it. I think this is going to be an interesting company to watch, a different, interesting space to watch, which, again, I did not see myself saying at this late date. It is very interesting, I think, how much innovation is going to happen in the storage layer in the next 10 years, and I hope that we're an important part of that. But you'll see, with time, with a lot of these AI workloads and the massive data centers for inference and training that OpenAI and CoreWeave and Lambda Labs are building out, that people desperately need a faster, better way to get data into these GPUs. And so I expect that we will not be the only people in this space in the years to come. I really want to thank you for taking the time to speak with me. If people want to learn more, where's the best place for them to go?
Starting point is 00:30:54 So the website is archil.com, A-R-C-H-I-L. And then you can also find us on Twitter at Archil Data, or myself at J-H-L-E-A-T-H on Twitter. And we will, of course, put links to that in the show notes. Thank you so much for your time today. I appreciate it. Thank you, Corey. Hunter Leath, CEO at Archil. I'm cloud economist Corey Quinn, and this is Screaming in the Cloud.
Starting point is 00:31:18 If you've enjoyed this podcast, please leave a five-star review on your podcast platform of choice. Whereas if you hated this podcast, please leave a five-star review on your podcast platform of choice, along with a scathing comment tearing apart the idea of this new file system, without bothering to mention that your favorite file system was ripped out of the kernel
Starting point is 00:31:37 after its creator killed his wife.
