The Good Tech Companies - Democratizing AI: How IO.NET's CTO is Building the 'Airbnb of GPUs'

Episode Date: December 18, 2024

This story was originally published on HackerNoon at: https://hackernoon.com/democratizing-ai-how-ionets-cto-is-building-the-airbnb-of-gpus. IO.NET is building a platform that could democratize access to AI computing resources while reducing costs by up to 75% compared to traditional providers. Check more stories related to tech-stories at: https://hackernoon.com/c/tech-stories. You can also check exclusive content about #io.net, #io.net-news, #io.net-announcement, #blockchain, #dlt, #cryptocurrency, #good-company, #ai, and more. This story was written by: @ishanpandey. Learn more about this writer by checking @ishanpandey's about page, and for more stories, please visit hackernoon.com.

Transcript
Starting point is 00:00:00 This audio is presented by Hacker Noon, where anyone can learn anything about any technology. Democratizing AI: How IO.NET's CTO is Building the Airbnb of GPUs, by Ishan Pandey. The artificial intelligence boom has created an unprecedented demand for GPU computing power, but access remains concentrated among a few major cloud providers. IO.NET, a startup focused on decentralized GPU infrastructure, aims to change this dynamic by creating what its leaders call the Airbnb of GPUs. In this exclusive interview, Gaurav, IO.NET's CTO and former Binance technical leader,
Starting point is 00:00:39 discusses how the company is building a platform that could democratize access to AI computing resources while reducing costs by up to 75% compared to traditional providers. Ishan. Welcome to our Behind the Startup series. Please tell us about yourself, your journey, and what inspired you to join IO.NET. Gaurav. My journey has been quite straightforward, starting as a software engineer in Pune. I worked at several startups there before moving to Bangalore, where I joined HP R&D and helped build their network file system from scratch. At Amazon, I worked on their publishing pipeline for Android apps, e-books, and Audible books.
Starting point is 00:01:17 I then moved to eBay, followed by a major OTA company in Thailand that was a market leader in Vietnam, Singapore, and Malaysia for hotel and flight bookings. I spent about five to six years on their leadership team before joining Binance, where I led the creation of a scalable platform for KYC compliance and fraud detection for over half a billion users. Throughout my career, I've worked with AI in various forms and witnessed firsthand how people struggle to access the computing resources they need. Ishan. Tell us about your role at IO.NET and what future you see for decentralized computing compared to centralized architectures. Gaurav. As CTO, my main role is creating a scalable platform that makes it easy for suppliers to plug in and for consumers to use these resources.
Starting point is 00:02:02 We started with GPUs, but our vision extends beyond that. The key advantage of our decentralized approach is scalability. Traditional data centers face significant challenges when expanding to new regions. They need to rent space, hire teams, order equipment, and handle maintenance. This creates high upfront costs that eventually get passed on to users. Our decentralized model allows us to scale much more efficiently by leveraging existing infrastructure. Ishan. How does your business model work compared to centralized vendors like Azure, which charge significant amounts for AI model hosting? Gaurav. We follow a model similar to Uber. While anyone can create similar software,
Starting point is 00:02:42 our advantage lies in our supply-side connections. Our team has built deep relationships with infrastructure providers worldwide, enabling us to source GPUs at competitive prices. Our prices are typically 75% lower than Amazon and Google. We offer both hourly rates and longer-term commitments of 6-9 months. We also provide managed services for startups that want to focus on their core business rather than managing infrastructure.
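To get a feel for how a gap like that compounds over a month of usage, here is a minimal back-of-the-envelope sketch. The hourly rates below are illustrative placeholders chosen only to reproduce the "up to 75% lower" figure quoted in the interview; they are not actual io.net or cloud prices.

```python
# Hypothetical monthly cost comparison for a rented GPU cluster.
# All rates are placeholder assumptions, not quoted io.net or cloud prices.

HOURS_PER_MONTH = 730

def monthly_cost(hourly_rate: float, gpus: int, utilization: float = 1.0) -> float:
    """Estimated monthly spend for a cluster billed by the GPU-hour."""
    return hourly_rate * gpus * HOURS_PER_MONTH * utilization

cloud_rate = 4.00        # assumed hyperscaler price per GPU-hour (placeholder)
decentral_rate = 1.00    # assumed decentralized price per GPU-hour (placeholder)

cloud = monthly_cost(cloud_rate, gpus=8)
decentral = monthly_cost(decentral_rate, gpus=8)
savings = 1 - decentral / cloud

print(f"Hyperscaler:   ${cloud:,.0f}/month")
print(f"Decentralized: ${decentral:,.0f}/month")
print(f"Savings:       {savings:.0%}")   # 75% with these placeholder rates
```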
Starting point is 00:03:15 Ishan. How has the traction been so far? Gaurav. The response has been strong. We recently fulfilled an order for 1,500 4090s and are close to signing deals with two Asian Web2 companies that each have over 200 million users. While we initially focused on crypto companies due to our network, we're seeing increasing interest from traditional tech companies looking to save costs. Ishan. Can you explain how a decentralized training architecture would work? With decentralization, either scalability or security might be affected; how do you reconcile this? Gaurav. It depends on how you define scalability. Let me illustrate with an example from the data center business. If you're a data center provider in North America and I need 1,000 H100s in Singapore, the traditional process is
Starting point is 00:03:57 extremely challenging. You'd need to rent space, hire a team, order GPUs, and handle shipping, maintenance, and setup. This creates significant upfront costs and a slow time to market, which ultimately gets passed on to users. In our decentralized model, because the inventory is distributed, we don't face these challenges. Adding capacity is as simple as connecting new providers to our platform. It's similar to how hotel availability works: just because the major chains are fully booked doesn't mean there are no rooms in the city. There's actually substantial GPU capacity available, but no one has built an Airbnb for GPUs to aggregate this inventory efficiently. Ishan. If I understand correctly, if there's a student or gamer in Bangalore and a company in
Starting point is 00:04:42 the US with idle GPUs, they could connect through your platform? Gaurav. Exactly. Someone from Thailand or India who wants to train a specific model, whether it's an LSTM or any other type, can use these GPUs. Because it's a rental-based model, it's more economical than traditional providers.
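From the renter's side, a minimal PyTorch sketch of what such a training job could look like is shown below. It assumes only that the rented machine exposes a standard CUDA device; the model, data, and hyperparameters are placeholders for illustration and are not specific to io.net's platform.

```python
# Minimal sketch: once a rented GPU is attached, training looks like any other
# CUDA workload. Model, data, and hyperparameters are placeholders.
import torch
import torch.nn as nn

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

class TinyLSTM(nn.Module):
    def __init__(self, vocab=10_000, embed=128, hidden=256):
        super().__init__()
        self.embed = nn.Embedding(vocab, embed)
        self.lstm = nn.LSTM(embed, hidden, batch_first=True)
        self.head = nn.Linear(hidden, vocab)

    def forward(self, x):
        out, _ = self.lstm(self.embed(x))
        return self.head(out)

model = TinyLSTM().to(device)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Dummy batch standing in for a real dataset streamed to the rented node.
x = torch.randint(0, 10_000, (32, 64), device=device)
y = torch.randint(0, 10_000, (32, 64), device=device)

optimizer.zero_grad()
logits = model(x)
loss = loss_fn(logits.view(-1, logits.size(-1)), y.view(-1))
loss.backward()
optimizer.step()
print(f"training on {device}, loss = {loss.item():.3f}")
```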
Starting point is 00:05:21 Ishan. What do you think about the race between frontier models right now, from Llama to OpenAI to Anthropic? Gaurav. It's largely speculation at this point. We've taken a significant leap forward in AI capabilities over the past couple of years. While it's unclear which company will ultimately lead the space, and it could even be a Web3 player, what's certain is that we'll see tremendous innovation over the next three years. Ishan. How is IO.NET's governance model structured right now? Gaurav. We're currently semi-decentralized. We actively listen to our community through weekly AMAs and implement their feedback. Our internal team reviews user tickets and requests weekly to guide our development
Starting point is 00:05:42 priorities. Our community engagement primarily happens through X (formerly Twitter), Discord, and our AMAs, with over half a million followers across platforms. Ishan. What technical challenges did you face while developing this platform, given it's a novel concept without existing decentralized AI architectures? Gaurav. Our rapid scaling presented both opportunities and challenges. When I joined, the platform was designed for 100,000 GPUs, but we quickly needed to handle millions. This required significant architectural changes to manage security,
Starting point is 00:06:17 stability, and scalability. The founder recognized the need for experienced leadership in building scalable platforms, which led to hiring me and allowing me to build a team of experienced professionals from companies like Amazon and VMware, along with top AI researchers. The key was having people who had previously built similar scalable systems. We've assembled a team including PhDs in machine learning and veterans from leading tech companies, all focused on solving these complex technical challenges while maintaining the decentralized nature of the platform. Ishan. Tell us more about the background of the team, how the journey started, what the first idea was, any pivots before arriving at this model, and what future you see for IO.NET
Starting point is 00:06:59 in the next 1-2 years. Gaurav. I joined about 7 months ago, roughly three to four months after the company was founded. From day one, the vision was to create a hybrid of DeFi and AI platforms to enable builders to create models. When I joined, the founders and I aligned on a crucial strategy: we needed to offer something that would be extremely difficult for competitors to match. We identified GPU sourcing at competitive prices as that key differentiator. While other crypto platforms might offer similar pricing, they struggle with scale. If you ask them for 1,500 GPUs, they often can't deliver because their business model isn't truly
Starting point is 00:07:38 decentralized. Even if they create smart contracts, if they own their own data centers, scaling becomes extremely challenging. It is the same problem Azure faces: you can't claim to be decentralized just by adding smart contracts on top of centralized infrastructure. Ishan. Software development is always challenging, and this platform is truly novel, since there are no decentralized AI architectures for GPU hosting right now. What technical problems did you encounter? Gaurav. We faced an interesting challenge of scaling much faster than anticipated, a good problem from a business perspective but tricky from an engineering standpoint. Imagine building a platform for 100,000 GPUs and suddenly needing to handle half a million or more. During airdrops, we faced massive influxes
Starting point is 00:08:25 of users and potential Sybil attacks while scaling rapidly. Creating a secure, stable platform that could handle 50 to 100 clusters simultaneously, with no bottlenecks, while allowing rapid supply additions of 1,000 GPUs per minute: these were significant challenges.
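One generic building block often used to absorb that kind of sudden influx is per-identity rate limiting. The token-bucket sketch below illustrates only the general technique, not io.net's actual Sybil defenses; the identity key, burst allowance, and refill rate are all assumptions made for the example.

```python
# Generic token-bucket rate limiter keyed by a verified identity.
# Illustrative only; not a description of io.net's actual defenses.
import time
from dataclasses import dataclass, field

@dataclass
class Bucket:
    capacity: float = 10.0        # assumed max burst per identity
    refill_rate: float = 1.0      # assumed tokens added per second
    tokens: float = 10.0
    last: float = field(default_factory=time.monotonic)

    def allow(self) -> bool:
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.refill_rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False

buckets: dict[str, Bucket] = {}

def admit(identity: str) -> bool:
    """Admit a request keyed by some identity (wallet address, device fingerprint, etc.)."""
    return buckets.setdefault(identity, Bucket()).allow()

# A single identity hammering the endpoint is throttled after its burst allowance.
print(sum(admit("wallet-0xabc") for _ in range(50)))  # typically 10: the burst allowance
```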
Starting point is 00:09:11 The founder recognized that while he could build the company to a certain level, taking it further required people with experience building scalable platforms and businesses. That's what I respect about him: he acknowledged this need and gave me the authority to build the right team. We've brought in talent from Amazon, VMware, and various other top companies. We have PhDs in machine learning and product experts from major tech companies; you can verify these backgrounds on our website. The founders supported this approach, understanding that turning the product into a real business required people who had done it before. Their support in this transition has been crucial to our success. Don't forget to like and share the story. Vested Interest Disclosure: this author is an independent contributor publishing via our business blogging program. Hacker Noon has reviewed the report for quality, but the claims herein belong to the author. Hashtag DYOR. Thank you for listening to this Hacker Noon story, read by Artificial Intelligence. Visit HackerNoon.com to read, write, learn, and publish.
