Daybreak - Can an army of Indian engineers help Microsoft take on Nvidia?

Episode Date: February 6, 2025

Nvidia’s dominance in the AI market is forcing Big Tech companies like Microsoft to produce chips of their own. So, the software giant is changing tack in its hiring from Indian colleges. The Ken reporter Abhirami G joins host Rahel Philipose in this episode. Tune in. Listen to 'One Billion in 10 Minutes', our new mini-series based on The Ken's inaugural case competition.

Transcript
Starting point is 00:00:01 Hi, this is Rohin Dharmakumar. If you've heard any of The Ken's podcasts, you've probably heard me, my interruptions, my analogies, and my contrarian takes on most topics. And you might rightly be wondering why I am interrupting this episode too. It's for a special announcement. For the last few months, Seetharaman Ganesh, my colleague and The Ken's deputy editor, and I have been working on an ambitious new podcast. It's called Intermission.
Starting point is 00:00:29 We want to tell the secret-sauce stories of India's greatest companies. Stories of how they were born, how they fought to survive, how they built their organizations and culture, how they managed to innovate and thrive over decades, and most importantly, how they're poised today. To do that, Seetha and I have been reading books, poring over reports, going through financial statements, digging up archives, and talking to dozens of people. And if that wasn't enough, we also decided to throw video into
Starting point is 00:01:01 the mix. Yes, you heard that right. Intermission has also had to find its footing in the world of multi-camera shoots in professional studios, laborious editing, and extensive post-production. Seetha and I are still reeling from the intensity of our first studio recording. Intermission launches on March 23rd. To get an alert as soon as we release our first episode, please follow Intermission on Spotify and Apple Podcasts, or subscribe to The Ken's YouTube channel. You can find all of the links at theken.com slash IM. With that, back to your episode.
Starting point is 00:01:44 You remember Donald Trump's inauguration from a few weeks ago, right? It was nothing like we had ever seen before. He had all the big tech bosses standing right behind him, from Microsoft's Satya Nadella to Amazon's Jeff Bezos. Even TikTok CEO Shou Chew was there. But you know who wasn't invited? Jensen Huang, the CEO of Nvidia, the American company that has been fueling the AI revolution around the world and in China. And then, just about a week later, the world was taken by shock when DeepSeek, a Chinese startup, launched its low-cost AI assistant.
Starting point is 00:02:24 Investors started dumping stocks, and the rest, as we know, is history. Nvidia lost $600 billion of its market cap that day. As tragic as it may have been for the king chipmaker, it wouldn't be a stretch to say that many tech companies must have secretly rejoiced. No one likes a monopoly, after all. Unfortunately though, taking on Nvidia is not everyone's cup of tea. It's going to take some muscle power, the kind that only big tech companies have. Just take Microsoft, for example.
Starting point is 00:02:58 It wants to challenge Nvidia's monopoly in the AI chip market. In fact, it's already built two of its own chips, Maya and Cobalt. Why we are talking about this today is because of two things. Number one, Microsoft is essentially known as a software company, but it is now doubling down on chip making, which is hardware stuff. And number two, fueling this effort is actually Microsoft's India unit. Earlier in January, when Satya Nadella was on an India tour, he announced a $3 billion investment for cloud and AI infrastructure in the country.
Starting point is 00:03:32 This will include the establishment of new data centers. I don't know if you know this, but 20% of the world's chip design talent actually comes from India. So it comes as no surprise that the tech giant has been scouring Indian universities to gather a special army of engineers to target Nvidia's AI chip empire. Hello and welcome to a special episode of Daybreak, a business podcast from The Ken. I'm your host, Rahel Philipose, and I don't chase the news cycle.
Starting point is 00:04:02 Instead, every day of the week, my colleague Snigdha Sharma and I come to you with one business story that is worth understanding and worth your time. In this special episode of Daybreak, I speak to The Ken reporter Abhirami G about how Microsoft has been taking on
Starting point is 00:04:16 the Nvidia monopoly. Okay, so Abhirami, this story begins bang in the middle of placement week at, you know, a few of the IITs. This new role suddenly pops up on the student placement portal, and it turns out Microsoft is looking for
Starting point is 00:04:58 Silicon Hardware Engineers. Explain to me, as someone who hasn't been to an engineering college, why is this so unusual? Basically, the fact is that when you hear Microsoft, you think software. About 90% of the people I reached out to
Starting point is 00:05:14 in Microsoft didn't even know there is a hardware side of things, like a hardware role at all. So it is interesting to know that there is something going on here which is so new that even people in Microsoft don't know about it. It's a really large organization, and one side of it doesn't know what the other side does.
Starting point is 00:05:34 So if people in Microsoft don't know about it, surely people outside don't know much about it either. So that was where it kind of started, and it was possible to get to this place through some digging. But it was a lot of digging. Right, right. But can we talk about what role these engineering freshers were being hired for? Like, what would they actually end up doing at Microsoft?
Starting point is 00:06:01 Okay, so the role that they were hired for is something called a Silicon Hardware Engineer. A silicon engineer, basically, which sounds very fun, but I'm assuming it involves a lot of coding, which I cannot do. But basically, these are mainly electronics engineers, because their curriculum actually does cover this kind of thing. The engineers that ended up being hired will join a team called Silicon Cloud Hardware and Infrastructure Engineering.
Starting point is 00:06:33 It doesn't really roll off the tongue, but this team is close to two years old, and being part of it gives you a fair bit of street cred. In Microsoft India, these engineers will be just three reporting levels below the country head. So it's a pretty high-level team compared to most others in the organization. Okay, but Abhirami, can you explain to me what they will actually be doing?
Starting point is 00:06:56 Essentially, from what I understand from people who work in the electronics engineering and chip design space, there are a couple of different processes that go into chip design: verification, testing, and design itself. What chip designers do is make a design for what the chip would look like. And it sounds easy when you put it like that, but there are so many tiny circuits.
Starting point is 00:07:25 There are so many things that go on in a circuit at the microscopic level. So there is a lot of work that goes into this. A verification engineer's job is to stress test this. You put the chip that you have designed for one specific use case through many different kinds of tests, to see whether it performs in extreme conditions. Yeah, so according to people I talked to for the story,
Starting point is 00:07:55 you need four verification engineers for every chip designer, which means that a lot of chip designers probably start off their careers as verification engineers, because verification is a lower-level set of tasks. Freshers are mostly hired for verification and testing kinds of jobs. So these recruits will be testing, verifying, and eventually designing chips that Microsoft will use for Azure, its cloud computing division.
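The design-versus-verification split described above can be illustrated with a toy sketch. To be clear, real chip design and verification happen in hardware description languages like Verilog, with dedicated testbench tooling; the Python below is only an analogy, modeling a "design" as a function and verification as a stress test against a golden reference plus boundary cases.

```python
import random

# Toy "design": an 8-bit adder that wraps around on overflow.
# (Real designs are written in HDLs like Verilog; this is only an analogy.)
def adder_design(a: int, b: int, bits: int = 8) -> int:
    return (a + b) % (2 ** bits)

# Toy "verification": compare the design against a golden reference on
# boundary values and random inputs, the way a verification engineer
# stress-tests a design before it is signed off.
def verify(design, bits: int = 8, random_trials: int = 1000) -> bool:
    limit = 2 ** bits
    boundary = [0, 1, limit - 1]                      # edge cases first
    cases = [(a, b) for a in boundary for b in boundary]
    cases += [(random.randrange(limit), random.randrange(limit))
              for _ in range(random_trials)]
    return all(design(a, b, bits) == (a + b) % limit for a, b in cases)

print(verify(adder_design))  # → True
```

The ratio mentioned in the episode hints at why: writing and running checks like these, across every corner case, is far more labor-intensive than the design itself.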
Starting point is 00:08:32 You know, a lot of companies have armies of verification engineers sitting in India, doing all the work that's being outsourced to them. You see, these providers, the likes of Azure, Amazon Web Services, and Google Cloud Platform, have ended up becoming some of the most direct beneficiaries of the AI boom. Just take Microsoft's Intelligent Cloud division, which houses Azure and its AI services. Its revenues jumped 40% between FY22 and FY24 to cross $100 billion. Just to put that in context, revenues from all of its other business divisions combined increased only 13% to $140 billion in the same period.
Starting point is 00:09:09 But there is one player that's preventing Microsoft and its peers from making even more money. I'm talking about Nvidia. At the end of the day, it's not like any of these companies really like that they have to depend on Nvidia for this side of things, because Nvidia has a monopoly on the space. All the other players, like AMD and Intel, are quite a bit behind Nvidia, and Nvidia is recognised as the market leader of sorts. So the goal for the likes of Microsoft is to reduce dependence on Nvidia.
Starting point is 00:09:44 You see, Nvidia's graphics processing units, or GPUs, are considered the best in the business. It is the only company that manufactures GPUs capable of handling high-speed workloads, which is great, except that when an AI model is run on Azure using an Nvidia GPU, Microsoft's margins drop to 15 to 20%. Compare that to a regular non-AI workload running on Azure's CPUs: that would bring in margins of about 65% on average. So it makes sense for Microsoft
Starting point is 00:10:17 to make its own chips. And it has. It's released two since 2023, Maya and Cobalt 100. I mean, see, Microsoft has a clear purpose here, which is cost-cutting. They want to have their processes run as cheaply as possible.
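The margin gap quoted above is worth working through with rough numbers. Using the episode's figures (15–20% margins on GPU-backed AI workloads versus roughly 65% on regular CPU workloads), here is a quick back-of-the-envelope comparison; the revenue amount is an arbitrary illustration, not Microsoft's actual financials.

```python
# Back-of-the-envelope comparison of Azure margins, using the figures quoted
# in the episode. The revenue amount is purely illustrative.
revenue = 100.0      # hypothetical revenue from a workload, in dollars

ai_margin = 0.175    # midpoint of the quoted 15-20% for Nvidia-GPU AI workloads
cpu_margin = 0.65    # the quoted ~65% for regular non-AI workloads on CPUs

ai_profit = revenue * ai_margin
cpu_profit = revenue * cpu_margin

print(f"AI workload profit:  ${ai_profit:.2f}")    # → $17.50
print(f"CPU workload profit: ${cpu_profit:.2f}")   # → $65.00
print(f"Ratio: {cpu_profit / ai_profit:.1f}x")     # → 3.7x
```

Every revenue dollar earned on a CPU workload yields roughly three to four times the profit of one earned on an Nvidia-GPU workload, which is the economic pressure behind Maya and Cobalt.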
Starting point is 00:10:36 And also, a GPU is sometimes overkill for a lot of things you want to do. In that case, it doesn't hurt to have a slightly lesser-performing chip that cuts down on a lot of your costs, if you can make it work out cheaper than getting an Nvidia GPU. Because Nvidia's business is a monopoly, they can price it at whatever point they want.
Starting point is 00:11:00 And that's not really helpful for companies who want to cut costs. It works for startups, and it works for companies who don't really have an alternative but to go with a provider. But for a big tech company that has the capability to source tech at the scale that it can, it makes sense to develop internally. And I think that's why Microsoft is doing this. Okay.
Starting point is 00:11:24 And, you know, see, I understand what you're saying about how it makes sense for them. I get that. But at the same time, there is a kind of resentment that is growing within big tech against Nvidia, right? Oh, absolutely. And not just big tech. I think the entire AI ecosystem, of sorts. And I think that resentment is
Starting point is 00:11:46 in some ways well-founded, because Nvidia is the only player that's available if you want to do cutting-edge AI work. There are still people who use AMD, but a lot of that is for research purposes. If you want to do the fastest, or most efficient, kind of work, Nvidia is still the go-to for most players. And I don't think anyone really wants to talk about this resentment openly, because, again, they still depend on Nvidia.
Starting point is 00:12:22 But at the same time, you can sense that there is this undercurrent through tech circles, and people do want alternatives. So yes, tech companies of all sizes have been grumbling about the Nvidia monopoly, but only big tech firms have the muscle
Starting point is 00:12:44 to do anything about it, which is why Microsoft launched Maya and Cobalt in the first place. That team of engineers in the Silicon Cloud Hardware and Infrastructure Engineering team will have a big role to play going forward. And now the stakes are even higher, because Nvidia isn't content with just being the preferred chipmaker for AI workloads anymore,
Starting point is 00:13:05 it's planning to launch a development environment of its own to write AI programs. Essentially, a serverless API. What Nvidia
Starting point is 00:13:22 means by a serverless API is this: before, if you had an Nvidia GPU, you kind of needed it to be connected to something like an Azure or an AWS to be able to use the capabilities of the GPU, especially if you are an enterprise customer. Because a piece of hardware without software to use its capabilities
Starting point is 00:13:40 is just a piece of hardware. You can't really do anything with it. And what Nvidia is giving you the option to do with a serverless API is that you can basically access the capabilities of the GPU without having
Starting point is 00:13:55 an intermediate development environment. So you don't need the capabilities of an Azure or whatever to run these kinds of models. Nvidia provides its own interface to you. Now, this is bad news for the likes of Azure and AWS, the intermediaries, because it
Starting point is 00:14:13 essentially cuts them out of the equation entirely. And that is a threat to these guys because it brings different kinds of providers, who have GPUs but don't necessarily have the tech to build over the GPUs, into the picture. And these are mainly cloud providers. Because cloud providers are the ones who have this corpus of GPUs, you know, the stockpile of GPUs just, I won't say lying around, because they are being used, obviously.
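The shift being described, where an enterprise calls the GPU vendor's own serverless endpoint instead of going through a cloud intermediary like Azure or AWS, can be sketched as two request paths. Everything in this sketch is hypothetical: the URLs, tokens, and model identifiers are invented placeholders, not Nvidia's or any cloud's real API.

```python
import json

# Hypothetical sketch of the two access paths. The endpoints and tokens below
# are invented placeholders, not real Nvidia or cloud-provider APIs.

def via_cloud_intermediary(prompt: str) -> dict:
    """Old path: the cloud platform's serving layer sits between the
    enterprise customer and the Nvidia GPU, billed via cloud credits."""
    return {
        "url": "https://example-cloud.invalid/deployments/some-model/infer",
        "auth": "Bearer <cloud-credits-token>",  # consumes Azure/AWS credits
        "body": json.dumps({"prompt": prompt}),
    }

def via_vendor_serverless(prompt: str) -> dict:
    """New path: the GPU vendor's own serverless API serves the model,
    cutting the cloud intermediary out of the equation."""
    return {
        "url": "https://example-gpu-vendor.invalid/v1/serverless/infer",
        "auth": "Bearer <vendor-token>",         # no cloud credits involved
        "body": json.dumps({"prompt": prompt}),
    }

# Same prompt, same GPU doing the work; only the software/billing layer differs.
old = via_cloud_intermediary("hello")
new = via_vendor_serverless("hello")
print(old["url"])
print(new["url"])
```

The request payload is identical in both cases; what changes is who operates the endpoint and who collects the money, which is exactly why the intermediaries see this as a threat.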
Starting point is 00:14:41 But it makes it much easier for them to approach customers and be like, hey, look, we have GPUs. Nvidia is providing this kind of software layer. You don't really need to have any kind of AWS or Azure credits to use this. Because obviously, using AWS or Azure uses up money. You need to have credits; it's a whole thing. So these cloud providers can be like, okay, we can offer these
Starting point is 00:15:12 to you at a competitive price. And that puts them in the running with these big tech players as well, which is where Nvidia sees an advantage, because a lot of Nvidia's business now comes from data centers. So for now at least, it looks like Microsoft's silicon hardware engineer
Starting point is 00:15:29 role will return to India's engineering college campuses next year as well. Daybreak is produced from the newsroom of The Ken, India's first subscriber-focused business news platform. What you're listening to is just a small sample of our subscriber-only offerings. A full subscription unlocks daily long-form feature stories, newsletters, and podcast extras.
Starting point is 00:15:55 Head to theken.com and click on the red subscribe button at the top of The Ken website. Today's episode was hosted and produced by Rahel Philipose, and it was edited by Rajiv Sien.
