Command Line Heroes - At Your Serverless: Development Empowerment with Control

Episode Date: December 4, 2018

What does serverless really mean? Of course there are still servers—the basics of the internet aren’t changing. But what can developers accomplish when someone else handles the servers? Serverless computing makes it easy for beginners to deploy applications and makes work more efficient for the pros. Andrea Passwater shares how convenient it can be to abstract away (or remove from view) the infrastructure components of development. But as with any convenience, going serverless has tradeoffs. Rodric Rabbah explains that going serverless can mean giving up control of your deployment and restricting your ability to respond to problems—which is why he helped create Apache OpenWhisk, an open source serverless environment framework. And Himanshu Pant considers when to use serverless services. Serverless computing should be about developer empowerment. But we have to stay curious about the big picture—even as we simplify our toolbox. If you want to dive deeper into the question of serverless development—or any of the subjects we’ve explored this season—check out the resources waiting for you at redhat.com/commandlineheroes. While you’re there, you can even contribute to our very own Command Line Heroes game.

Transcript
Starting point is 00:00:00 But now, of course, all over the United States of America and all over the world, the Internet is revolutionizing our lives. It's 1998. Google just hired its first employee. And Vice President Al Gore is talking to the press. This technology is still in its infancy. When President Bill Clinton and I came into the White House, there were only 50 sites. And look at it now. I got a bouquet of virtual flowers on my birthday. Okay, I can sense your eyebrow arching already. Why am I playing you some bit of 20-year-old Internet history?
Starting point is 00:00:46 It's because I want to remind you that the basics of the internet are still the same. Sure, there are more than 50 sites now, I get it, but we're still sending virtual flowers. And from a developer's perspective, if you strip away all our incredible advances, you've still got that same client-server model that started it all. A client-server model that allows for a distributed network. Today, developers talk a lot about going serverless, which sounds like Al Gore's client-server internet just got trashed. And if we're not careful, we can abstract away so much infrastructure that we forget there are still servers out there doing their server thing.
Starting point is 00:01:36 But does serverless literally mean no servers? Really? Or is the developer's relationship with servers just evolving? In this episode, we're talking with people from around the world to explore this thing called serverless. I'm Saron Yitbarek, and this is Command Line Heroes, an original podcast from Red Hat. Did you know wireless internet has wires somewhere? Andrea Passwater works for a company called, wait for it, Serverless. They created a popular open source framework for developing serverless applications. Andrea is noticing how organizations are hungry for ways to abstract away infrastructure, which is what that magical word
Starting point is 00:02:26 serverless is always promising. I think the term is mostly just supposed to convey the fact that as a developer who works in serverless applications, that's abstracted for you. You don't have to worry about those servers. You just get to code and deploy that code up to a cloud provider and not have to worry about the administration. That's really what serverless means. For Andrea, the attractions of serverless are pretty obvious. If you develop applications in a serverless way, it gives you the ability to not have to think about the mundane parts, deploying that application and maintaining that application. It just means that you can focus on business value. You can focus on being creative.
Starting point is 00:03:12 And another big serverless bonus is you're less likely to find yourself reinventing the wheel. Why would you create your own way to handle authentication when services like Auth0 exist that you could just use and tap into. So at the end of the day, serverless is about giving developers the opportunity to be able to more easily and more rapidly build all of these ideas in their heads that they want to get out into the world. Imagine you've got your arms full of groceries and you're stumbling toward a door. The door slides open in a simple, friendly, allow me kind of way. That's serverless. It's opening the door for you, making development a lot less cumbersome. In fact, as organizations flock toward hybrid cloud setups and the serverless movement gets underway,
Starting point is 00:04:06 the barriers toward development are vanishing. Andrea's been hearing a lot of talk about non-developer development. So sort of stories from people who traditionally thought they couldn't code and who are now actually able to get into the software engineering game because of serverless and able to make these
Starting point is 00:04:25 tools that automate their own workflows and stuff like that. It doesn't matter what job you do. There's something you do on your job that is so rote, like you do it every single day. And it's this thing that you're like, you know, couldn't a computer do this for me? And I started to feel that way. And I happened to work at this company called Serverless and they were like, you realize that the product we make can help you with that, right? Andrea figures that pretty soon, a lot of people who never consider themselves developers are going to realize they can build simple apps themselves at essentially no cost. With Lambda, I've never had to pay for any of these small applications that I've made.
Starting point is 00:05:06 And I can make these bots that do part of my job for me. And I can become more efficient at my job, yes, but I also don't have to do that boring stuff anymore. And then I can do something that's more fun. And even for pro developers, that automatic door effect is pretty tempting in an arms-full-of-groceries kind of world. I think people are very attracted to the idea that they can get prototypes working with a one
Starting point is 00:05:32 or two person team in a very short amount of time, like a handful of days, they can get a prototype up and running. I think it makes it very exciting for people to start realizing that they get to just focus on what drives business value in their application or for their product, for their company. They get to focus on that business value. I'm going to throw another term at you. Ready? Functions as a service. That's the offering at serverless places like AWS Lambda or Apache OpenWhisk. Functions as a service means a single function can be executed on demand, only when triggered. And that's a lot more efficient.
Starting point is 00:06:15 Plus, I'm way less worried about compute capacity and runtime. End of the day, that serverless deal can be a pretty sweet setup. In fact, some folks have even started wondering, hey, are we going all in on serverless? Does it maybe replace containers? I see the point. Michael Hausenblaus is the developer advocate for the OpenShift team over at Red Hat. If you look at all these things we have here, you know, OpenShift and Cloud Foundry and whatnot, you have essentially these abstractions, right?
Starting point is 00:06:49 This idea that Heroku essentially invented, more or less, right? This very simple way of, don't worry about how the execution model, delivery model really looks like. Just give us the code and we take care of the rest. Yeah, that sounds pretty good. It kind of sounds like that dream of a no-ops environment where everything's automated
Starting point is 00:07:11 and abstracted away. It's like the developer's version of minimalist interior design. Nice, clean surfaces. But Michael wants to give you a little reality check. Oh, no ops, right? Magically, this will somehow go away. And you see these jokes on Hacker News and Twitter and whatever, like, servers, of course there are servers. Of course there's operations. Someone has to do that.
Starting point is 00:07:41 Someone has to rack the servers. Someone has to patch the operating system. Someone has to create container images because guess where these functions are executing? Of course, in some kind of container. they add a tool to the toolbox. And I've got some more news for you. Using that new tool, going serverless, doesn't just mean the ops is somebody else's problem. You've still got ops of your own to think about. So you see, there is operational bits on the infrastructure side, but also with the developers, right? If you're in an extreme case using, let's say, Lambda, then you have zero access to any kind of administrators, right? You cannot simply call or page an infrastructure administrator. So obviously, someone in your organization has to do that. But I fear that many organizations only see, oh, you know, it's so simple and cheap and we don't need to do this and this and this.
Starting point is 00:08:46 And then forget about, you know, who's on call and who really is on call. Do you have a strategy for that? If no, then you might want to come up with a strategy first before you, you know, go all in there. Someone needs to be on call. Even if you do go, quote unquote, serverless, you still need to have your head wrapped around that bigger picture. Someone needs to be on call. Even if you do go, quote unquote, serverless,
Starting point is 00:09:08 you still need to have your head wrapped around that bigger picture. You still need to get your operations in order. When I threw out that term earlier, functions as a service, did you cringe a little? Over the last while, cloud-based development has brought us an army of as-a-service terms. We've got infrastructure as a service. We've got platform as a service. We've got software as a service, data as a service, database as a service. You get the idea. And if you're having trouble keeping the differences straight, you're not alone. That's why we tracked down Heeman Chupan. He's a tech lead at the Royal Bank of Scotland over in Delhi,
Starting point is 00:09:58 India. And he spent years parsing out the differences here. These other computing paradigms are so similar sounding in name to serverless that one tends to forget that or tends to get confused as to why this is not being called a serverless or why this is being called a serverless. So serverless is not the same as containers. Serverless is not platform as a service. But He-Man Chu wanted to nail it down. What can functions as a service provide exactly? And what can't it?
Starting point is 00:10:35 He shared two anecdotes with us, two times when he figured out when to go with serverless and when to forego. The first moment came during a 24-hour hackathon. Heemanshu was trying to make a chatbot. There were various, you know, vectors on which this was going to be assessed. For example, the coverage of logic, the cost which would be incurred and the scalability. So I sat down to do this work in serverless as i did i realized that cost aspect was one aspect which kind of really tilted the scale in my favor so even though all the other participants they had a much better i would say coverage or maybe coverage of logic or the nlp situations or the
Starting point is 00:11:18 scenarios but as far as cost is concerned and scalability i was going hands down the winner over there because with serverless it all depended on how many invocations people are doing on that chatbot and accordingly the functions will be triggered and so this was one use case where i was very much happy to do serverless because number one yeah cost there was no cost faster development time and to be honest it was not exactly a production scale workload at that moment. And I could make do with the somewhat infant tooling of that platform. So yeah, it was a win for me. Nice. So that was a time when serverless made sense. But at the bank He-Manchu's working
Starting point is 00:11:59 in right now, they're migrating their systems from legacy to cloud. And that's bringing up different kinds of goals. We are trying to see which workload can go on to which paradigm. So should this go into IaaS, PaaS, Faas? Obviously, you know, once you come down to enterprise space, you need to see that there are no aspects like number one, let's say vendor lock-in. Number two, the technology should be proven extensively and more so for a risk-averse industry like, you know, banking sector. So this is where a platform as a service that still has a better proving and a better capability and a better tooling kind of takes the upper hand.
Starting point is 00:12:39 So he meant to choose looking at his own needs and his own comfort levels and curating which workloads make sense in which cloud computing paradigm. Let's say if our listener is working on a trading shop and he wants to build something which is event-driven. For him or her, serverless may not really be apt because the latency may not really be desirable in that kind of a mission-critical application.
Starting point is 00:13:04 End of the day, it's a measured approach instead of throwing everything into one bucket. When we're thinking about which cloud-based architecture is actually right for the work we want to do, there's one more thing to consider. How all that abstracting, all that taking things off your hands, can end up changing not just our work lives, but the finished work itself. Abstracting away part of our workload can mean less ability to customize. Think of a car you bought off the lot. It works. It drives. But then, think of a car you built on your own.
Starting point is 00:13:45 That one works the way you decided it would work. It comes at a cost. Rania Khalif is the director of AI engineering at IBM Research. In using these serverless applications, you may not have full control of everything that's going on. You don't have control of scheduling or when they'll run or where. There's a trade-off taking place, right? Fine-grained control may slip when you're using serverless. It abstracts so much away from the end user that if you do want to have more control, different scheduling, more checks and balances, different values on how long a function can run for, so on and so forth, then you really want to be able to get in there and tinker and maybe create your own deployments.
Starting point is 00:14:32 That would require something new, though. A new kind of serverless that open source communities are already building for themselves. Rania and her team at IBM are part of that movement. We first worked on a language that was basically JavaScript extensions to let you create these multi-threaded interactive service compositions as a starting point to give you a lighter weight way. And that was around the same time that cloud and microservices and platform as a service were really picking up. So just combining these two trends and saying there is this idea of being able to build higher order function from many small pieces that may or may not come from you. Rania and her team were building Apache OpenWISC, an open source functions platform. With OpenWISC from the beginning, we made it open source.
Starting point is 00:15:26 And a big part of that was to really enable the community to participate with us, but also to peel away the covers and give control to the people that are wanting to operate their own serverless computing environments so that they can customize it to their needs, maybe put in their own controls, see how it really works and control it better, but also provide the kind of finer-grained control that people wouldn't have with it if it was only offered as a service.
Starting point is 00:16:02 Giving control back to anyone who wants to operate their own serverless environment. It's next stage serverless. Joining OpenWhisk, you've got other open source platforms like Fission and Gestalt. And we start to see the serverless arena evolving into something more adaptable and more powerful than before. To really get why an open source version of serverless matters,
Starting point is 00:16:38 I got chatting with one of the founders of OpenWhisk. Hi Roderick, how's it going? Good, how are you? Thanks for having me on. Roderick Rabba was one of the three developers who conceived of and founded OpenWhisk. Here's our conversation. It tends to be confusing for others or, you know, tends to get snickers because people tend to think, well, how could you possibly compute without servers? Right, the server's there somewhere. It's just, I don't have to worry about it. Exactly. And that's really the beauty of this model. When you start developing in a serverless style, you never really want to go back. I've been in it for close to four years now,
Starting point is 00:17:14 and I've developed some production quality applications. And this is the only way I develop now. If you tell me I have to provision a machine and install an OS, it's completely foreign to me. I'm not even sure I'd know how to do it anymore. Yeah, I mean, when you put it like that, it sounds like a huge load off. Because when you initially hear serverless, at least I think, oh man, it's yet another thing I have to learn. But when you put it that way, it sounds nice. It does sound nice.
Starting point is 00:17:41 And then you have to realize that you have to take a little bit of air out of the bubble. It's not a silver bullet. What are some of the surprising risks or issues that people may not see or be aware of when they get started? I think the lack of transparency is possibly the biggest one. It's sort of reminiscent to me of technology that came about when new languages came about and raised the level of abstraction relative to the computer. It's a similar kind of startling effect in serverless today in that you write typically a function and then you just deploy that function. And it's instantaneously available to run, say, on the web as an API endpoint. It scales massively.
Starting point is 00:18:26 You can run thousands of instances without any work on your part. But if something goes wrong, it's like, how do I debug this? Or I actually want to inspect the context within which my function failed. Typically, these functions run within processes that are isolated from you. You can't even log into the machine to see where your code is running. They might run in container environments that are closed off. You don't know what's in them. So it becomes hard for you to get that little bit of transparency.
Starting point is 00:18:55 And this is where tools will eventually help. But the lack of tools sort of makes that a pretty significant pitfall for people to get their heads around. That was really good. Okay, let's go back to OpenWhisk, right? Tell me about that. So the project started right around the time Amazon Lambda announced their offering, which was really where serverless started to get into the nomenclature
Starting point is 00:19:20 and started to gain mindshare in the space. And when we saw Lambda, we started thinking, there's a lot of technology here that has to be developed, not just at the base layer in terms of a new cloud computer, but really in terms of the programming model that you put on top of it to make it more easily accessible to programmers. And coming out of IBM research, we had a pretty strong set of skills around programming language design, compiler expertise, and runtime expertise. So a small team of us, basically three people, got together to essentially do the initial development and
Starting point is 00:19:59 prototyped what became eventually OpenWISC with respect to the command line tools, which is really the programming interface for serverless today, the programming model concepts, and then the actual architecture that has to support essentially this function-as-a-service model and give you all the benefits that serverless espouses. So the genesis was really Amazon Lambda coming on the scene and saying, there's this new model of computing, pay attention. How long did it take? Or the first version anyway?
Starting point is 00:20:29 It happened quite fast. In fact, when IBM announced, what was at the time, IBM OpenWhisk, it was one year to the date from our first commit. Wow. Oh my goodness. That was quite exciting. That's really impressive. And actually, when it first started, it wasn't Open Whisk. It was just Whisk, right? Whisk was the internal name. That's right. And I'm responsible for the name. So the idea behind the name was to move quickly and nimbly. And you whip up a function and there it is. You can put it in the oven and bake it. That's wonderful because I was definitely thinking eggs when I saw that.
Starting point is 00:21:06 I was thinking, let's whisk some eggs. We've gotten some positives and some negatives on the name. When we open source the technology and put it out on GitHub, we put the open prefix on it to emphasize that this is open, as in open source and free to use, free to download, free to contribute to. Our goal in putting on open source was really to raise the bar in terms of what's available to execute these days as a serverless platform. It was important to us to build a platform that is not only production ready and share it with the world, but also to make it possible for academic research or research in general.
Starting point is 00:21:53 Maybe coming out of IBM research, we cared about that a little too much. But it sort of paid off in that I know of universities that actually use OpenWIS for their own research, from Cornell to Princeton. And I've gone to several universities, like Brown, Williams College, MIT, CMU, and I've given talks with the purpose of encouraging students to really look at the problems around serverless and functions of service, the tooling, the programming model, and get them excited about the technology, showing them that there's a path to where,
Starting point is 00:22:24 if they actually contribute to the open source project, it's picked up by IBM Cloud Functions and run in production usually within a week. Wow, that's so fast. And that's been surprising to some people. That's a super efficient process. It's really a testament to how we developed a lot of technology in the open.
Starting point is 00:22:43 It's not an open core model where there are some components that have been held back. What's running in the IBM cloud is really what's in the Apache OpenWISC project. When you think about the future of serverless and the options we may have moving forward, do you feel like they will inevitably be open? I think there's a raging debate these days about the value of open source, especially in the cloud. And if you consider why people go to the cloud or why they might have aversions to go into the cloud, it's this whole concept of vendor lock-in losing transparency. And so open source has played an important role
Starting point is 00:23:26 in alleviating some of these issues. And then you look at efforts like Kubernetes, which is just gobbling up the cloud in terms of the container management system and how successful that's been. And if you're doing something that even touches containers, does it even warrant having a discussion about keeping a closed source given how dominant it is? So I tend to think that openness helps. such as containers, does it even warrant having a discussion about keeping a closed source, given how dominant it is?
Starting point is 00:23:49 So I tend to think that openness helps. It's compelling from developers' perspectives. When you think about the future of the serverless ecosystem and tools and projects and services that we're going to see, what does that look like? What does the future of service look like for you? I think you start to think less and less about the underlying technology and it becomes more and more about the programming experience and the tooling around it. The tooling for debugging, the tooling for deployment management, the tooling for performance analysis, the tooling for security. All of these are fundamentally important. I think the underlying mechanics of how you run your function,
Starting point is 00:24:29 whether they run in a container or some future technology, whether you can run them on one cloud or multi-cloud, I think fades into the background, kind of like what Kubernetes did for containers and container management. In a similar way, there's a layering that's going to come on top, which is the function of the service layering to give you that kind of serverless notion. And then it's really about what's the new middleware that you're putting on top of it and how are you empowering developers to really take advantage of this new cloud
Starting point is 00:25:00 computer and the tooling that you're going to put around it to make their experience pleasant. Yeah. What does that empowerment look like? Efficiency, to put it in one word. It's the ability to just focus on the things that are of value to me as a developer or the value to my company if I'm working at a corporation. And so it's more rapid innovation that then you get out of that because you freed up your brain cells to not think about infrastructure and how things scale
Starting point is 00:25:31 and how things are secured at the hardware level. And now you can really innovate in terms of rededicating sort of that brain power to just innovating more rapidly, delivering more value to your end users. And I've lumped that all into just better efficiency. to just innovating more rapidly, delivering more value to your end users.
Starting point is 00:25:48 And, you know, I've lumped that all into just better efficiency. Roderick Raba is a founder of OpenWhisk. Remember what I said at the top of the show? That old server-client model that the internet is based on really isn't going anywhere. What's changing, and I mean radically changing, is the way we think about those servers. In a so-called serverless world, the hope is that we concentrate on the code itself and don't have to worry about infrastructure. But the level of abstraction we select and how we maintain control over work we don't abstract away are where that serverless world gets interesting.
Starting point is 00:26:34 Serverless should ultimately be about developer empowerment, the freedom from patching, scaling, and infrastructure management. But at the same time, we have to stay curious about how that big picture works. Even as we abstract some tasks away, we're going to be asking, what controls am I giving up? And what controls do I want to take back? Next episode, it's our epic season two finale. Command Line Heroes is taking...
Starting point is 00:27:12 A journey to Mars. We're learning how NASA's Martian rover is kicking off an open source revolution of its own. And we're hanging out with the CTO at NASA's Jet Propulsion Laboratory, no biggie, to learn how open source is shaping the future of space exploration. Meantime, if you want to dive deeper into the question of serverless development, or any of the subjects we've explored this season, check out the free resources waiting for you at redhat.com slash command line heroes. While you're there, you can even contribute to our very own command line heroes game. I'm Saran Yadbarek. Thanks for listening and keep on coding. Thank you. Why is that? Well, first, models aren't one-size-fits-all. You have to fine-tune or augment these models with your own data, and then you have to serve them for your own use case. Second, one-and-done isn't how AI works.
Starting point is 00:28:32 You've got to make it easier for data scientists, app developers, and ops teams to iterate together. And third, AI workloads demand the ability to dynamically scale access to compute resources. You need a consistent platform, whether you build and serve these models on-premise or in the cloud or at the edge. This is complex stuff, and Red Hat OpenShift AI is here to help. Head to redhat.com to see how.
