ACM ByteCast - Xavier Leroy - Episode 57

Episode Date: August 15, 2024

In this episode of ACM ByteCast, Harald Störrle hosts ACM Fellow and Software System Award recipient Xavier Leroy, professor at Collège de France and member of the Académie des Sciences. Best known for his role as a primary developer of the OCaml programming language, Xavier is an internationally recognized expert on functional programming languages and compilers, focusing on their reliability and security, and has a strong interest in formal methods, formal proofs, and certified compilation. He is the lead developer of CompCert, the first industrial-strength optimizing compiler with a mechanically checked proof of correctness, with applications to real-world settings as critical as Airbus aircraft. He was previously a senior scientist at Inria, a leading French research institute in computer science, and is currently a member of its Cambium research team. His honors and recognitions also include the ACM SIGPLAN Programming Languages Achievement Award and the Milner Award from the Royal Society. Xavier shares the evolution of OCaml, which grew out of Caml, an early ML (Meta Language) variant, and how it came to be adopted by Jane Street Capital for its financial applications. He also talks about his interest in formal verification, whose adoption in the software industry is still low due to high costs and the need for mathematical specifications. Harald and Xavier also dive into a discussion of AI tools like Copilot and the current limitations of AI-generated code in software engineering. The conversation also touches on ACM’s efforts to become a more global and diverse organization and opportunities to bridge the gap between academia and industry.

Transcript
Starting point is 00:00:00 This is ACM ByteCast, a podcast series from the Association for Computing Machinery, the world's largest educational and scientific computing society. We talk to researchers, practitioners, and innovators who are at the intersection of computing research and practice. They share their experiences, the lessons they've learned, and their own visions for the future of computing. I'm your host, Harald Störrle. Our next guest is Xavier Leroy.
Starting point is 00:00:29 Xavier Leroy is a French computer scientist and programmer best known for his role as a primary developer of the OCaml system and programming language. He has been a professor of software science at the Collège de France since 2018. And before that, he was a senior scientist at Inria. He studied mathematics and computer science at the École Normale Supérieure in Paris, receiving a PhD in computer science in 1992.
Starting point is 00:00:55 He's an expert on functional programming languages and compilers. In recent years, he's taken an interest in formal methods, formal proofs, and certified compilation. He's the leader of the CompCert project that develops an optimizing compiler for the C programming language, formally verified in Coq. In 2015, he was named a Fellow of the ACM, the Association for Computing Machinery, for contributions to safe, high-performance functional programming languages and compilers, and to compiler verification. He has also received the 2016 Milner Award from the Royal Society,
Starting point is 00:01:32 the 2021 ACM Software System Award, and the 2022 ACM SIGPLAN Programming Languages Achievement Award. So it is a very great pleasure to have you on. Hello to Paris, and welcome, Xavier Leroy. It is an honor to welcome you to our podcast. Thank you. So thank you, Harald, for this invitation to talk with you and for the nice introduction. It's a pleasure to be here too. Of course, I'll have to ask about OCaml.
Starting point is 00:01:57 I've been using ML-style languages a fair bit myself back in my day when I was still programming, but also for teaching. And to me, OCaml always felt a lot more like a practical tool for working programmers than SML. There were all these small details that really made a difference to me when I was actually working. And that made me think, when they created this, they were really thinking about a working programmer. They really cared for the usability, for the practical tools. And I wonder how much of a concern was that when you designed OCaml, this practical perspective of it?
Starting point is 00:02:32 Well, it was certainly on my mind. And actually, let me go back to my first encounter with ML-like languages. So there was already, in France, in the group where I ended up doing my PhD thesis, there was already some work on an ML language called Caml, which was very much inspired by Robin Milner's work, like Standard ML of New Jersey. But, well, it was kind of heavyweight.
Starting point is 00:02:59 It was running only on workstations. But still, it was a beautiful language. When I was exposed to it as a grad student, I immediately fell in love with it. And so my first project, with another grad student called Damien Doligez, was to do a lightweight implementation of Caml. So simplifying a bit the language
Starting point is 00:03:17 and then having a very simple runtime system written in C, bytecode compilation, really some kind of minimal implementation. The first goal was to teach ourselves how it works. And that was very instructive, but it ended up being also the first open-source implementation when we distributed it. And it worked well for teaching, popularizing the language. One of the design criteria was that it should run on the PCs and Macs of the day in like one megabyte of RAM.
Starting point is 00:03:48 So that was some of the practical constraints that we took into account. And also, since it was pretty small, it was a very good vehicle for research and experimentation with the language. And it's true that even at that time, it was obvious that making it a good Unix citizen, you know, with a command-line compiler, no IDE, just maybe a little bit of integration into Emacs, would be just easier because we had very limited resources.
Starting point is 00:04:13 And a few years later, after I did my PhD, after a postdoc at Stanford, I was back in France in a permanent position. And then I did the OCaml system with some of our colleagues. And then, yeah, we really tried to do a state-of-the-art compiler, something that would really be kind of a state-of-the-art
Starting point is 00:04:32 ML language with a high-performance compiler, with some language extensions like a new module system and things like that, and some support, some investigation into object-oriented programming. But it's true that we kept this efficient runtime system, relatively lightweight, command-line approach,
Starting point is 00:04:51 producing standalone executables and so on. And indeed, it paid off a few years later. The initial uses for Caml and the initial goals for it were for symbolic processing, of course: theorem proving, program analysis, program transformation, those kinds of things, compilation. But then we got our first users in systems, basically. Things like distributed systems at Cornell and later IBM.
Starting point is 00:05:23 And also, let me remind you, some early web experiments like web crawling, then some real-time trading at Jane Street Capital, which is still a big actor in the electronic trading community these days. Unikernels, so applications that boot straight on top of a hypervisor. The Mirage system, blockchains like the Tezos blockchain. So all those systems applications
Starting point is 00:05:47 that were absolutely not planned initially, but it turned out that OCaml is a pretty decent language for that because it's relatively lightweight in terms of resource usage. The memory management, for instance, has low latency. It has no big pauses.
Starting point is 00:06:03 So it's almost kind of soft real-time. And then, of course, you get the benefits of an ML-like language, so functional programming, type safety, and all those things. I think we found a pretty good niche in the world of functional programming
Starting point is 00:06:18 and programming languages in general, between relatively low-level applications and the safety that you get from high-level functional languages. Right. Now, you mentioned Jane Street Capital and blockchain applications. Well, those are obviously financial applications. Are you aware of other areas besides that where OCaml is being used in industry today? And also, why is it that the financial world took to OCaml? I mean, soft real-time capabilities, that's not unique to OCaml, right?
Starting point is 00:06:53 That is true. Yeah, maybe I'll answer the first part of the question. For Jane Street, it was a bit, well, it was because their CTO did a PhD at Cornell on this project, the distributed systems project. So he was very keenly aware of what you can do with a language like OCaml. So serendipity, basically, right? A lucky accident in a way. But I think Jane Street liked the language, perhaps more than lower-level languages, because
Starting point is 00:07:22 the code is fairly readable and they could have it reviewed by non-computer scientists, also by financial engineers. That's very important for them. So I think in this aspect, it was a good deal. And maybe that's not a very good reason, but it helped them kind of set the bar higher for hiring. I mean, when you're a Java shop,
Starting point is 00:07:44 you have thousands of resumes that come from people who have kind of a standard training in CS. When you say, okay, you must be fluent in OCaml, you immediately get fewer resumes, but of higher quality, and often people who have master's degrees or even PhDs. So it's also a way to select your programmer population.
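As a flavor of the readability Xavier mentions, here is a tiny OCaml sketch (an illustration only, not Jane Street code; the types and the pricing rule are made up). The domain logic reads like a small decision table, and the compiler checks that every case is handled, which is part of what makes review by non-specialists feasible.

```ocaml
(* Illustrative sketch: pattern matching keeps each case of the domain
   logic explicit and visible to a reviewer. *)
type side = Buy | Sell

type order = { side : side; limit : float }

(* A limit order fills only if its price crosses the opposite side of the book. *)
let fill_price ~best_bid ~best_ask (o : order) : float option =
  match o.side with
  | Buy when o.limit >= best_ask -> Some best_ask   (* willing to pay the ask *)
  | Sell when o.limit <= best_bid -> Some best_bid  (* willing to hit the bid *)
  | Buy | Sell -> None                              (* limit price is not marketable *)

let () =
  match fill_price ~best_bid:99.0 ~best_ask:101.0 { side = Buy; limit = 102.0 } with
  | Some p -> Printf.printf "filled at %.2f\n" p
  | None -> print_endline "no fill"
```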
Starting point is 00:08:08 So as regards other uses of OCaml, well, industrial uses, there are some, like static analysis tools. The AbsInt company in Germany, they do the Astrée static analyzer, which shows the absence of undefined behavior in C code, and it's used at Airbus and in the automotive industry. So yeah, kind of a formal methods tool, basically.
Starting point is 00:08:32 Verification tools, code generation tools, like the SCADE compiler that is used to produce a lot of embedded code in cars and airplanes and trains, like the Eurostar train. So the compiler, which is a very critical piece of software, is also written in OCaml. And then, as I said, there's the Mirage project, which is kind of interesting.
Starting point is 00:08:51 I'm not sure whether it's still academic or already kind of industrial. But the idea is really that, for many applications, you don't need a full operating system. You can just take your application, statically link it with low-level systems libraries, like a TCP stack, for instance,
Starting point is 00:09:11 and some basic memory management, and then you get something that can boot on typically virtualized hardware. And it's good because it's very secure. It boots very quickly, so you can boot your mail server every time you get a new email
Starting point is 00:09:26 okay and stop it in between mails that's fine so it's really I think a new way of looking at systems and systems infrastructure
Starting point is 00:09:34 that maybe will have some big impact in the future I don't know sounds like some kind of microservice at a cloud environment it's the same kind of
Starting point is 00:09:43 same kind of ideas right right now I am interested in the role microservice at a cloud environment. It's the same kind of ideas. Right, right. Now, I am interested in the role of formal verification. I mean, if I ask my industry colleagues, they'll probably scoff at me and say, well, formal verification, who's interested in that? We don't need that. The client doesn't ask for it.
Starting point is 00:10:01 And then again, there are some applications, for instance, cloud infrastructure, where reliability is at a completely different level than regular information systems, say. And you just mentioned a couple of examples. And I wonder, I mean, formal verification basically started in the 1960s as a vision of a mathematician. And so many people have contributed to it over these decades. And so many tools have been created. And I know of a couple of applications. And I wonder, what is your view? Are we very close to making this practical, so that formal verification can be
Starting point is 00:10:39 done where it's worthwhile? Is it always going to be confined to a niche of these high-risk applications, or do you see a path where this becomes much more common? All right. So, as you said, formal verification was born in the 60s from the work of pioneers like Floyd and Hoare. And I think it made slow progress in usability. Things like SMT solving, so progress in automated theorem proving, helped make the Hoare-style approach much more usable in the 2000s. There have also been some more automated techniques that were developed in the meantime, in the 80s, like model checking and abstract interpretation. So the field progressed slowly from the academic side.
Starting point is 00:11:26 But you're right that even now, it's still a niche. Many applications don't want to, or think they don't need, formal methods. But the niche in question is pretty important, and I think it's slowly growing. So typically, it's life-critical software in transportation, like fly-by-wire airplanes or driverless metros. We have those in Paris, and they have been verified; well, some of the critical functionalities have been formally verified. Controlling nuclear plants or chemical plants, that can also be important, or big infrastructures like the electrical grid. And as you say, verification is finding its way into a few more niches, like security.
Starting point is 00:12:09 I don't know if you know that, or our audience knows that, but the cryptographic libraries used by Chrome and by Firefox have actually been formally verified, one at MIT and the other at Microsoft Research. So this is finding its way into very popular tools, but only for the trickiest pieces of code. Okay. Yeah.
Starting point is 00:12:32 And well, there's a lot of work lately at Amazon, AWS, well, using formal methods for more security, for their cloud infrastructure. And, well, there is some infrastructure software that is slowly being verified. Not much, but, well, you mentioned my CompCert C compiler. That's an example of a fairly widely usable piece of infrastructure that is formally verified.
Starting point is 00:12:57 So it gives a few more guarantees about the correctness of the generated code. And there's been this seL4 microkernel that's been verified in Australia, which is also quite an achievement. It's not a full operating system, but it's really the core security features of an OS. You can do a
Starting point is 00:13:15 hypervisor pretty easily using their code. So these are some examples, I think, of important niches where formal verification has found a place. So what about wider usage? Well, of course, there's a cost to using formal methods. It takes more time.
Starting point is 00:13:34 It takes time to do the formal verification. You need to hire people or train people into using it, which is not that easy. You need to select the right tools, et cetera, et cetera. On the other hand, you also save some time debugging and testing. So when I did this CompCert verified compiler, I was very relieved that I didn't have to spend much time debugging the output. I mean, debugging a compiler is... I vaguely remember. But I think the main difficulty is that there's a prerequisite for formal methods. You need mathematical specifications.
Starting point is 00:14:08 Yeah. Otherwise, you don't know what to verify against. And there are application areas where you have extremely precise specifications. Cryptography, for instance: all the math is written down and standardized. Control-command code, like this fly-by-wire system: these are all partial differential equations. The math is there.
Starting point is 00:14:30 Databases also have pretty neat, pretty clean mathematical specifications, et cetera. But in many applications, in many areas, there's just no kind of mathematical specification, like a website or artificial intelligence applications. You know, what is a correct classification of your photos, a correct answer by ChatGPT?
Starting point is 00:14:56 Okay, this is not mathematically well defined. To me, this is kind of the ultimate frontier. And what I would really like everyone to do is to try to think of formal specifications for their problem. Even if they don't do full formal verification, there's a lot you can do when you have a spec. Okay? You can do
Starting point is 00:15:14 random testing, you can do all kinds of runtime verification to find undefined behaviors, etc. And you can do static analysis that will show some guarantees, some correctness guarantees, but not all. And that would be much more lightweight than a full formal verification. We need to think in terms of formal specifications, not so much in terms of formal methods.
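To make the "once you have a spec, there's a lot you can do" point concrete, here is a minimal OCaml sketch (an illustration, not code from the episode): a specification for sorting written as two executable properties, checked against OCaml's own List.sort on random inputs. This is the lightweight end of the spectrum Xavier describes: no proof, but a spec you can already exercise mechanically.

```ocaml
(* Minimal sketch: a sorting specification as executable properties,
   checked by random testing. Any candidate sorting function can be
   plugged in; here we test List.sort itself. *)

(* Spec, part 1: the output is in non-decreasing order. *)
let rec is_sorted = function
  | a :: (b :: _ as rest) -> a <= b && is_sorted rest
  | _ -> true

(* Spec, part 2: the output is a permutation of the input. *)
let same_elements xs ys = List.sort compare xs = List.sort compare ys

let satisfies_spec candidate xs =
  let ys = candidate xs in
  is_sorted ys && same_elements xs ys

let random_list () = List.init (Random.int 30) (fun _ -> Random.int 1000)

let () =
  Random.self_init ();
  for _ = 1 to 1_000 do
    assert (satisfies_spec (List.sort compare) (random_list ()))
  done;
  print_endline "sorting spec held on 1000 random inputs"
```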
Starting point is 00:15:34 Well, you speak to my heart. In my day job, I'm a product owner and I write specifications all day long. Oh, excellent. But I'm afraid it's more on the story and Jira ticket side than formal specifications. I would imagine, or I would like to think, that part of my effectiveness derives from the fact that I have a formal methods background that makes my writing clearer. You made an interesting point there about AI. Of course, AI is the big topic today, right? Everybody's talking about AI. It's the big news. Many people have gone as far as to say programming as we know it will disappear because the copilots are so good now. They provide you with lots of code. And I must say, I tried that out. And unfortunately, I wasn't able to get ChatGPT to write OCaml code for me. It happily wrote code in Python, of course, and Java,
Starting point is 00:16:27 but no way would it write OCaml or Prolog or any of these exotic languages. I guess that makes it a cool teaching tool, because students can't cheat; they have to write the code themselves. Yeah, I don't know if that's a benefit or a disadvantage for the languages. But yeah, you're right that those Copilot kinds of code generators
Starting point is 00:16:49 are very dependent on the training data, and there's a lot of training data for Python and Java and JavaScript. There's a lot less for OCaml and other niche languages. And actually, that's an interesting point, because those systems are really very good at synthesizing code examples that they've already seen. You see that, for instance, on evaluations, if you give them problems from programming contests and so on. When solutions have been published, they will give some pretty good solutions. But then if you give them the latest edition of the contest, whose solutions have not been published yet,
Starting point is 00:17:29 they do pretty badly. They are really synthesizing existing knowledge more than synthesizing programs from specifications. And then, yes, sometimes the result is quite good, and sometimes it's completely wrong. I mean, it's not even syntactically correct. Okay, that's easy to check. And the part that makes me nervous
Starting point is 00:17:49 is when the result is slightly wrong. But just slightly wrong, right? Absolutely. There was an example from, I think, the HumanEval benchmark, where one of the ChatGPT models produced Python code that looked good, that passed the four or five tests that were part of the specification.
Starting point is 00:18:08 But if you looked at it very closely and tested it on other inputs, you saw that it was incorrect. And it was fairly subtle. The first four lines were perfect. They were doing exactly the right thing, which was to compute some lists in sorted order without repetitions. Okay, very good. And then the fifth line was
Starting point is 00:18:26 destroying the result by applying a Python trick, a well-known trick: you go to a set and then back to a list, which eliminates duplicates but can change the order. So the result is not necessarily sorted. And that doesn't happen very often, but it happens.
Starting point is 00:18:41 And that's a Python idiosyncrasy. Okay, if you have the same code in OCaml, the fifth line would produce a correct result. It would be useless, but it would produce a correct result because when you convert from a set to a list in OCaml, it's always sorted. And that got me thinking.
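For concreteness, here is a small OCaml sketch of the contrast being described (illustrative code, not the benchmark solution itself). In OCaml, the list-to-set-to-list round trip removes duplicates and always comes back in sorted order, because Set.elements returns elements in increasing order; Python's set() round trip deduplicates but can scramble the order.

```ocaml
(* Illustrative sketch: going list -> set -> list in OCaml both
   deduplicates and sorts, unlike Python's list(set(xs)). *)
module IntSet = Set.Make (Int)

let dedup_sorted (xs : int list) : int list =
  IntSet.elements (IntSet.of_list xs)

let () =
  dedup_sorted [3; 1; 2; 3; 1]
  |> List.map string_of_int
  |> String.concat "; "
  |> Printf.printf "[%s]\n"  (* prints [1; 2; 3] *)
```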
Starting point is 00:18:58 So, yeah, this is good-looking code, but it's harder to review than if it were written by a human. Humans generally don't add bad code after the good code. They just produce bad code. It's harder to review for a human,
Starting point is 00:19:14 and it still needs testing, obviously, or some other kind of validation. And so, my feeling is that Copilot and other things are automating the pleasant part of software development, which is writing code, writing small functions given a specification. If you're competent, it's rather pleasant; it's not a big deal. But what is difficult is, well, first of all, architecting the whole system.
Starting point is 00:19:42 And I don't think ChatGPT or Copilot is of any use there. And then validating, reviewing code, writing test suites, and so on. And really, I would prefer Copilot to write test suites, for instance, or do some of the reviews for me. I don't want to give up on programming. I would like help with other tasks.
Starting point is 00:20:03 So really, I'm not quite sure what to do with those AI-generated pieces of software, especially in the kind of high-assurance applications we mentioned earlier. Yeah, I definitely don't want to have a high-assurance piece of code written by Gen AI, for sure. But there may be a good way to use those systems, which is, well, think of a mathematical proof, okay? If you ask ChatGPT to prove some mathematical statement, well, you will get a proof in English that will need to be reviewed. But let's assume the AI learns a theorem prover, an actual interactive theorem prover,
Starting point is 00:20:42 where you can write formal proofs that can be checked afterwards. Then, well, the AI can try to produce a proof, and if it passes the checker, it's a good proof, and if it doesn't pass the checker, you ask for a new proof. And something similar could happen with software. If you produce software plus all sorts of assertions, for instance, enough logical assertions that a theorem prover can verify it, then you're good.
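As a tiny illustration of what a mechanically checkable proof looks like (my example, not one from the episode), here is a Lean 4 theorem whose proof the checker accepts or rejects automatically; an AI-generated proof would face exactly the same yes/no test.

```lean
-- If an AI produced this proof term, the Lean kernel would either accept
-- or reject it automatically; only the statement needs human review.
theorem add_comm_example (m n : Nat) : m + n = n + m :=
  Nat.add_comm m n
```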
Starting point is 00:21:09 I think there are some interesting things to do in this direction. It's just that it will be very hard because, again, training data, you know, assertions for programs or mechanized proofs, training data is pretty scarce. And there's no wow effect like you get with just synthesizing code.
Starting point is 00:21:26 So probably we'll have to wait for a while before getting that. Yeah, it certainly doesn't sound like low-hanging fruit that we just have to reach out and grab. ACM ByteCast is available on Apple Podcasts, Podbean, Spotify, Stitcher, and TuneIn. If you're enjoying this episode, please subscribe and leave us a review on your favorite platform.
Starting point is 00:21:54 Another topic that I would like to cover with you, since I think this is the first time on ACM's podcast that we have a French interviewee and a German interviewer. I'm not sure whether you're aware, but ACM has been trying to become a more global or international organization over the past 15 years, and a lot more diverse than it used to be, like 30 years ago or so. And I'm, of course, part of that. And I wonder whether you have any ideas as to how we could promote the diversity or the global reach of ACM beyond the little corner where a lot of ACM still sits, because they definitely want to go out there. Do you have any idea of how we could spread this into Europe, to Asia, Africa, the whole world? I'm not sure.
Starting point is 00:22:45 I mean, viewed from my perspective, I think ACM is doing pretty well. Well, my perspective being mostly conferences and journals and the more academic part of ACM. I think they are doing pretty well at opening all that to Europeans. In my area, programming languages research, there's a lot of research in Europe and things like conferences, for instance, alternate between the US
Starting point is 00:23:11 and Canada, between North America and Europe. And I think we feel pretty much recognized by ACM as academics. I'm not sure this is the case for Asia, for instance. There's probably more to be done to connect with Japan and South Korea, which have pretty good research, at least in my area. And China is kind of closed anyway, or closing anyway, so that's yet another issue.
Starting point is 00:23:43 You're right that there's also more to ACM than just conferences and journals, and I'm not quite sure, actually. I know there are some initiatives towards reaching out to our students, for instance, our potential students in CS, and maybe we could have more of that in Europe or more of that in Asia. It's true that I don't think any of my students have ever heard about ACM, except as a conference organizer, basically, and publishing house. So what could be done?
Starting point is 00:24:14 I'm not quite sure. Oh, no, I take that back. Well, there are the programming competitions, which have, well, there's a Southern European one and a Northern European one. And those are, well, they don't reach many students, but they reach a few. So perhaps more of that could be done.
Starting point is 00:24:34 And some mentoring sessions, some conferences, but still pretty low. Not quite sure what we could do specifically for you all. I have another difficult question for you. And that is, my own career oscillated between academia and industry. I went back and forth a couple of times. And I can't say that really benefited my career much. But it was an interesting ride, let's say.
Starting point is 00:24:58 My feeling is that there are very few ties between the academic world and the industrial world. So we have been speaking about formal methods and how they are not very commonly employed in day-to-day operations in industry. And that's certainly a fact. And probably for many problems, it's not the right tool. But I wonder whether there's not more opportunity to interact and to transfer, on the one hand, ideas and results from academia, and on the other hand, problems from industry, because we have plenty of interesting problems in industry. And I have a feeling that there are so
Starting point is 00:25:38 many contributions in academia that just get lost, right? That would need some application. And I feel, I sense a gap between these two camps. And this has always bothered me tremendously. And I've tried what I can to close that gap, or to bridge it. I wonder, from the academic perspective that you represent here, what could we do to get closer together, academia and industry? Well, so I think I've had some good contacts with some industries. I guess the point I would like to make first is that, well, the computing industry is kind of wide. The web shop, or a company that makes mostly, like, end-user software, ERPs, and
Starting point is 00:26:22 so on, probably doesn't have much contact with academia. That much is true in my experience. But some other companies, like, well, for critical embedded systems, like I mentioned, or even some of the cryptocurrency and blockchain industries that popped out of nowhere recently, have a lot more contact with academia. And there are also other industries that are not primarily computing industries,
Starting point is 00:26:48 but are also pretty demanding, like airplane manufacturers, for instance. Airbus is a really big name in high-assurance software, and they have lots of academic contacts in France and in Germany. So what I guess I'm trying to say is that I feel that I've had a good contact with some industrial users. I think about half of my PhD students went to industry, but I'm including also more research
Starting point is 00:27:17 oriented industries. So we mentioned Amazon, AWS, for instance, or Microsoft, or Google, etc. So we still have some contacts with them. And indeed, there are some good research issues, research problems that come out of them. So what could we do to go further than that? That's a good question. One thing is that I think our students really deserve, and ask for, industrial internships throughout their curriculum. And that's a very good opportunity to get to know what's going on in the industry. And I would like industry, at least in France, to hire more students with a PhD.
Starting point is 00:27:58 I can certainly answer that. In France, a master's is considered optimal for going to work in industry. And with a PhD, they tend to say, oh, no, no, you're too educated, you should stay in academia, and so on. I think it's a big loss. I feel that in Germany, it's a little better. The PhD is more recognized by industry, and that's great. And then, yeah, it's also important to have kind of personal contacts within an industrial group, not just official ones, especially in big companies.
Starting point is 00:28:30 Sometimes, you know, they have a couple of people whose job title is academic relations. And usually those people are far off from the actual production groups, the actual groups where the real problems occur. And sometimes it can be hard to talk through them and hear the real problems they may be having. But using, like, former students or other kinds of personal contacts, you can get in touch with the actual R&D groups, and then it becomes a lot more interesting.
Starting point is 00:28:59 Because they really love to talk about their problems. Yeah. And they have lots of interesting stories to tell. So yeah, I think there's maybe a little bit of a barrier to cross to just establish those kinds of contacts. But I think there is some demand on both sides, the academics, like me, and the industrial people. So it's basically a call to improve networking
Starting point is 00:29:20 between the two camps. Yeah, that's true. And maybe we need more opportunities to do that. Living in Paris, I get lots of emails for networking events in the Paris area. But it's true that most often I don't go because, well, I'm not quite sure
Starting point is 00:29:36 there's something relevant for me there. Maybe I should try harder. That's a good point. Well, I would be very happy to invite you to the next networking event if that should happen in Paris, but that's probably not going to happen for me. Anyway, one last question, and you kind of steered toward that all by yourself, and that is whether you would have any advice for students as they make the decision of what topic to study, what field to pursue, or maybe when they're already in CS and wonder where to go. You kind of gave a piece of negative advice earlier by saying, don't do a PhD if you want to get employed in industry, kind of.
Starting point is 00:30:18 Let me qualify that. You can do a PhD if you want to be hired in French industry. It's just that it will not really be taken into account. So you will get the same salary and the same career most of the time; you start at the master's level. But you will still have learned
Starting point is 00:30:35 things in a PhD. So yeah, well, I guess it kind of connects with some advice I can give. So, whether to choose computer science or some other field, I think it's kind of a personal choice. For me, well, I was exposed to a bit of computer programming early, and it was fascinating, but also frustrating.
Starting point is 00:30:58 And so actually I studied mathematics and physics initially and switched to computer science quite late after getting my first theoretical computer science courses. And that was absolutely fascinating. I've always been mathematically inclined. What I like in computing is that it's also experimental. You can do concrete things with a computer, you can experiment. There's a practice that is informed by the theory, and that is wonderful. So I think if you're kind of mathematically inclined, computing is also something to be considered.
Starting point is 00:31:31 And on the other hand, if you're hacking and tinkering with computers and so on, maybe computer science will teach you the basics, the fundamentals. And I think it's important. When I was tinkering with my Apple II in BASIC, I didn't go very far because I had no guiding principles. I just had a few magazines with samples of code that I was trying to imitate. So CS will also give you a good culture, and CS will also give you a lot more assurance and confidence in what you can do as a computer professional.
Starting point is 00:32:09 Now, if you're already into computing, I think we are living in a time where there are a lot of opportunities for learning. Learning at university in a CS degree, of course, but there's also learning by yourself. There are lots of online courses, tutorials. Wikipedia is a pretty good starting point for many questions. Stack Overflow discussions can take you pretty far. And there are huge amounts of good code that you can read and learn from.
Starting point is 00:32:36 So in my time, that was less true. I remember learning a lot. I basically learned how to program in C by reading some very good source code, like the Emacs source code, which was very interesting and quite comprehensive. There was a Lisp interpreter, some specific data structures,
Starting point is 00:32:54 some systems code, etc. It was really interesting. Or the early Linux kernel, which was a thing of beauty. Well, now it's a little too big, I guess, for discovery. But back in the day, it was pretty small and really illuminating. So yeah, reading source code,
Starting point is 00:33:11 maybe also participating in open source projects. Well, that's a good way to give back to the community, but it's also a good way to learn how it works, how to interact with other developers, how to do code reviews, how to work with pull requests, and so on. I think this is also a very formative experience. So there are many ways to train yourself continuously
Starting point is 00:33:31 as a computer professional. And this is just great. Well, thank you so much for these thoughts of yours and for sharing them with us. It was fantastic talking to you. We covered so many topics and I would like to go on for hours and hours but unfortunately we can't.
Starting point is 00:33:48 So I'll wrap it here and say thank you, Xavier, and goodbye. Thank you. ACM ByteCast is a production of the Association for Computing Machinery's Practitioner Board. To learn more about ACM and its activities, visit
Starting point is 00:34:04 acm.org. For more information about this and other episodes, please visit our website at learning.acm.org. That's learning.acm.org/bytecast.
