Screaming in the Cloud - Siphoning through the Acronyms with Liz Rice
Episode Date: March 8, 2022

About Liz
Liz Rice is Chief Open Source Officer with cloud native networking and security specialists Isovalent, creators of the Cilium eBPF-based networking project. She is chair of the CNCF's Technical Oversight Committee, and was Co-Chair of KubeCon + CloudNativeCon in 2018. She is also the author of Container Security, published by O'Reilly. She has a wealth of software development, team, and product management experience from working on network protocols and distributed systems, and in digital technology sectors such as VOD, music, and VoIP. When not writing code, or talking about it, Liz loves riding bikes in places with better weather than her native London, and competing in virtual races on Zwift.

Links:
Isovalent: https://isovalent.com/
Container Security: https://www.amazon.com/Container-Security-Fundamental-Containerized-Applications/dp/1492056707/
Twitter: https://twitter.com/lizrice
GitHub: https://github.com/lizrice
Cilium and eBPF Slack: http://slack.cilium.io/
CNCF Slack: https://cloud-native.slack.com/join/shared_invite/zt-11yzivnzq-hs12vUAYFZmnqE3r7ILz9A
Transcript
Hello, and welcome to Screaming in the Cloud, with your host, Chief Cloud Economist at the
Duckbill Group, Corey Quinn.
This weekly show features conversations with people doing interesting work in the world
of cloud, thoughtful commentary on the state of the technical world, and ridiculous titles
for which Corey refuses to apologize.
This is Screaming in the Cloud.
Today's episode is brought to you in part by our friends at Minio,
the high-performance Kubernetes native object store that's built for the multi-cloud,
creating a consistent data storage layer for your public cloud instances,
your private cloud instances, and even your edge instances, depending upon what the heck you're defining those as,
which depends probably on where you work. Getting that unified is one of the greatest
challenges facing developers and architects today. It requires S3 compatibility, enterprise-grade
security and resiliency, the speed to run any workload, and the footprint to run anywhere. And that's exactly what Minio offers. With superb read
speeds in excess of 360 gigs and a 100 megabyte binary that doesn't eat all the data you've got
on the system, it's exactly what you've been looking for. Check it out today at
min.io slash download and see for yourself.
That's min.io slash download, and be sure to tell them that I sent you.
This episode is sponsored in part by our friends at Sysdig. Sysdig is the solution for securing
DevOps. They have a blog post that went up recently about how an insecure AWS Lambda function
could be used as a pivot point to get
access into your environment. They've also gone deep in depth with a bunch of other approaches to
how DevOps and security are inextricably linked. To learn more, visit sysdig.com and tell them I
sent you. That's S-Y-S-D-I-G dot com. My thanks to them for their continued support of this ridiculous nonsense.
Welcome to Screaming in the Cloud.
I'm Corey Quinn.
One of the interesting things about hanging out in the cloud ecosystem,
as long as I have and as, I guess, closely tied to Amazon as I have been,
is that you learn that you never quite are able to pronounce things
the way that people pronounce them internally. In-house pronunciations are always a thing.
My guest today is Liz Rice, the Chief Open Source Officer at Isovalent, and they're responsible for,
among other things, the Cilium Open Source Project, which is around eBPF, which I can only
assume is internally pronounced as eBF. Liz, thank you for joining me today and suffering
my pronunciation slings and arrows. I have never heard eBF before, but I may have to adopt it.
That's great. You also are currently in a term that is winding down, if I'm not misunderstanding.
You were the co-chair of KubeCon and CloudNativeCon at the CNCF, and you are also currently on the Technical Oversight Committee for the foundation.
Yeah, yeah, I'm currently the chair, in fact, of the Technical Oversight Committee.
And now that Amazon has joined, I assumed that they had taken their horrible pronunciation habits, like calling AMIs "AH-mees" and whatnot, and started spreading them throughout the ecosystem with wild abandon. Are we going to have to start calling CNCF "knooks" or something?
Exactly. They're very frugal, by which I mean they never buy a vowel. So yeah,
it tends to be an ongoing challenge. Joking and the rest aside, let's start, I guess,
at the macro view that the CNCF does an awful lot of stuff, where if you look at the CNCF landscape,
for example, like I think some of my jokes on the internet go a bit too far, but you look at this
thing, and last time I checked, there were something like four or five hundred different
players in various spaces, and it's a very useful diagram. Don't get me wrong by any stretch of the
imagination, but it also is one of those things that is so staggeringly vast that, well, I've got
to level with you on this one. Given my ancient sysadmin roots, it's the hell with it. I'm going
to run some VMs and a three-tiered architecture, just like grandma and grandpa used to do and call
it good. Not really how the industry has evolved, but it's overwhelming. That might be the right
solution for your use case. So, you know, don't knock it if it works. Oh yeah. If it's a terrible architecture and it works, is it really that terrible of an architecture?
One wonders. Yeah, yeah. I mean, I'm
definitely not one of those people who thinks, you know, every solution
has the same, you know, is solved by the same hammer. You know, all problems
are not the same nail. So, I am a big fan
of a lot of the CNCF projects,
but that doesn't mean to say
I think those are the only ways to deploy software.
You know, there are plenty of things like Lambda,
a really great example of something
that's super useful and very applicable
for lots of applications
and for lots of development teams.
Not necessarily the right solution for everything.
And for other people,
they need
all the bells and whistles that something like Kubernetes gives them, you know, horses for
courses. It's very easy for me to make fun of just about any company or service or product.
But the thing that always makes me set that aside and get down to brass tacks has been,
okay, great. You can build whatever you want. You can tell whatever glorious marketing narrative you wish to craft, but let's talk to a real customer. Because once
we do that, then if you're solving a problem that someone is having in the wild, okay, now it's no
longer just this theoretical exercise in PowerPoint. Now let's actually figure out how things work when
the rubber meets the road. So let's start, I guess, with, I'll leave it to
you. Isovalent are the creators of the Cilium eBPF-based networking project. Yeah. And eBPF
is the part of that that I think I'm the most familiar with having heard the term.
Would you rather start on the company side or on the eBPF side? Oh, I don't mind. Let's,
why don't we start with eBPF? Yeah. So, easy, ridiculous question. I know that it's extremely important because Brendan Gregg
periodically gets on stage and tells amazing stories about this. The last time he did stuff
like that, I went stumbling down into the rabbit hole of DTrace, and I have never fully regretted
doing that, nor completely forgiven him. What is eBPF?
So, it stands for Extended Berkeley Packet Filter. And we can
pretty much just throw away those words because it's not terribly helpful. What eBPF allows you
to do is to run custom programs inside the kernel. So we can trigger these programs to run maybe because a network packet arrived
or because a particular function within the kernel has been called
or a trace point has been hit.
There are tons of places you can attach these programs to
or events you can attach programs to.
And when that event happens, you can run your custom code.
And that can change the behavior of the kernel, which is great power and great responsibility, but incredibly powerful.
So Brendan, for example, has done a ton of really great pioneering work showing how you can attach these eBPF programs to events, use that to collect metrics.
And lo and behold, you have amazing visibility
into what's happening in your system.
And he's built tons of different tools
for observing everything from, I don't know,
memory use to file opens to...
There's just endless dozens and dozens of tools
that Brendan, I think, was probably the first to build.
And now there's sort of new generations of eBPF-based tooling that are kind of taking that legacy, turning them into maybe more, I'm going to say, user-friendly interfaces,
you know, with GUIs and hooking them up to metrics platforms and, in the case of Cilium, using it for networking and hooking it into Kubernetes
identities and making the information about network flows meaningful in the context of
Kubernetes, where things like IP addresses are ephemeral and not very useful for very long. I
mean, they just change at any moment. I guess I'm trying to figure out what part of the stack this winds up applying to, because you talk
about, at least to my mind, it sounds like a few different levels all at once. You talk about
running code inside of the kernel, which is pretty close to the hardware. It's, oh, great,
it's adventures in assembly is almost what I'm hearing here. But then you also talk about using
this with GUIs, for example, and operating on individual packets to run custom programs. When you talk about running custom programs,
are we talking things that are a bit closer to, oh, modify this one field of that packet
and then call it good? Or are you talking, now we launch Microsoft Word?
Much more the former category. So yeah, let's inspect this packet and maybe change it a bit
or send it to a different, you know, maybe it was going to go to one interface, but we're going to send it to a different interface.
Maybe we're going to modify that packet.
Maybe we're going to throw the packet on the floor because we don't want it.
And there's some really great security use cases for inspecting packets and saying, this is a bad packet.
I do not want to see this packet.
I'm just going to discard it. And there's some, what they call packet of death vulnerabilities
that have been mitigated in that way.
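The packet-of-death defense Liz describes comes down to a verdict function: inspect a header, and drop anything malformed before the rest of the stack ever sees it. Below is a minimal userspace C sketch of that logic; the struct, field names, and the length check are all invented for illustration, and a real eBPF/XDP program would instead parse raw packet bytes in kernel context and return XDP_PASS or XDP_DROP.

```c
#include <stdint.h>
#include <stddef.h>

/* Hypothetical, simplified packet view. A real XDP program would
 * parse Ethernet/IP headers out of the raw buffer instead. */
struct pkt {
    uint16_t declared_len;  /* length the header claims */
    uint16_t actual_len;    /* bytes actually present   */
};

/* Verdicts mirror the XDP convention: pass the packet up the
 * stack, or silently discard it. */
enum verdict { VERDICT_PASS, VERDICT_DROP };

/* "Packet of death" style check: a header that claims more data
 * than the buffer holds is malformed, so drop it on the floor
 * before the networking stack has to deal with it. */
enum verdict inspect(const struct pkt *p)
{
    if (p == NULL)
        return VERDICT_DROP;
    if (p->declared_len > p->actual_len)
        return VERDICT_DROP;
    return VERDICT_PASS;
}
```

Because the check runs at the earliest possible point, a bad packet costs almost nothing: it never allocates stack state, never reaches a socket, and the vulnerable parsing code further up is never invoked.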
And the real beauty of it is you just load these programs dynamically.
So you can change the kernel on the fly
and affect the behavior just immediately and affect,
you know, if there are processes already running,
they get instrumented immediately.
So maybe you run a BPF program to spot when a file's opened.
New processes, existing processes, containerized processes,
it doesn't matter.
They'll all be detected by your program
if it's observing file open events.
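The point about instrumenting every process at once can be sketched too. The handler below is ordinary userspace C standing in for an eBPF probe, and all of the names and types are made up for the example. What it illustrates is that the hook point is the event itself, the file open, so new, existing, and containerized processes are all seen by the same handler with no per-process setup.

```c
#include <stdint.h>
#include <string.h>

/* Invented event shape: a real probe might attach to the openat
 * tracepoint and push records like this into a BPF ring buffer. */
struct open_event {
    int32_t pid;
    char    path[64];
};

#define MAX_EVENTS 128
static struct open_event ring[MAX_EVENTS];
static int n_events;

/* Called once per file-open event, whichever process caused it.
 * There is no list of "instrumented" processes to maintain,
 * because the attachment is to the syscall, not to any PID. */
void on_open(int32_t pid, const char *path)
{
    if (n_events >= MAX_EVENTS)
        return;  /* a real ring buffer would drop or overwrite */
    ring[n_events].pid = pid;
    strncpy(ring[n_events].path, path, sizeof(ring[n_events].path) - 1);
    ring[n_events].path[sizeof(ring[n_events].path) - 1] = '\0';
    n_events++;
}

int events_recorded(void) { return n_events; }
```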
Is this primarily used from a security perspective?
Is it used for, what are the common use cases for something like this?
There's three main buckets, I would say.
Networking, observability, and security.
And in Cilium, we're kind of involved in some aspects of all those three things.
And there are plenty of other projects that are also focusing on one or other of those aspects.
This is where, I guess, the challenge I run into of the whole CNCF landscape is.
I think the danger is, when I started down this path that I'm on now, I realized that, oh, I have to learn what all the different AWS services do.
This was widely regarded as a mistake. They are not Pokemon. I do not need to catch them all.
The CNCF landscape applies very similarly in that respect. What is the real-world problem space for
which eBPF, and/or things like Cilium that leverage eBPF, because eBPF does sound fairly low-level,
turn this into something that solves a problem people have? In other words, what is the problem that Cilium
should be the go-to answer for when someone says, I have this thing that hurts?
So at one level, Cilium is a networking solution. So it's a Kubernetes CNI. You plug it in to provide connectivity between your
applications that are running in pods. Those pods have to talk to each other somehow,
and Cilium will connect those pods together for you in a very efficient way. One of the really
interesting things about eBPF for networking is we can bypass some of the networking stack. So if we are running in
containers, we're running our applications in containers in pods, and those pods usually will
have their own networking namespace. And that means they've got their own networking stack.
So a packet that arrives on your machine has to go through the
networking stack on that host machine, go across a virtual interface into your pod, and then go
through the networking stack in that pod. And that's kind of inefficient. But with eBPF, we can
look at the packet the moment it's come into the kernel. In fact, in some cases, if you have the
right networking interfaces, you can do it while it's still
on the network interface card.
So you look at that packet and say,
well, I know what pod that's destined for.
I can just send it straight there.
I don't have to go through the whole
networking stack in the kernel
because I already know exactly where it's going.
And that has some real performance improvements.
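A rough sketch of that early-redirect idea, assuming a toy lookup table in place of Cilium's real BPF maps and datapath: if the destination IP is already known to belong to a local pod, return that pod's interface straight away and skip the host networking stack. The addresses and interface indexes here are invented.

```c
#include <stdint.h>

/* Toy stand-in for a BPF map of pod IP -> veth interface index. */
struct route { uint32_t pod_ip; int ifindex; };

static const struct route routes[] = {
    { 0x0A000001, 3 },  /* 10.0.0.1 -> ifindex 3 (made up) */
    { 0x0A000002, 4 },  /* 10.0.0.2 -> ifindex 4 (made up) */
};

/* Early redirect decision: a known pod destination gets its
 * interface back immediately; -1 means "fall back to the normal
 * stack traversal". The real datapath does this at or near the
 * point the packet enters the kernel. */
int early_redirect(uint32_t dst_ip)
{
    for (unsigned i = 0; i < sizeof(routes) / sizeof(routes[0]); i++)
        if (routes[i].pod_ip == dst_ip)
            return routes[i].ifindex;
    return -1;
}
```

The performance win comes from what this lookup lets you skip: the packet no longer traverses the host's full networking stack only to be handed to a pod that has a stack of its own.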
That makes sense. In my explorations, we'll call it, with Kubernetes, it feels like the universe,
at least the time I went looking into it, was step one. Here's how to wind up launching Kubernetes
to run a blog, which is a bit like using a chainsaw to wind up cutting a sandwich. Okay,
massively overpowered, but I get the basic idea. Like, okay, what's project step two? It's like, oh, great, go build Google.
Okay, great. It feels like there are some intermediary steps that have been sort of
glossed over here. And at the small scale that I kicked the tires on, things like networking
performance never even entered the equation. It was more about get the thing up and running.
But yeah, at scale, when you start seeing huge numbers of containers
being orchestrated across a wide variety of hosts,
that has serious repercussions and explains an awful lot.
Is this the sort of thing that gets leveraged by cloud providers themselves?
Is it something that gets built in mostly on-prem environments,
or is it something that rides in almost user land
for most of these use cases
that customers come out of bringing to those environments?
I'm sorry, users, not customers.
I'm too used to the Amazonian phrasing
of everyone is a customer.
No, no, they are users in an open source project.
Yeah, so if you're using GKE,
the GKE data plane V2 is using Cilium.
Alibaba Cloud uses Cilium.
AWS is using Cilium for EKS Anywhere.
So these are really, I think, great signals that it's super scalable.
And it's also not just about the connectivity,
but also about being able to see your network flows and debug them.
Because like you say, day one, your blog is up and running,
and day two, you've got some DNS issue that you need to debug,
and how are you going to do that?
And because Cilium is working with Kubernetes,
so it knows about the individual pods,
and it's aware of the IP addresses for those pods and it can map those
to, you know, what's the pod, what service is that pod involved with. And we have a component of
Cilium called Hubble that gives you the flows, the network flows between services. So, you know,
we've probably all seen diagrams showing service A talking to service B, service C, some external connectivity.
And Hubble can show you those flows between services and the outside world, regardless of how the IP addresses may be changing underneath you, and aggregating network flows into those services that make sense to a human who's looking at a Kubernetes deployment.
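Hubble's aggregation idea can be sketched in miniature: translate ephemeral pod IPs into stable service names, then group flows by service rather than by address. The mappings and service names below are invented for illustration; Cilium derives the real ones from Kubernetes identities.

```c
#include <stdint.h>
#include <string.h>

/* Invented pod-IP -> service mapping. Two replicas of the same
 * service have different (and short-lived) IPs but one name. */
struct binding { uint32_t pod_ip; const char *service; };

static const struct binding bindings[] = {
    { 0x0A000001, "frontend" },
    { 0x0A000002, "frontend" },  /* replica: new IP, same service */
    { 0x0A000003, "backend"  },
};

const char *service_of(uint32_t pod_ip)
{
    for (unsigned i = 0; i < sizeof(bindings) / sizeof(bindings[0]); i++)
        if (bindings[i].pod_ip == pod_ip)
            return bindings[i].service;
    return "unknown";
}

/* Flows from either frontend replica aggregate under one label,
 * however often the underlying IPs churn. */
int same_service(uint32_t ip_a, uint32_t ip_b)
{
    return strcmp(service_of(ip_a), service_of(ip_b)) == 0;
}
```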
Running gag that I've had is that one of the drawbacks and appeals of Kubernetes all at once
is that it lets you cosplay as a cloud provider, even if you don't happen to work for one of them.
And there's a bit of truth to it, but let's be serious here. Despite what a lot of the cloud
providers would wish us to believe via a bunch of marketing, there's a tremendous number of data center environments out there,
hybrid environments, and companies that are in those environments are not somehow
laggards or left behind technologically or struggling to digitally transform. Believe it
or not, I know it's not the common narrative, but large companies generally don't employ people who lack critical thinking skills and strategic insight.
There's usually a reason that things are the way that they are.
And when you don't understand that, my default approach is that, oh, there's context that gets missing.
So I want to preface this with the idea that there's nothing wrong in those environments. But in a purely cloud-native environment,
which means that I'm very proud about having no single points of failure
as I have everything routing to a single credit card
that pays the cloud providers, great.
What is the story for Cilium
if I'm using effectively the managed Kubernetes options
that name any cloud provider will provide for me these days?
Is it at that point no longer for me,
or is it something that
instead expresses itself in ways I'm not seeing yet? Yeah, so I think as an open source project,
and it is the only CNI that's at incubation level or beyond, so it's a CNCF-supported
networking solution. You can use it out of the box. You can use it for your tiny blog application.
If you've decided to run that on Kubernetes, you can do so. I think things start to get much more
interesting at scale. I mean, that continuum between, you know, there are people purely on
managed services. There are people who are purely in the cloud. Hybrid cloud is a real thing. And
there are plenty of businesses who have good reasons to have some things in their own data
centers, some things in the public cloud, things distributed around the world. So they need
connectivity between those. And Cilium will solve a lot of those problems for you in the open source.
But also, if you're telco scale and you have things like BGP
networks between your data centers, then that's where the paid versions of Cilium, the enterprise
versions of Cilium, can help you out. And as Isovalent, that's our business model:
we fully support, or we contribute a lot of resources into, that open source Cilium, and we want that to be the best networking solution for anybody.
But if you are an enterprise who wants those extra bells and whistles and the kind of scale that, you know, a telco or a massive retailer or a large media organization or name your vertical, then we have solutions for that
as well. And I think that's one of the really interesting things about the eBPF side of it
is that, you know, we're not bound to just Kubernetes, you know, we run in the kernel and
it just so happens that we have that Kubernetes interface for allocating IP addresses to
endpoints that happen to be pods.
So back to my crappy pile of VMs
because the hell with all this
newfangled container nonsense,
I can still benefit from something like Cilium.
Exactly, yeah.
And there's plenty of people using it
for just load balancing,
which why not have an eBPF-based
high-performance load balancer?
Hang on, that's taking me a second
to work my way through.
What is the programming language for eBPF?
Is it something custom?
Right, so when you load your BPF program into the kernel,
it's in the form of eBPF bytecode.
There are people who write eBPF bytecode by hand.
I am not one of those people.
There are people who used to be able to write sendmail configs
without running them through the M4 preprocessor,
and I don't understand those people either.
So our choices are, well,
it has to be a language that can be compiled
into that bytecode.
And at the moment, there are two options,
C and, more recently, Rust.
So the C code, I'm much more familiar with writing BPF code in C.
It's slightly limited. So because these BPF programs have to be safe to run,
they go through a verification process which checks that you're not going to crash the kernel,
that you're not going to end up in some hard loop and basically make your machine completely unresponsive. We also have to know that BPF programs, they'll only access memory that
they're supposed to and that they can't mess up other processes. So there's this BPF verification
step that checks, for example, that you always check that a pointer isn't nil before you dereference it.
And if you try and use a pointer in your C code, it might compile perfectly, but when you come to
load it into the kernel, it gets rejected because you forgot to check that it was non-null beforehand.
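The null-check rule Liz describes can be shown with a small sketch in ordinary C. The `lookup` function here is an invented stand-in for a BPF map lookup that may return NULL; the point is that every dereference must be guarded, which is what the verifier proves about the program at load time (and, if it can't, the program is rejected before it ever runs).

```c
#include <stddef.h>

static int table[4] = { 10, 20, 30, 40 };

/* Stand-in for something like bpf_map_lookup_elem():
 * may legitimately return NULL for a missing key. */
int *lookup(int key)
{
    if (key < 0 || key >= 4)
        return NULL;
    return &table[key];
}

/* Verifier-friendly shape: the dereference only happens on the
 * path where the pointer is known non-NULL, so no execution path
 * can fault. Omit the check and a real BPF loader would reject
 * the program at load time instead of crashing later. */
int read_value(int key)
{
    int *v = lookup(key);
    if (v == NULL)
        return -1;
    return *v;
}
```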
You try and run it, the whole thing segfaults. You see the word "fault" there and, well, I guess
blameless just went out the window there.
Well, this is the thing.
You cannot seg fault in the kernel, you know, or at least that's a bad thing.
You say that, but I'm very bad with computers.
Let's be clear here. There's always a way to misuse things horribly enough.
It's a challenge.
It's pretty easy to seg fault if you're writing a kernel module.
But maybe we should put that out as a challenge for the listener to try to
write something that crashes the kernel from within eBPF, because there's a lot of very smart...
Right now, the blood just drained from anyone who's listening in the kernel space or the
InfoSec space, I imagine.
Exactly. Some of my colleagues at Isovalent are thinking, oh no, what has she brought on here?
What have you done? Please correct me if I'm misunderstanding this. So eBPF is a very low-level tool that requires certain amounts of braining in
order to use appropriately. That can be a heavy lift for a lot of us who don't live in those
spaces. Cilium distills this down into something that is a lot more usable and understandable for
folks. And then beyond that, you wind up with Isovalent, which winds up effectively
productizing and packaging this into something that becomes a lot closer to turnkey.
Is that directionally accurate? Yes, I would say that's true. And there are also
some other intermediate steps like the CLI tools that Brendan Gregg did where you can,
I mean, a CLI is still fairly low level, but it's not as
low level as writing the eBPF code yourself. And you can be quite in depth, you know, if you know
what things you want to observe in the kernel, you don't necessarily have to know how to write the
eBPF code to do it, but you've got these fairly low-level tools to do it.
You are absolutely right that very few people will need to write their own
BPF code to run in the kernel. It's moved below the surface level of awareness,
the same way that most of us don't need to know how to compile our own kernel in this day and age.
A few people very much do, but because of their hard work, the rest of us do not.
Exactly. And for most of us, we just take the kernel for granted.
You know, most people writing applications, it doesn't really matter if they're just using
abstractions that do things like open files for them or create network connections or
write messages to the screen.
You don't need to know exactly how that's accomplished through the kernel unless you want to get into
the details of how to observe it with eBPF or something like that. I am much happier not
knowing some of those details. I did a deep dive once into Linux system kernel internals based on
an incredibly well-written but also obnoxiously slash suspiciously thick O'Reilly book, Linux
Systems Internals. And it was one
of those like halfway through, we can't please be excused, my brain is full. It's one of those
things that I don't use most of it in a day-to-day basis, but it solidified my understanding of what
the computer is actually doing in a way that I will always be grateful for.
And there are tens of millions of lines of code in the Linux kernel. So
anyone who can internalize any of that is basically a superhero.
I have nothing but respect for people who can pull that off.
Couchbase Capella.
Database as a service is flexible, full-featured, and fully managed
with built-in access via key value, SQL, and full-text search.
Flexible JSON documents align to your applications and workloads. Build faster with blazing-fast in-memory performance and automated replication and scaling while reducing cost. Capella has the best
price performance of any fully managed document database. Visit couchbase.com slash screaming in the cloud to try Capella today for free
and be up and running in three minutes with no credit card required.
Couchbase Capella.
Make your data sing.
In your day job, quote unquote, which is sort of a weird thing to say,
given that you are working
at an open source company. In fact, you are the chief open source officer. So what you're doing
in the community, what you're exploring on the open source project side of things, it is all
interrelated. I tend to have trouble myself figuring out where my job starts and stops most
weeks. I'm sympathetic to it. What inspired you folks to launch a company that is, ah, we're going
to be in the open source space, especially during a time when there's been a lot of pushback in some
respects about the evolution of open source and the rise of large cloud providers, where is open
source a viable strategy or a tactic to get to an outcome that is pleasing for all parties?
So I wasn't there at the beginning of the Isovalent journey, and Cilium has been around for five or six years now at this point.
I very strongly believe in open source as an effective way of developing technology, good technology and getting really good feedback and
kind of optimizing the speed at which you can innovate. But I think it's very important that
businesses understand that if you're giving away your code, you cannot also sell that same code. You have to
have some other thing that adds value. Maybe that's some extra code, like,
in the Isovalent example, the enterprise-related enhancements that we have that aren't part of the
open source distribution. There are plenty of other ways that people can add value to open source, you
know: they can do training, they can do managed services. There's all sorts of different ways to support it.
It was the classic example.
But I think it's extremely important that businesses don't just expect that,
well, I can write a bunch of open source code and somehow magically, through building up a whole load of users,
I will find a way to monetize that.
A bunch of nerds will build my product for me on nights and weekends. That's a bit of an outmoded way of thinking about these things. Yeah, exactly. And
I think it's not like everybody has perfect ability to predict the future and you might
start a business. And I have a lot of sympathy for companies who originally started with the idea of,
well, we are the project leads, we know this code the best,
therefore we are the best people in the world to run this as a service. The rise of the hyperscale
cloud providers has called that into significant question. And I feel for them, because it's
difficult to completely pivot your business model when you're already a publicly traded company.
That's a very fraught and challenging thing to do. It means that you're
left with a bunch of options, none of them great. Cilium as a project is not that old, neither is
Isovalent, but it's new enough in the iterative process that you were able to avoid that particular
pitfall. Instead, you're looking at, on some level, making this understandable and useful to humans,
almost to the point where it disappears from their level of awareness that they need to think about it. There's huge value in something like
that. Do you think that there is a future in which projects and companies built upon projects that
follow this model are similarly going to be having challenges with hyperscale cloud providers or other
emergent threats to the ecosystem?
I'm sorry, threat is an unfair and unkind word here, but the changes to the ecosystem as we see the world evolving in ways that most of us did not foresee.
Yeah, we've certainly seen some examples in the last year or two, I guess, of companies
that maybe didn't anticipate, and who necessarily has the crystal ball to anticipate, how cloud
providers might use their software. And I think in some cases the cloud providers have not always
been the most generous or most community-minded in their approach to how they've done that.
But I think for a company like Isovalent, our strong point is talent.
It would be extremely rare to find the level of expertise in, you know, what is a pretty specialised area.
You know, the people at Isovalent who are working on Cilium are also working on eBPF itself.
And that level of expertise is, I think, pretty unrivaled. So
we're in such a new space with eBPF. We've only in the last year or so got to the point where
pretty much everyone is running a kernel that's new enough to use eBPF. Startups do have a kind of agility that I think gives them an advantage,
which I hope we'll be able to capitalize on. I think sometimes when businesses get upset about
their code being used, they probably could have anticipated it. It's open source. People will
use your software and you have to think about it.
What do you mean you're using
the thing we gave away for free
and you're not paying us to use it?
Yeah.
Did you hear what you just said out loud?
Some of this was predictable,
let's be fair.
Yeah, and I think you really have to,
as a responsible business,
think about, well, what does happen
if they use all the open source code?
You know, is that a problem?
And as far as we're concerned, everybody using Cilium is a fantastic thing.
We fully welcome everyone using Cilium as their data plane because the vast majority
of them would use that open source code and that would be great.
But there will be people who need the extra features and the expertise that I think we're
in a unique position
to provide. So I joined Isovalent just about a year ago, and I did that because I believe in the
technology, I believe in the company, I believe in the foundations that it has in open source.
It's very much an open-source-first organization, which I love, and that resonates with me and how I think we can be successful.
So, you know, I don't have that crystal ball.
I hope I'm right.
We'll find out.
We should do this again in, you know, a couple of years and see how that's panning out.
I'll book out the date now.
Looking back at our conversation just now, you talked about open source and business strategy
and how that's going to be evolving. We talked about the company. We talked about an incredibly
in-depth technical product that honestly goes significantly beyond my current level of
technical awareness. And at no point in any of those aspects of the conversation did you
talk about it in a way that I did not understand. Nor did you come off in any
way as condescending. In fact, you wrote an O'Reilly book on container security that's written
very much the same way. How did you learn to do that? Because it is, frankly, an incredibly rare
skill. Oh, thank you. Yeah. I think I have never been a fan of jargon. I've never liked it when people use a complicated acronym.
Really early days in my career,
there was a bit of a running joke about how everything was TLAs.
And you think, well, I understand why we use an acronym to shorten things,
but I don't think we need to assume that everybody knows what everything stands for.
Why can't we explain things in simple language?
Why can't we just use ordinary terms?
And I found that really resonates.
You know, if I'm doing a presentation or if I'm writing something,
using straightforward language and explaining things,
making sure that people understand the kind of fundamentals that I'm
going to build my explanation on. I just think that results in people understanding.
And that's my whole point. You know, my goal is that
they understand it, not that they've been blown away by some kind of magic. I want them to go
away going, ah, now I understand how this bit fits with that bit or how this works.
The reason I bring it up is that it's an incredibly undervalued skill because when people
see it, they don't often recognize it for what it is. Because when people don't have that skill,
which is common, people just write it off as, oh, that person's a bad communicator,
which I think is a little unfair. Being able to explain complex things simply is one of the most valuable yet undervalued skills
that I've found in this entire space.
Yeah, I think people sometimes have this sort of wrong idea
that vocabulary and complicated terms
are somehow inherently smarter
and that if you use complicated words, you sound smarter.
And I just don't think that's accessible and I don't think it's true.
And sometimes I find myself listening to someone
and they're using complicated terms or analogies that are really obscure.
And I'm thinking, but could you explain that to me in words of one syllable?
I don't think you could.
I think you're hiding, not you.
No, no, that's fair.
I'll take the accusation as legally as I can get it.
But I think people hide behind complex words
because they don't really understand them sometimes.
And yeah, I would rather people understood what I'm saying.
To me, I've done it through conference talks,
but the way I generally
learn things is by building something with them. But the way I really learn to understand something
is I give a conference talk on it because, okay, great. I can now explain Git, which is one of my
early technical talks to folks who built Git. Great. Now, how about I explain it to someone who
is not immersed in the space whatsoever? And if I can make it that
accessible, great, then I've succeeded. It's a lot harder than it looks. Yeah, exactly. And one of the
reasons why I enjoy building a talk is because I know I've got a pretty good understanding of this,
but by the time I've got this talk nailed, I will know this. I might have forgotten it in six months'
time, you know. But while I'm giving
that talk, I will have a really good understanding of that, because the way I want to put together a
talk, I don't want to put anything in a talk that I don't feel I could explain. And that means I
have to understand how it works. It's funny, this whole "don't give talks about things you don't
understand" idea. It seems like a really nouveau concept, but here we are. We're working on it.
I mean, I have committed to doing talks that I don't fully understand,
knowing that, you know, with the confidence that I can find out between now and...
I believe that's called a forcing function.
It's one of those very high-risk strategies.
Either I'm going to learn this in the next three months,
or else I am going to have some serious egg on my face.
Yeah, exactly.
Definitely forcing function.
I really want to thank you for taking so much time to speak with me today.
If people want to learn more, where can they find you?
So I am online pretty much everywhere as Liz Rice,
and I am on Twitter, I'm on GitHub.
And if you want to come and hang out,
I am on the Cilium and eBPF Slack and also the CNCF Slack.
Yeah, so come say hello. And we will put links to all of that in the show notes. Thank you so much
for your time. I appreciate it. Pleasure. Liz Rice, Chief Open Source Officer at Isovalent. I'm Cloud
Economist Corey Quinn, and this is Screaming in the Cloud.
If you've enjoyed this podcast, please leave a five-star review on your podcast platform of
choice. Whereas if you hated this podcast, please leave a five-star review on your podcast platform
of choice, along with an angry comment containing an eBPF program that, on every packet, fires off
a Lambda function. Yes, it'll be extortionately expensive, almost half as much money as a managed NAT gateway.
If your AWS bill keeps rising and your blood pressure is doing the same,
then you need the Duckbill Group. We help companies fix their AWS bill by making it
smaller and less horrifying. The Duckbill Group works for you,
not AWS. We tailor recommendations to your business and we get to the point.
Visit duckbillgroup.com to get started.
This has been a HumblePod production.
Stay humble.