The Infra Pod - State of WASM beyond the browser with Steve at Dylibso
Episode Date: June 5, 2023. WebAssembly (WASM) is growing immensely beyond the browser, and now everyone is talking about how and when adoption will happen everywhere else. Ian and Tim sat down with Steve (CEO of Dylibso), who has worked on WebAssembly at multiple companies and is now building a developer tools company around it.
Transcript
Welcome back to the pod.
And I guess I didn't even know what the pod name is.
So we'll just say welcome back to the pod.
I'm Tim.
Just quick intro.
I started Essence VC.
Hey, Ian, I'll let you introduce yourself.
Hey, I'm Ian.
Started some companies, done some investing, currently helping Snyk figure out its platform strategy.
How do we turn that thing from a bunch of tools into a platform?
I'm super excited for us to have Steve on the show today, CEO of Dylibso, to talk to
us about Wasm, all the great things, all the rough edges, and more.
Steve, could you please give us an introduction to who you are and what you're up to?
Absolutely.
Thanks for having me on.
I'm Steve Manuel, CEO and co-founder of a small startup
called Dylibso. We focus on helping developers take WebAssembly to production and keep it there.
We can get into more of the details about what it means to actually keep it in production.
We've been working on a couple of different projects. The first was an open source universal
plugin system called Extism, which is the easiest way to load WebAssembly code into
your existing app and call WASM functions. And then the second product that we just launched
is called ModSurfer, which is kind of a system of record that gives you critical insights into
your Wasm code that you may not see at surface level, and a bunch of different operational tools
to make use of that data once it's loaded into ModSurfer. Amazing. So why did you one day
say, I got to start a company in this space? It's like a pretty bold statement to be like,
this is the future. You're betting big on it. It's super interesting. Like, why are you here?
Why are you doing this? Because I think that will help us really understand, like, why is it so
important? Sure. It goes way back to me just being firstly a programming language nerd and just
loving the intricate details of different languages and how their primitives are established and patterns are created, down to how you parse that language and how you execute it. WebAssembly started out as a language to execute inside of the browser for general-purpose computing.
And in doing so, it had to be designed to be very low-level and compact, to run in a variety of different places.
Not everybody has an M2 Mac on their desk.
Lots of people are running browsers still on very restricted, small compute environments.
And so it needed to be better than JavaScript in terms of its ability to be parsed and executed.
Needed to be secure.
Being in the browser is the most hostile environment I think we know.
Where code can be loaded from any variety of endpoints on the internet and executed inside your browser, which has access to potentially some sensitive information on a page.
And so the execution environment needed to be secure.
And the code needed to be portable because there's lots of different browsers out there.
And they're on different machine targets and different CPU architectures.
And they run on different operating systems.
This combination of a new language to run in the browser, being secure by default, and being portable, knowing that my code can run in a variety of different environments, really got me excited.
At the time, I was dealing with a lot of Docker, Kubernetes things where I got to build an app, containerize it, make sure that the container is ready to go for our x86 servers.
And then all of a sudden, we find ARM is more cost effective,
and we're going to move the whole stack to ARM servers.
We've got to recompile all of our containers
and ship them back to the registry.
And it was just kind of a headache, and I thought to myself,
well, let me simply solve some of that.
And like, it's interesting enough to go explore.
But I really remember the first time I thought like,
what is going on here?
I need to dive really deeply.
I was working on a dev tool and it was written in Go. Developer tools are typically a little CLI
application that runs in the terminal. And it produces output that gets flushed to standard out
and thought, I want to build a demo website to kind of showcase this thing. And Go had recently
added support for a JavaScript-specific environment to execute WebAssembly and to target Wasm as its
binary output. And so I thought, okay, I can compile this thing to Wasm and try to run it in
the browser. And that was a little difficult, but I got it to work. And the first time I saw that
same output that normally had been flushed to standard out in my terminal in my browser,
I just was floored. It was from that point on, this was in like 2018, that I thought, I've got to do something here. The story continues, and I'll go a little more quickly through it, in that I joined Cloudflare and was working on the Workers platform. Rust was a really great candidate at the time, being one of the best languages with support to target WebAssembly.
I personally am a big fan of the Rust programming language. And so I thought,
I'm going to build a little framework so that I can compile my Rust code to Wasm and seamlessly
link to all the different APIs that Cloudflare and also the web platform provide on the Workers platform.
That was a more challenging experience
than getting that one little dev tool
to compile to Wasm and run on the browser
and illuminated a lot of the things
that led to me realizing a company needs to exist
to focus on these problems,
to bring the solutions to market
and give developers the level of maturity
that they deserve and expect out of their tooling,
especially for something that is becoming
so dominant as WebAssembly is.
That's an interesting story about your jump
from a CLI tool to Cloudflare.
And then one more hop, right?
Quantum computing, you're building compilers.
Yes.
How has that influenced production?
Because quantum is not going to even be
in production for a while.
But what is the part of that toolchain work that
kind of influenced your new company as well?
Yeah, so a pit stop between Cloudflare
and starting Dylibso was working on compilers,
predominantly focusing on the LLVM toolchain.
And the overarching goal was to try to blend
into a single executable quantum instructions
that would be executed on a quantum processor
located somewhere else in the
world, addressed over the cloud, and classical instructions locally on the CPU that is actually executing that binary.
And so I was working with a team, a group of individuals from a bunch of different companies
in the space to design an intermediate representation that would effectively capture the minimum set of
quantum instructions that could be executed on a variety of hardware backends,
and create a compiler that would translate those instructions to the ones that our hardware would natively be able to execute, but also then, like I mentioned, blend together the readout data from
the quantum execution into a classical program and feed that data back into a quantum program,
so you have this tighter loop of execution versus running some quantum
code, getting the result back over the cloud, putting it into another program, executing that,
and then sending more information back to the quantum computer. Being able to blend those two,
co-locating those execution and reducing the gap between kind of a ping pong back and forth
with the quantum processor and the classical processor. It was all just to get a speedup in the compute time.
But it's challenging to work at that low of a level with intermediate representations
from different languages and seamlessly kind of combine them and ensure that the program
is still correct and still does the original intent.
Being at that level and working with LLVM was really interesting because LLVM is actually
one of the most popular paths that most high-level languages take to compile down to WebAssembly as the final backend architecture.
It kind of gave me this insight that, first of all, with WebAssembly it is much easier to blend two languages together, to take some code.
As long as I can compile it to Wasm, I can call that code from my other Wasm code. And that was an inspiring benefit, just based on the challenge that I had been having in getting
these two different environments to link together.
So yeah, it just kind of gave me this last bit of confidence.
Okay, I understand these tools.
I understand why WebAssembly has a benefit here and an edge here.
And I think that that's only going to evolve into something bigger.
And being able to kind of see a glimpse into the future is also a really nice thing to have as a startup founder, being able to see what are some of the
problems that the rest of the ecosystem is going to face? And can we help solve some of those in
advance and be there to help others before they fall into the trap that we fell into?
You've actually already covered part of what we wanted to ask, which is: what is Wasm?
To put it in simple words, and maybe Steve, you can help us here as well.
Wasm, as you mentioned, is a single language that can compile and translate across physical hardware or architecture differences, just like how the JVM and all these other runtimes
have worked before.
And we're all very interested.
Wasm is like the hottest thing in the infrastructure, right?
Everybody's talking about it.
Everybody's looking at it.
But we also have a lot of questions.
This is not just like Docker.
This is just not like Kubernetes.
This is a language.
This is going to be a very different journey from what we've seen in the last few revolutions, I assume.
So first question, of course, you started a company around this, right?
You're betting on Wasm to be mainstream, somewhere in the short term, medium term.
How do you tell developers that Wasm is actually really interesting today to use?
I think it's very use case dependent.
There are obvious use cases where the benefits are very clear.
If you have, for example, a small program that you want to deploy into a cloud environment
or wherever, and minimize the burden on the developer or the operator of that platform
to ship that code and deploy that code and run that code.
As long as your platform has a WebAssembly runtime and you can effectively map ingress, you know, HTTP or whatever inbound, to running that function, a WebAssembly binary that does effectively the same thing
that an equivalent Docker container
or containerized program would do,
you're dealing with an order of magnitude smaller,
or less in many cases,
binary size for the artifact to deploy.
That means I can pack way more programs
and functions and stuff
onto the same set of resources
than I could, to contrast with like a pod in
Kubernetes land, right? So size and resource usage is a huge benefit. The other is startup speed.
Unlike the JVM, if you want to contrast it with that, which takes some time to start and it takes
some time to JIT the Java bytecode and then execute that Java bytecode, WebAssembly has an
incredibly predictable and consistent startup experience, where the cold start time, which is the term we all know from the containerized
kind of serverless world, is next to zero. We're talking about sub-millisecond cold start times to
actually get your function running. And that can mean the difference when, in aggregate, you're running three
or four functions in kind of a waterfall approach from a front end to execute a few different things on the back end, where containers could add
up to seconds of startup time.
And WebAssembly can start up in milliseconds.
I forget what exactly the stat is from Amazon from years ago, but every second of latency is 1% of
sales lost, or something like that. And so it's very meaningful for systems that are
doing real things, selling products or whatever, where startup time is critical.
So from a serverless perspective, kind of functions invoking code in reaction to
HTTP triggers or message queues and things like that, WebAssembly has a very strong story there.
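As a rough sketch of that "map HTTP ingress to a Wasm function" pattern, a tiny Go host using the Wasmtime Go bindings might look like the following. The handler.wasm file, its exported handle function, and the /run route are hypothetical, and a real platform like Spin or Workers wires this plumbing up for you; the import path may also need a version suffix depending on the release you pin.

```go
package main

import (
	"fmt"
	"net/http"

	// Wasmtime's Go bindings; newer releases publish a versioned module
	// path (e.g. .../wasmtime-go/vNN), so adjust the import accordingly.
	"github.com/bytecodealliance/wasmtime-go"
)

func main() {
	engine := wasmtime.NewEngine()

	// Compile once at startup. "handler.wasm" and its exported "handle"
	// function are placeholders for whatever module you actually deploy.
	module, err := wasmtime.NewModuleFromFile(engine, "handler.wasm")
	if err != nil {
		panic(err)
	}

	http.HandleFunc("/run", func(w http.ResponseWriter, r *http.Request) {
		// Instantiate per request: this is the "cold start" for the
		// function, and it is typically sub-millisecond.
		store := wasmtime.NewStore(engine)
		instance, err := wasmtime.NewInstance(store, module, []wasmtime.AsExtern{})
		if err != nil {
			http.Error(w, err.Error(), http.StatusInternalServerError)
			return
		}
		handle := instance.GetFunc(store, "handle")
		if handle == nil {
			http.Error(w, "module does not export 'handle'", http.StatusInternalServerError)
			return
		}
		result, err := handle.Call(store)
		if err != nil {
			http.Error(w, err.Error(), http.StatusInternalServerError)
			return
		}
		fmt.Fprintf(w, "wasm returned: %v\n", result)
	})

	http.ListenAndServe(":8080", nil)
}
```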
I think one that's told maybe less often that we champion is the use of plugins. So if I have a
program and maybe my program is written in Go, you have a limited set of options to have a kind
of extension system in that program. You want it to do more, your customers want it to do more,
but your backlog is already too full and you're never going to get to those little features that your customer is asking for. And also, how do you ever predict
exactly everything that a customer is going to want your program to do? It's impossible. So
plugin systems are popular to allow customers to extend the functionality of your thing.
But most of the time, you're kind of handicapped in the options you've got. You're sacrificing
performance or safety with every different option you pick.
Either you're going to shell out to a binary on the system. Who knows what it's going to do?
That binary could be anything. You're not even sure it's what you're calling. You might load
a shared object. Well, if I load that, I get great performance. I sacrifice security because that
code now has full access to my code. Maybe I call out over a network, it'll hit a microservice
somewhere. There's latency involved
there. And so WebAssembly actually provides the best of both worlds in that it is a secure sandbox
runtime. I can embed into my program and I can execute code that has been written in a variety
of different languages. It's not dependent on the language that my program is written in,
or a language that I'm imposing on my users.
For contrast, it's very common to embed a Lua or JavaScript engine inside of a program. And at that rate,
you're prescribing how your user has to interact with your program at the plugin level versus
giving that end user the language of their choice and letting them compile to Wasm.
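To make the plugin picture concrete, here is a minimal sketch of what a host written in Go might look like using the Extism Go SDK. The plugin.wasm file and its greet export are hypothetical, and exact SDK names can differ between versions, so treat this as an illustration of the shape of the API rather than a definitive reference.

```go
package main

import (
	"context"
	"fmt"

	extism "github.com/extism/go-sdk"
)

func main() {
	// The plugin can be written in any language that compiles to Wasm;
	// "plugin.wasm" and the "greet" export below are placeholders.
	manifest := extism.Manifest{
		Wasm: []extism.Wasm{
			extism.WasmFile{Path: "plugin.wasm"},
		},
	}

	// The plugin runs inside a sandbox; WASI is opted into explicitly.
	plugin, err := extism.NewPlugin(context.Background(), manifest, extism.PluginConfig{EnableWasi: true}, nil)
	if err != nil {
		panic(err)
	}

	// Calls cross the sandbox boundary as plain bytes in, bytes out.
	exitCode, output, err := plugin.Call("greet", []byte("world"))
	if err != nil || exitCode != 0 {
		panic(fmt.Sprintf("plugin call failed: %v (exit %d)", err, exitCode))
	}
	fmt.Println(string(output))
}
```

The host never links the plugin's code directly, which is the trade Steve describes: near shared-object call performance, but the isolation of a sandbox.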
Wasm also has, like I mentioned, very predictable and consistent execution speeds, in many cases near native, and in some cases, better than native, depending on
the language you're in. For example, if I'm in an interpreted language like JavaScript or Python
or Ruby, it's not uncommon for code in WebAssembly to perform better and faster than the language
it's embedded in. I think that is also a really strong, interesting use case as well. That's great. There's a whole field of stuff to hit on there. I mean, you hit
on like portability, common bytecode, what that means from a portability standpoint, you hit on
like extensibility and WASI. I want to step back for a second, though, before we go deeper:
let's define, from your perspective, what are the success stories today of Wasm, from like
production companies using WASM to power like production workloads?
And why are they successful?
Where are we at in terms of like Wasm's march to like production use cases and why?
I'll use two use cases here on kind of different ends of the spectrum, both of which have to do with extending a platform's capability for its end users, but in very different ways of implementation.
The first is with the Shopify Functions product.
This is kind of the next evolution of their platform.
For those who don't know who are listening, Shopify is the biggest kind of online storefront
platform to sell products on the internet.
And Shopify historically had APIs, which developers can use to sort of
extend the functionality and customize the platform. They've had an app product where
developers can kind of ship their own integrations into Shopify to make it do more than it's designed
to do. But they were always limited in like the depth into the platform in which that end user's
code could interact with Shopify. And that's changed now that they have adopted WebAssembly
and also, by the way, have really pushed forward
the capabilities of WebAssembly,
especially through the Wasmtime runtime.
We owe a great debt of gratitude to Shopify
and the team there for helping really push the needle forward
in a number of ways.
And Shopify's Functions product allows for a developer to basically inject
arbitrary logic through a number of steps in the checkout flow or the product creation flow.
So for example, if I have a user who's adding $100 worth of product to their cart,
I can get that number before they check out and offer them a discount if they add another $20
worth of product. And for merchants who have a very distinct set of products
or have a variety of different needs or requirements
for their products and their pricing,
it becomes impossible to try to create a configuration page
or something for this kind of a process.
So how many forms or checkboxes and buttons
can you actually add to give users the
configurability? Well, it's much better expressed in language and code. And so what this product
allows for is a developer to say, hey, you know what, I'm actually going to write some code that
says, give me some data from Shopify, give me the cart object that tells me maybe who the user is,
how much the value of the cart is, what products are in the cart,
and let me inspect that, maybe enhance it with some of my own data that I can pull in from
elsewhere, and construct a new cart object that I return back to Shopify to treat as like,
this is where you should actually end the sale and charge the cart.
And so people are building really interesting things and adding new functionality to this
platform that has already served a number of merchants incredibly well, but now the extensibility is just at a new level.
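As a hedged sketch of what that kind of checkout customization can look like in principle: a WASI-style function reads the cart as JSON on stdin, applies the merchant's logic, and writes a result to stdout. The field names below are hypothetical, not Shopify's actual schema; the point is how small the unit of customization is.

```go
package main

import (
	"encoding/json"
	"os"
)

// Hypothetical shapes; a real platform defines its own input/output schema.
type CartInput struct {
	CartTotal float64 `json:"cartTotal"`
}

type DiscountOutput struct {
	DiscountPercent float64 `json:"discountPercent"`
	Message         string  `json:"message"`
}

func main() {
	var in CartInput
	if err := json.NewDecoder(os.Stdin).Decode(&in); err != nil {
		os.Exit(1)
	}

	out := DiscountOutput{}
	// The merchant's custom logic: nudge $100+ carts toward a bigger order.
	if in.CartTotal >= 100 {
		out.DiscountPercent = 10
		out.Message = "Add $20 more and save 10%"
	}

	json.NewEncoder(os.Stdout).Encode(out)
}
```

Compiled to a Wasm/WASI binary (for example with TinyGo), a module like this is the kind of artifact a platform can safely run at the right step of its flow.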
The other use case is in an embedded environment inside of a database.
There's a company out there called SingleStore, and SingleStore has a very sophisticated implementation that gives database developers the ability to express queries in code
that is not SQL. And so I can ship a WASM module inside my database and interact with the data in
the database. Instead of writing a query to pull that data out, move it into my application,
iterate through the data, change the data, manipulate the data, and put the data back
in the database. With SingleStore, you can actually write a query in languages like JavaScript and Rust
and Go, compile it to WebAssembly, and more comfortably express the problem that you're
trying to solve in a language that you know.
And the other interesting element of this is that there's this whole movement to try
to bring compute to data instead of bringing
data to compute. It's very expensive to move data around a network. Bandwidth and egress fees can be
crazy on different cloud providers. And so to be able to instead ship two megabytes of WebAssembly
code to your database and operate on the data there, compared to shipping 10 gigabytes of
data to your application in a container makes a ton
of sense from a cost perspective. And so I think we've really just started to scratch the surface
there on where do we see embedded compute inside of data projects that largely is implemented using
a WebAssembly runtime inside that database. Amazing. I mean, that's why I've been so super
excited. Both of those are great examples at polar opposites: one, the embedded database, and then on the other side, the existing platform that wants to be extended.
Previously, the ways for us to extend platforms were like, I have to go build an API service, and I have to host a REST API, and there's OAuth, and there's a whole thing I have to run.
It's got to be fast. Scroll back to 2005, and the state of the art was, Salesforce is so cool with their Apex platform and all of that, which most developers look back on today and say, oh, that's just terrible.
But now with Wasm, we have these new ways to extend these platforms.
Also, I think your point around like compute, bringing the application logic to the data is super interesting, the single store example.
We've also seen these other types of use cases for Wasm. For example, Fermyon with this,
like we're going to build a full app as Wasm.
Help us understand what's the idea here.
Because I feel like that is maybe in the middle.
It's a developer-focused story.
You're going to build your app using entirely Wasm.
Why is that desirable?
What's your understanding?
It would be great to understand that as well.
Yeah, shout out to the team at Fermyon.
They're doing awesome stuff.
Their runtime called Spin just hit 1.0.
So major accomplishment and exciting to see
the great progress being made there.
Fermyon and other clouds like Cosmonic, Fastly, Cloudflare,
have WebAssembly runtimes that sit basically behind
an ingress point in a cloud environment.
And so you send an HTTP request, and in the configuration of a service there is basically a trigger that says,
okay, when this route is hit, execute this function that I've defined in WebAssembly.
And I touched on one of the benefits earlier, which is about the size of the artifact that's actually deployed and executed. Tremendously smaller artifacts
shipped to these environments, which reduces cost, reduces the amount of RAM necessary to execute.
And then therefore, you can pack more WebAssembly code onto a single instance of whatever you're
running, whether it be a VM or a container or whatever. So there are cost benefits. The other
is the startup speed. So if I'm building a purely event-driven function
as a service architecture,
the time to actually execute the function
is not impacted as much by its startup time.
And you also don't have to keep resources warm
to ensure that the container is ready to serve traffic.
It can be shut down, scaled to zero.
It's very effective that way.
The other thing that from a developer's perspective
that I think is underappreciated
or maybe yet to be appreciated
is the idea of WebAssembly's import and export interface.
Everything in WebAssembly effectively boils down
to the definition of an ABI, an application binary interface,
that allows for functions to be called
by the Wasm module, given to it from its host,
and functions from the Wasm module that are called by the host.
The first are the imports, code that I get from my host environment, and the second are the exports,
functions that I provide to my environment. In a platform like Fermyon, there are ways to simplify the operation and integration of services with resources.
So we've all run web services that talk to a database that have to probably load some bespoke
Go ORM or Postgres driver that knows the wire protocol, knows all the different events and
messages that are sent between the
application and the database. That library is different for every language. Some of them have
different bugs, some of them have different constraints. And so it can be a hassle for
developers who are working in environments where I've got my Go program and it's talking to
Postgres and I've got another container in the same cloud, a Rust program that's talking to MySQL and a Python program that's talking to Redis.
Well, instead of having to literally connect to a database in the application layer and have all of that plumbed through as language-specific library code, I can rely on an import to provide me the capability to communicate to a database.
So I'm still going to write my
SQL code if I'm talking to Postgres in Postgres-flavored SQL. I'm still going to write
my SQL in MySQL. But I don't have to actually manage the connection. I don't have to actually
understand, okay, my library in Go needs to be able to communicate with this particular database.
Instead, I have an abstraction between my application code and the resource that it's going to talk to.
And that's handled at the host layer.
So, Fermyon or Cosmonic or whomever will actually provide a contract to you that you can just use.
So, it might be KV.
It might be RDBMS.
It might be S3 or cloud storage. Those APIs you can just program against as if they are built into your application, and that abstracts
away the difficulty of having to connect to that database or manage that resource, or even frankly,
like spin that resource up. That resource can already be available to the developer on that
cloud platform. And so the integration and operation of working with cloud resources
from these applications can be dramatically simplified.
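Here is a hedged sketch of what relying on a host-provided capability looks like from the guest side, using Go's //go:wasmimport directive (available for Wasm targets in recent Go and TinyGo toolchains). The host_kv module and kv_get function are hypothetical stand-ins for whatever contract a platform actually exposes; build it with GOOS=wasip1 GOARCH=wasm.

```go
package main

import "unsafe"

// The host promises to provide this function. The module name "host_kv"
// and the function "kv_get" are hypothetical placeholders for a platform's
// real contract (for example a key-value or database capability).
//
//go:wasmimport host_kv kv_get
func kvGet(keyPtr unsafe.Pointer, keyLen uint32, valPtr unsafe.Pointer, valCap uint32) uint32

func lookup(key string) string {
	buf := make([]byte, 1024)
	kb := []byte(key)
	// Hand the host pointers into our linear memory; it fills buf and
	// returns how many bytes it wrote.
	n := kvGet(unsafe.Pointer(&kb[0]), uint32(len(kb)), unsafe.Pointer(&buf[0]), uint32(len(buf)))
	return string(buf[:n])
}

func main() {
	// Instead of opening a database connection ourselves, we just call the
	// capability the host wired in for us.
	println(lookup("greeting"))
}
```

The guest never sees a wire protocol or a driver; the host decides what "kv_get" actually talks to, which is the abstraction being described.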
That's pretty incredible.
When I sit back and think about what you just said,
I kind of think of it as like, since the 1970s, right,
the abstraction we've all worked with is the POSIX process, right?
And even what Docker did was allow us to ideally bundle up a process with all of its runtime configuration and all its operating system dependencies
and kind of ship and move it around.
And we didn't have to spend as much time with Puppet and imaging VMs.
And that was great.
What Kubernetes did is make it really easy for us to orchestrate all those processes.
And what I'm hearing from you across all these use cases is basically what we're saying is
the abstraction that developers are building against is actually moving up to the function.
And that's being standardized because we have the shared runtime, it gives portability,
and this WASI and the ability for us to create sort of standardized, let's say, interfaces
for different types of functions.
And now we have a whole new layer of orchestration that will emerge, which is orchestrating functions,
which in many ways is kind of like saying, I say this in jest, but also not entirely,
that we really are kind of
emerging to the point of like, well, the next layer of compute or the way we build our apps
is more about orchestrating functions, which is what we said FaaS was, you know, serverless was,
but this is actually the way that we get there. That's how I think about it, but I'd love to get
your perspective. I 100% agree. The one caveat is that reducing it down to the function in all cases is probably too
low. And full applications can absolutely be built that are composed of multiple functions.
We don't need to necessarily treat WebAssembly as just this single function, run it as my
microservice. Large applications can be built. And in fact, in other contexts outside of the cloud,
we're already seeing this. And again, this was like kind of the initial wow demo. The predecessor to WebAssembly
was a technology called asm.js. And you had, you know, an Unreal Engine game, a full, like, first-person
shooter, you know, an amazing game, compiled from what would otherwise run on, you know, a desktop
environment or a console, running in the browser with incredible
graphics, super smooth, great interactivity, could be networked. That is a full application
of great sophistication that has been compiled to WebAssembly and is running in the WASM runtime.
And so while, yes, a function as a service, as the unit of deployment for WebAssembly is a great
story, I think there's
still, you know, a whole page that we're just turning that's going to show how sophisticated
applications can be and still be compiled to this very compact binary format.
This is going to be a biased question, but do you think the future of programming is most of
our apps compiled to Wasm and then we're just deploying Wasm modules? Or how do you think
about the future of application development
involving Wasm? I think it's dependent on the adoption curve, you know, accelerating a little
bit. And I think that we have a responsibility here to improve the status quo in order for that
adoption curve to be met, to see the future that I think WebAssembly can provide for us. And I think
that there's a middle ground that is a
certainty, which is that WebAssembly will be ingrained in the world of compute forever moving forward as
a bridge to be able to take code from one language and compile it into Wasm and load it into a
program of a completely different language and do that easily, safely, and with still really good
performance. And to see partial applications or partial bits of your full architecture
implemented in serverless using WebAssembly where it makes sense.
I do believe that there could be a future where WebAssembly
is the dominant architecture that programs are compiled to.
And we can reduce those programs down to the function
so that we have this infinite mesh
of functions to pick from when we need them. It doesn't matter they're compiled from Rust or from
Go or from Python or from Ruby or wherever. As long as it satisfies the need of this particular
program, I can dynamically link together and compose hundreds of functions from all over the
web or wherever I'm orchestrating code from and execute a program that meets the needs of its particular environment and runtime
that potentially could be composed at that runtime.
That's pretty neat.
I mean, it sounds kind of science fiction-y,
but if you think about it, having this consistent environment
and the same instruction set,
and we have to kind of smooth out some of the rough edges
when it comes to a host runtime having a very specific set of imports and exports that it expects to have.
But having this consistency across the different compute environments really provides for the ability to say that I've got some code that I want to distribute across a number of different endpoints based on the computation needs of the program.
So maybe I run a little bit of code on my watch
and I run a little bit of code in the browser.
I run a little bit of the code at the edge
and a little bit of it in the data center,
all of which are satisfying the same program,
but we're distributing the compute
across wherever it actually makes sense.
So maybe some of the data comes from the webcam
of the desktop and some of it comes off a factory floor
and a sensor and some of it is being pulled
out of a database that's stored in the cloud, all together merged into this one kind
of cohesive program where the compute is actually distributed across a number of different platforms,
but it's the same code because it's in WebAssembly, and the same runtime can execute
that code independent of, you know, where they are running. To me, that's kind of the future.
I think that could be a very awesome future to live in,
but we do certainly have a ways to go before we get there.
Yeah.
So I think that's an interesting way to paint a picture for sure.
And that's what everybody is believing, right?
We believe at some point everybody will use Wasm.
Everybody should just use Wasm or something like that.
The "ways to go" part is probably what I want to hone in on.
And this is probably the hardest question to answer right now, at least for me too, because I
worked on Docker and Kubernetes for quite some time. I worked on Mesos, I was made one of the
maintainers for that. And seeing that layer evolve, it's very different from how Wasm evolves, you know,
because I put some PRs into Wasmtime, you know, I've been figuring out what's going on in that layer too.
And it's obviously very different because it's a language, right?
It's a runtime.
And the specification is still being discussed every quarter, every week of small things added to the spec.
It almost feels like this is back in the day when we were trying to do something at the cgroups level in Docker.
Anything of that nature, right?
Any namespace stuff,
you just wait for six to nine months, right?
Waiting to finally get into the kernel.
Finally, you know, Linus finally says, okay.
And it just kind of goes round and round.
At some point, one day it will be available.
I think it's just hard to move super fast in Wasm
when you are having a specification here.
And then you have to figure out
how the rest of the tool chain goes.
So what do you see as the next frontier
of folks using Wasm in production?
Is it all plugins and databases?
And I'm curious what you see coming next.
What are things we're going to unblock right away
that will have a new production use case?
Because that's quite hard to figure out right now.
Yeah, I completely agree.
And I think one of the biggest chunks
of kind of specification that is being worked out
that will have the largest individual impact
on that future that we're describing
is what's known as the component model.
It is an evolution of another spec
that was dropped called interface types
with the ultimate goal of being able to take
the WebAssembly module or a component, which is effectively like a sub-module, and link
it to another component or module while knowing and understanding the interface between those
two so that the interoperation between those modules from language A and language B is
effectively seamless.
And you can kind of think about this as an IDL, like protobuf, if you're familiar.
You define messages in a gRPC environment, define services in which those messages can be consumed and sent.
And it's a very similar idea with the component model in that you'd have a descriptive IDL that talks about the types and the function signatures.
The component model describes a way to then encode
and decode that data into Wasm memory,
and then a way for the modules to know how to interact
with each other using that kind of common known format.
And it's still very early,
but there's already quite a bit of support
in a variety of languages that can target WebAssembly.
But it's really going to come down to a general
agreement across a bunch of different ecosystems that this is the model to adopt and to push
forward.
And there isn't agreement yet in all ecosystems that it's the right way to go, or at least
the only way to go.
But unfortunately, when you're dealing with this level of interop, it kind of needs to
be the only way.
Otherwise, things clash and don't meet up and don't align in the way they need to for a function in Rust
compiled to Wasm to call a function in Go compiled to Wasm and make it easy for developers to
use all these components in their programs. So I think that the component model, once solved and
agreed upon and shipped in a kind of final form, will be a huge element of the answer to that question of what kind of still needs to be done.
And in the meantime, people are solving this problem in their own flavor.
They're picking a different IDL to kind of generate bindings for the host and the guest code to interop.
Or they're bringing in a different serialization format to share data between modules. That is leading to a little bit of
fracturing in the ecosystem, but it's still very early. And I think that the best ideas will rise
to the top. And ultimately, I talked to a lot of people in the ecosystem, and the general consensus
is like, once the component model is ready, we will migrate to it.
And I think that's the take
that many people who are working on things
in the WebAssembly ecosystem all agree on.
Okay, once it can actually solve my needs,
we will use it.
But there are still updates every day to these specs,
and teams are working really hard
and they understand the impact
of making a decision like this
because it's going to be around for a long time
and you can't take it back
once you put it out there. Yeah, this reminds me actually of cgroups v2 or v3, one of those, right? It took forever,
like three, four years, to finally get into, you know, the mainstream kernel, and Docker was finally able to ship.
But I think the key for you, because I noticed in Extism you also built your own serialization,
right, for arguments between different languages, so you can actually make that work. So I think every project or vendor,
however you call it,
has to make something work now, right?
Without waiting for a component model
because that component model
will land next year, three years,
hopefully sooner than that,
but definitely not in two months, right?
I don't think a working group
like this works fast
because you've got to have consensus,
arguments, everybody's talking about it.
So I think at least Dylibso, you're taking on a path that we're going to build plugins
and then we'll build debugging tools.
And that's one major key unlock for users and developers, key productivity.
What is the thought process here?
Everybody is taking a little different direction.
Some are building frameworks, as you mentioned, the Spins of the world.
You're building the tools.
How do you see tools like yours being adopted?
Are you looking for particular places or markets like database developers or Rust people to try to get in, let them use it right now?
Or what are the key early adopters, I would say, that are growing fast,
that we're not even aware of, that you're seeing traction from? We focus on four kind of core
verticals of usage in WebAssembly. And one of them that we've talked a lot about is serverless.
Lots of people are finding interesting ways to integrate this into their stack. The second is
plugins, which we are largely kind of pushing forward with projects like Extism.
The third is browser technology, which, again, is the original home for Wasm.
Still can be very difficult to integrate with a web platform, but there's a tremendous amount of usage there.
And for really large companies, porting applications that were previously only for the desktop or only for mobile or whatever, and bringing them into the browser.
And the last is Web3 and blockchain.
Most blockchains that have a smart contract platform
actually execute those smart contracts as WebAssembly components and modules.
So between those four verticals, we're trying to be agnostic at first
to all four of them and provide tools that are primitive
and useful to developers in every category to, you know,
kind of get out there and talk to companies in these verticals. It's a shout-out: if you're
listening and you're within any of those categories or others, and you're having problems, please talk
to me. I'm @nilslice on Twitter or on GitHub, or you can email me at steve@dylibso.com.
And over time, develop more vertical specific tools and software and solutions that
help developers or companies adopt Wasm in those core verticals. We focus on those because that's
where we've seen the most adoption and uptick in interest and real production usage. But you also
have IoT, embedded database stuff like we were talking about. Every single one of these verticals
has its own problems to solve.
So we are really trying to firstly take this broad
and primitive kind of low level approach
to the tools that we're building
that are agnostic to any runtime in particular
or any use case in particular,
but are at the level of, like, a Git
that you would need to write code and version code.
Like that's applicable to everybody who's writing code.
And so that product I mentioned in the very beginning of the episode called ModSurfer
is really a visibility and debugging and code management tool.
I have a bunch of Wasm code.
What's inside of it?
How can I debug a mismatch between my imports and exports?
Can I, as an operator, search a huge database of modules for one that has this
particular function or that imports from this particular namespace?
And then for folks like the CISOs and CTOs who are responsible for staying within compliance,
ModSurfer has this auditing feature where I can actually audit my entire database of
modules for modules that are reading from the environment or calling the get-environment
function from the WASI namespace,
and therefore can indicate that maybe this module
has access to sensitive data that it shouldn't,
and therefore we can fix that problem.
And so we're really thinking about,
once WebAssembly is in use,
how do we make sure that people have the tools they need
to keep it in use and give operators the ability
to understand those systems.
And we're working on observability
tools that are agnostic to any runtime, so that you can compile your Wasm code and get real-time
feedback about monitoring and function calls and memory allocations. So everything that we do,
we're trying to take a platform-agnostic approach so that it does not preclude a browser
user from having the same access to great tooling that happens to be applicable to a serverless user.
We want to have consistency across every single one of those verticals.
And I think that's one of the really big opportunities with WebAssembly is that you have this
consistent instruction set and architecture for every single WASM program that you build.
And therefore, if the tools work with the WASM binary and the instruction set itself,
then independent of whoever is using it
and in which vertical, they can benefit from that tooling as well.
In your day-to-day talking to developers,
what's the wall they hit
that consistently causes pain for them?
If I'm a first-time user trying to build some code,
maybe I'm trying to use Shopify functions.
Maybe I'm trying to just
build a plugin for Envoy, by the way.
Shout out to Envoy, which is amazing.
Network stack for routing traffic.
Like, where's the pain today?
Like, how sophisticated of an engineer
do you need to be to get this stuff to work?
And how far down the road can you get
before you're kind of digging
into the deep details of, like,
you know, a GitHub code base that you may or may not want to ever look inside?
You can get pretty far.
I mean, I feel like it's improved so much over the last couple of years
with the proliferation of runtimes and platforms that have been built
to kind of special purpose the use of WebAssembly,
whether it be in serverless or maybe it is a web framework
that takes your
Rust code and runs it in the browser. Where things start to fall apart and where we see a lot of
friction is when the developer doesn't realize the implications of something like WASI and what it
actually means to take like a standard library component or a system resource that is abstracted
into the language, like a thread, or spawning
another process, or reading or writing a file. These boil down to system calls to the operating
system.
It's abstracted to you in the language.
Most of the time, your standard library comes back with a file from an open call.
Well, that's just Go code or just Rust code, but you peel back the layers and actually
that's an fopen, running
a system call to the kernel. And we don't have a kernel. We don't have system calls in WebAssembly.
And so therefore that code, out of the box, is not going to work. And that's where a specification
and set of libraries and functions called WASI comes in, to provide kind of a shim between your
code that expects to be able to call some
system call, whether you know it or not, and bridge that through the runtime into the actual
operating system itself. And developers are understandably coming up to speed. This is a new
thing for many people, that you can't just take any off-the-shelf library from crates.io or whatever
the package management system for your code is and
expect it to just work when you compile to WebAssembly. It is a unique target. It's a unique
instruction set. And so therefore you need a tool chain and sysroot and a whole other collection of
compiler level things that make your code work in the WebAssembly environment. And sometimes those
things are just flat out unavailable. And that is, I think, the biggest blocker, where it's like, okay, I'm back to x86
now. Can't do Wasm right here. But it is absolutely possible for all of those things to eventually
work. We just need to continue putting in the time and effort to make those implementations
generic enough and align with the way that the WebAssembly ecosystem wants to move.
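To make the "there is no kernel" point concrete, here is a small sketch of ordinary-looking Go code that only works under a Wasm/WASI build because the runtime supplies the WASI shim and has been granted the relevant capabilities; the file name is hypothetical.

```go
package main

import (
	"fmt"
	"os"
	"time"
)

func main() {
	// os.ReadFile looks like a normal standard-library call, but under a
	// Wasm/WASI build (GOOS=wasip1 GOARCH=wasm) it bottoms out in WASI
	// calls like path_open and fd_read instead of kernel syscalls, and it
	// only succeeds if the runtime preopened this directory for the module.
	data, err := os.ReadFile("config.txt") // hypothetical file
	if err != nil {
		fmt.Println("no filesystem capability granted:", err)
		return
	}
	fmt.Println(string(data))

	// The clock is a capability too: time.Now works here only because WASI
	// exposes clock_time_get to the module.
	fmt.Println(time.Now())
}
```

Built with something like GOOS=wasip1 GOARCH=wasm go build, it will still fail at runtime unless the host grants filesystem access (for example via Wasmtime's --dir flag), which is exactly the capability model being described.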
We consistently hear that from users. It's like, why can't my time library compile?
It's like, well, there's no clock.
So where'd you get your time?
You know, which is a weird thing to say, but it's true.
Yeah.
Yeah.
It definitely feels like there's so much promise.
There's so much amazing value you can get, but yeah, you don't want death
by a thousand paper cuts, or some
analogy like that. Yeah, but this is super helpful, because I think we're all trying to figure out
where the state of the land is and how we peek open the ecosystem a bit more to figure out where
it's going to head. So where do people find you and also learn more about Dylibso and all your
products? Sure. Head to dylibso.com. You can find our products there,
currently limited to Extism and ModSurfer.
We'll have more to announce in the coming months.
I'm also available on Twitter at @nilslice,
as well as on GitHub.
Please open issues on our repos.
We'd love to hear from you.
And then Xtism has a very rich Discord
with lots of great people experimenting
and building stuff.
So join the Discord. You know,
we're happy to have you there and nerd out about WebAssembly in general.
Come join the fun.
Awesome. Thanks so much, Steve, for joining our pod.
Absolutely. Thanks for having me.
Thank you so much.