The Changelog: Software Development, Open Source - Containerizing compute driven workloads with Singularity (Interview)
Episode Date: February 28, 2019
We're talking with Greg Kurtzer, the founder of CentOS, Warewulf, and most recently Singularity — an open source container platform designed to be simple, fast, and secure. Singularity is optimized for enterprise and high-performance computing workloads. What's interesting is how Singularity allows untrusted users to run untrusted containers in a trusted way. We cover the backstory, Singularity Pro and how they're not holding the open source community version hostage, as well as how Singularity is being used to containerize and support workflows in artificial intelligence, machine learning, deep learning, and more.
Transcript
Bandwidth for ChangeLog is provided by Fastly.
Learn more at Fastly.com.
We move fast and fix things here at ChangeLog because of Rollbar.
Check them out at Rollbar.com.
And we're hosted on Linode cloud servers.
Head to Linode.com slash ChangeLog.
This episode is brought to you by Linode, our cloud server of choice.
And we're excited to share the recent launch of dedicated CPU instances. If you have build boxes, CI/CD, video encoding, machine learning,
game servers, databases, data mining, or application servers that need to be full duty,
100% CPU all day, every day, then check out Linode's dedicated CPU instances.
These instances are fully dedicated and shared with no one else, so there's no CPU steal or competing for these resources with other Linodes. Pricing is very competitive and starts out at 30 bucks a month. Learn more and get started at linode.com slash changelog. Again, linode.com slash changelog.

All right, welcome back, everyone.
This is the ChangeLog, a podcast featuring the hackers, the leaders, and the innovators of software development.
I'm Adam Stachowiak, Editor-in-Chief here at ChangeLog.
Today, Jared and I are talking to Greg Kurtzer, the founder of CentOS, Warewulf, and most recently Singularity, an open source container platform designed to be fast, simple, and secure. It's optimized for enterprise and high performance computing workloads.
And what's interesting is how Singularity allows untrusted users to run untrusted containers
in a trusted way. We cover the backstory, Singularity Pro, and how they're not holding
the open source community version hostage, as well as how Singularity is being used to
containerize and support workflows
in artificial intelligence, machine learning, deep learning, and so much more.
So Greg, I have to tell you that Singularity was not on my radar.
It wasn't on Adam's radar.
But we had two different people ping us, listeners of the show.
First, Jacob Chappell, aka PHBHavoc.
Thanks, Jacob.
And then Andre Marcelo Tanner.
I feel like I've ruined that name before.
KZap.
KZap.
KZap, we know well.
He hangs out on our Slack.
Within a few days, saying, hey, you got to do a show on Singularity.
And they both gave really strong pitches.
So it seems like maybe you have a hidden gem here.
There's lots of people using it, but there's a lot of other people who have no idea what Singularity is.
So let's just start off with what it is, and then we'll figure out why it's so quiet.
That is a great question.
So probably best to start off with a little bit of my background, what created it, and why there is this large number of users. I mean, we've definitely hit critical mass, but nobody knows of it. So it's just kind of a weird dichotomy. And so to start off with,
you know, I've spent a number of years, almost 20 years working for Lawrence Berkeley National
Laboratory as a high performance computing architect. In that role, I had the opportunity
to work with a lot of researchers, a lot of scientists, and people that had problems that
they needed to solve computationally. In that role, I would develop these large HPC systems
in order to solve the problems, or whatever it was we had to build the systems for. So, I'd say maybe six years ago now, time flies,
scientists started asking for containers. And my site, as well as a lot of other sites,
started looking at the various container options that exist out there.
Mostly, it was all Docker at that point. And when we looked through this, and we looked at
the architecture of Docker, and we know the architecture of our HPC resources pretty well, we tried to superimpose one upon the other and found that it didn't fit. And it's nothing against Docker, or the security, or anything regarding Docker.
Docker is a fantastic solution.
It was designed for something very specific.
But that architecture didn't transpose very well to high-performance computing. I mean, we have, you know, sometimes tens, hundreds, or even thousands of unprivileged users.
Users that don't have root, users that are just regular users on the system, and they have shell access, and they're going to run their HPC and their compute-focused jobs across, you know, lots of nodes, maybe, you know, hundreds or thousands of compute nodes.
And the Docker solution was just not really fitting that need.
So I basically, I would respond to most of the users and the scientists and I'd say,
no, sorry, we can't support containers on the system.
This went on for, I would say, at least half a year of basically just having to keep saying, no,
I'm sorry, we can't do it. To the point where I did something kind of novel, which is I asked
the scientists and the researchers and the users, I said, what problems are you trying to solve?
What is the issue? And why are you asking for containers? And we got some really interesting
and fantastic responses. And a lot of it is along the lines of we needed things like
reproducible software stacks. We need mobility of compute. We need absolute control of the
environment that we need to run on an HPC system, and so on and so forth.
There's more.
And from the HPC side, from a system side, we're like, okay, well, we can't give any of these users root.
We can't give them any mechanism to get root.
So we have to build something that's very specific.
We have to support workflows like MPI and other compute-based workflows and resource managers, which are kind of like the equivalent of orchestrators in the enterprise world.
And we have to support all of this sort of stuff and support this infrastructure.
So I started looking at this and I said, well, okay, if I were to develop a solution from scratch, what would that look like?
And I prototyped something.
And when I prototyped it, I showed it to a few scientists who basically said, this is fantastic. When can you install it? I thought, well, first I need to
actually write it. This is a prototype, not something that I can actually install. I've
created various projects in the open source realm. And the one thing that seems to be somewhat
consistent is every time I start a big open source project, I know it's going to be successful when the first implementation of it,
the first version sucks and people laugh at.
If I write something and it gets out there
and people start saying,
this isn't all that good.
You're kind of on some good ideas here,
but the first implementation just kind of,
let's start over.
Let's wipe it and begin again.
Why do you think that is?
Oh, I don't know. I think I'm just weird. I see things a little differently. So the first version actually ended up being extraordinarily like what Ubuntu Snaps is today. Very, very similar to that. I was basically doing a ptrace of an application as it executes. I was watching all of the system calls, everything that it's opening, and building an environment, a reproducible environment, based on that ptrace, that run.
And then I would build what looks like a container out of that.
That was version one of Singularity.
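The version-one approach described above can be sketched as a small pipeline: trace an application's system calls, keep the files it successfully opened, and treat that de-duplicated list as the manifest of a reproducible environment. The sketch below fabricates a tiny strace-style log for illustration only; the file names and exact log format are assumptions, not Singularity's actual internals.

```shell
# Sketch of the v1 idea: derive an environment manifest from traced opens.
# The log below is a fabricated sample; on a real system you might generate
# one with: strace -f -e trace=open,openat -o app.trace ./myapp

cat > app.trace <<'EOF'
openat(AT_FDCWD, "/lib/x86_64-linux-gnu/libc.so.6", O_RDONLY|O_CLOEXEC) = 3
openat(AT_FDCWD, "/usr/share/app/config.yaml", O_RDONLY) = 4
openat(AT_FDCWD, "/missing/file", O_RDONLY) = -1 ENOENT (No such file or directory)
openat(AT_FDCWD, "/lib/x86_64-linux-gnu/libc.so.6", O_RDONLY|O_CLOEXEC) = 5
EOF

# Keep only successful opens, extract the quoted path, de-duplicate:
grep -v 'ENOENT' app.trace \
  | sed -n 's/.*"\([^"]*\)".*/\1/p' \
  | sort -u > manifest.txt

cat manifest.txt
```

From a manifest like this, a container-like bundle of the application's dependencies can then be assembled.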
Released it and everyone basically said the same thing.
This is great.
You're onto some really great ideas here,
but we need more to really make this fantastic.
So when people started articulating this to me,
I said, well, okay,
so we have to revamp this a little bit. And, you know, I decided early on that the first major version is going to articulate the format of the container image. One of the reasons why we call it Singularity, and I'm jumping around here a lot, I'm sorry about that, is because it uses a single image format. And that's been the case from day one. So we basically said that as soon as we make that change in the container image format, we have to increment the major version of Singularity. So within like two months, we went from version one to version two.
And version two lasted for years. I mean,
the uptake was just phenomenal. Within about six months, it was installed on most of the biggest supercomputers in the world and just continues to grow through not just high-performance computing, but through this whole new area of enterprise-focused compute like AI, machine learning, compute-driven analytics, data science.
All of these new areas where enterprises, non-traditionally compute-focused centers, are now trying to do compute all of a sudden. And Singularity, really being designed for that compute focus, being really good at that, and solving the problems around data mobility, containerization, reproducible workflows, and trusted containers, right?
Being able to sign your actual container image, package that up, and then move it around, and then know that you can always validate it and guarantee immutability, guarantee the fact that it's not been tampered with.
Things like that.
You know, this is basically built into the direct architecture of Singularity.
I mean, these are some of the primary tenets of why we created it.
So it makes it a very smooth transition going from HPC and science-focused compute to things like AI and machine learning and so on in these other areas.
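To make the trust model concrete: Singularity's real mechanism is cryptographic signing (singularity sign and singularity verify in the 3.x CLI), but the underlying tamper-detection idea can be illustrated with a plain checksum. This is only a stand-in for the concept, not the actual implementation, and the file names here are made up.

```shell
# Stand-in for image verification: record a digest at build time,
# then re-check it after the image has been moved around.
# "container.sif" here is a dummy file, not a real Singularity image.

printf 'pretend this is an immutable container image\n' > container.sif
sha256sum container.sif > container.sif.sha256   # record the digest

# Later, e.g. after copying the image to another host, verify it:
sha256sum -c container.sif.sha256 && echo "image intact"

# Any modification is detected on the next check:
printf 'tampered\n' >> container.sif
sha256sum -c container.sif.sha256 || echo "image tampered with"
```

The real signature-based scheme adds what a bare checksum cannot: proof of who built the image, since the digest itself is signed with the publisher's key.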
So we've built this critical mass.
We have a lot of users using Singularity at this point. I did a download and GitHub clone count, and of course you have to approximate this quite a bit, but we can account for it being installed on some of the biggest supercomputers in the world, and actually even being on the RFPs as line items for some of the biggest computers in the world. So we've had great advances. You know, we have a huge community at this point,
yet nobody outside of compute has even heard of Singularity.
So a couple of thoughts there.
First of all, we did a show about high-performance computing last year.
Looking at the ship date, it was actually almost a year ago to the day. It was with Todd Gamblin, who's a listener and a friend of ours.
He works at Lawrence Livermore National Laboratory. And he was sharing this whole world that Adam and I know very little about. And there seems to be a gap between what we call, I don't know,
like industry or developers versus academia and research.
And the people who are doing high-performance computing,
there just hasn't been much overlap in knowledge sharing
and tooling and stuff like this.
And so that kind of explains why this is brand new to so many people.
And yet, like you said, inside of that community, you guys have hit critical mass.
You have huge computers running this.
You have NVIDIA using it.
You have Harvard.
You have all these very important institutions using it, but very few people heard about
it.
And I just want to highlight what you pointed out there.
It seems like maybe artificial intelligence needs
with deep learning and other high performance computing needs
are now moving over into enterprise.
And that might be kind of the tie that binds these two worlds together,
or at least starts to.
Yeah.
There is a person that I love to quote for this line. Al Gara, a Fellow at Intel, has said that with the cross-pollination of AI, traditional simulation, which is HPC and compute, machine learning, and so on and so forth, we're going to end up with systems and technology that basically cross those chasms.
You know, we're basically able to start tying some of this together.
Some of the compute side that's coming from HPC, as well as the compute side that Enterprise is looking at, and leverage the technology from both.
So, for example, we've been doing things like with distributed parallel jobs for a long time.
We know how to do that.
We know how to do things like parallel file systems.
We know how to do things like very efficient batch scheduling, right?
And on the enterprise side, the HPC world is like, well, now there's this new thing called orchestration. How can we redevelop some of our scientific workflows to be service-based compute versus
batch-based compute?
And how can we use that to do real-time analytics, data processing, and so on and so forth using
this new technology called orchestration?
And how do we cross that chasm?
And so Singularity has been picked out, you know, by a lot of different organizations, as this primary area for cross-pollination. And so this is where we see a huge opportunity, and we're starting to see a lot of uptake in the needs of, as I said before, enterprise-focused compute.
And we're also seeing that this is a new area, right?
So I spoke to one enterprise who is really leading the advancement in, for them at least,
in AI.
They're doing a phenomenal job there.
But by the same token, they don't have anything to do with things like batch schedulers.
They don't have anything to do with distribution of jobs and actually building up an infrastructure and a resource that can actually support hundreds or thousands of these AI-type jobs and the training of these AI models. So we're seeing some really interesting necessity for that cross-pollination just because what's new in enterprise, HPC has been doing for 30 years.
Right.
And vice versa is happening, right?
Some of the new developments in enterprise and massive scale and support for this massive scale is now starting to get an interest in the HPC world, both science as well as commercially driven HPC.
That's awesome.
It goes to one of the things that I talk about a lot on this show, which is the cross-pollination
of ideas and techniques and even code from one industry to another, or from one language slash ecosystem to another.
And just the benefits of that across the open source world are amazing to behold.
It sounds like there's a really big market opportunity here, maybe an arbitrage, where you have like, here's a bunch of stuff that HPC people are good at, and here's a bunch
of enterprises who are ready to make money off of these things, and they need those things.
And so maybe that's where Singularity and Sylabs try to sit in that gap and fill that need.
Can I bring you on my pitch
when I go out to VCs and whatnot?
Exactly.
Yes, yes.
Huge opportunity.
Yeah, there you go.
Just listen to this and buy this license, or whatever you're selling.
Yes.
Oh, pitch.
Yes.
So we're raising money for sure
Okay, so one of the things I did want to talk about... you mentioned that your version one, you know, it was really just a go at it, kind of a proof of concept maybe. Or, like you said, if people are somewhat skeptical, or they think you're crazy, maybe you're onto a good idea. Your version two seemed like maybe that was a semantic versioning kind of a thing, where it was just like it was going to break existing users, and so that's why you went to it. One thing that KZap said in his pitch, speaking of pitches, on why we should do this show with you, is that there was this big rewrite for version 3, from Python and Bash to Go and C. Just curious if you could maybe elucidate why the rewrite, how did it go, etc., etc.
yeah absolutely
and by the way I love
how you're reasoning through that
it was definitely a proof of concept
version 1 and we
totally meant to do that
in reality
that's funny
He meant to do that, but everything's intentional in retrospect, right?
Yeah, exactly.
Yeah, so I like how you positioned it like that. Take me on your pitches. I'm good.
So, what was the question? Oh yeah. So, as we've been moving forward... version one was this kind of
prototype, right? Proof of concept. Version two is, you know, it basically solidified the idea
and kind of the model of what we want to do and where we're going with it. But it was also
developed in a silo. So, I mean, it's over three years old at this point. This is before OCI
existed. This is before CNCF existed. It was pretty much, it was Docker.
Everybody talking about containers was talking about Docker. Docker, of course, wasn't the only container system at this point, but it was definitely the lion's share in terms of what brought containers forward and created a household name for containers.
So most everything was really focused towards Docker at this point. We did some work to enable the compatibility with Docker because there were a lot of containers.
There are a lot of containers that are in Docker Hub and Docker registries and whatnot.
So we basically took the Singularity base, which in the first version, as well as the second version, was predominantly C. I wrote it, you know, mostly in C. And we had some fantastic contributions
and people from the community that jumped on board and basically said, well, we've got a
whole bunch of containers that are out there in Docker. We need to support those containers. We need to somehow leverage those existing containers, that existing work,
bring that into Singularity in a way that makes sense, and then build support for that.
So that was what we've done through version two. So the first version of Singularity version two
didn't support Docker at all. But as we got to 2.1, 2.2, 2.3, and so on and so forth,
that was brought in via Python. So we had a bunch of code that was written. Vanessa Sochat from Stanford has done a fantastic job at building all of that and bringing in that support for compatibility with Docker. But in the end, it was kind of a re-implementation: looking at the public documentation, looking at the API, what can we do with it, and how do we re-implement that? So it never had perfect support for Docker, and really, it was just because we were not using OCI slash Docker code. So
when we started evaluating, you know, what should we be doing as we're moving forward, we realized Singularity had really spent its life in a silo, a silo of science and HPC, focusing on that side of compute.
From the time that we introduced Singularity, we developed Singularity, and we're on our
own pathway, which is different than what the rest of the enterprise ecosystem was going in, right? We had OCI develop, CNCF,
and we had other container runtimes that are kind of coming up and gaining headway and gaining
traction. But they're all kind of still focused on the same kind of set of original goals as what
Docker was focused on. And here we are going in
a completely opposite direction. And to speak honestly, we're just as much to blame for the lack of cross-pollination as anyone, because we weren't looking. We were not watching what
enterprise is doing. So we came up with a solution. We're going in our way, doing our thing.
And it wasn't until later when people started saying, we're using this for AI,
or this is the perfect solution now for AI or enterprise-focused compute. Here's the tools,
and here's the applications, and here's the APIs that we need to support as we're going
towards enterprise. And that has a lot to do with Kubernetes. And so we're like, okay, how do we now backtrack and start supporting a lot of what the enterprise and what the industry has already standardized on?
So now we have to back up and say, what's the best way of doing this?
Now, because we developed in a silo, things are really different for us than they are for what most people are used to
when they think of containers. So it is a very different structure. It's a very
different feel for containers. So for example, a container is an actual file. It's a file that
sits on your computer. If you want to move that container somewhere else, well, you can SCP it,
you can FTP it, you can transfer it however you want.
You can put it on an NFS server.
Or in an HPC realm, you can put it on a parallel file server like Lustre or GPFS.
And you can run it from there.
And so it's a very different kind of look and feel.
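Because the image is a single file, distribution really is just ordinary file handling. Here's a minimal sketch, using cp as a stand-in for scp and a dummy file in place of a real .sif image; the paths are made up, and the commented singularity commands assume Singularity is installed.

```shell
# A Singularity image is a single file; moving it is plain file transfer.
mkdir -p /tmp/nfs-share                 # stand-in for an NFS or Lustre mount
printf 'dummy image bytes\n' > container.sif

cp container.sif /tmp/nfs-share/        # in real life: scp container.sif user@host:
cmp container.sif /tmp/nfs-share/container.sif && echo "bit-identical copy"

# With Singularity installed, you could run it straight from the share:
#   singularity shell /tmp/nfs-share/container.sif
#   singularity shell docker://ubuntu    # or pull from a Docker registry
```

The point is that no registry or daemon sits between you and the image; any transport or shared filesystem that moves bytes intact will do.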
So if you want to run a container, the command is literally singularity shell, pointed at that file. If you want to run a Docker container, well, the command is singularity shell docker://, pointed at that registry, or Docker Hub, or wherever it is. If you want to run from an OCI bundle or something, you know, we support that too now. So, to get back on track, because my thoughts kind of bounce all over the place, I apologize for that. But we basically developed in a silo. So now, how do we properly interface with things like Kubernetes?
How do we properly support new types of compute-based workflows?
How do we absolutely 100% trust the containers that we run on?
So things like cryptographic validation, things like encryption, right?
How do we do that?
So those are the things that we've been working on,
and we've solved most of them at this point.
But again, it's a different solution than what most people are used to.
So we're kind of a little late to the party,
even though we were early to the party.
It's almost like there's two parties, and then it's like,
let's merge these two parties into one.
Hey, everybody, did you know Singularity exists at this party over here?
But given your experience on open source, I'm curious why you think you operate in a silo.
Yeah, I was going to ask the same thing. Good question.
Yeah, it is a good question.
We operate in a silo just because of the ecosystem of what high-performance computing is typically like, right? We have these very large systems with a very different architecture than pretty much everything that's being done, at least to my knowledge, on the enterprise side.
From our perspective, there's just not a lot of necessity for cross-pollination.
And going back three, four years, AI was just really starting to pick up steam and whatnot.
And most enterprises, it wasn't even on their roadmap yet.
So there wasn't even a necessity for any cross-pollination.
But from my perspective, that's really the gist of why it was kind of pigeonholed into just the HPC sector.
So another open source project that I created, which is still alive and kicking, and is actually used as the basis of provisioning in something called OpenHPC, is a very large scale operating system provisioning and management system called Warewulf. And I founded that project in 2001, and still lead it today, although honestly, I haven't been spending as much time on it
for a couple of reasons. First off, Singularity has been taking most of my time, but also it's incredibly stable. And the few changes that we need have been basically driven and spearheaded by the OpenHPC community, which is a Linux Foundation and Intel project.
But there's not been much cross-pollination there either.
At some point, there were some fairly large web infrastructures that decided they wanted to use Warewulf to manage their web server load. But aside from that, I've never heard of anybody in enterprise using something like Warewulf to manage all of their servers.
What is the opposite of not working in a silo?
So if you're working in a silo, what does that look like?
Do you just not give talks?
Do you not talk to user groups locally?
Are your docs not open?
Are you not tweeting about it?
What does not being in a silo look like or the opposite?
So the HPC industry is really big. At least by 2020, it's forecast to be about a $40 billion annual industry. There's a lot of conferences, there's a lot of user groups, there's a lot of meetups and all this, but that whole thing is kind of siloed. The example you guys gave about there being two parties going on is, I think, a really good one, because the people that were at that party almost never go to the other side, and vice versa.
I mean,
there's just not a lot of cross pollination.
They've been two completely separate worlds for so long,
you know,
at one of these conferences, at a Supercomputing event, which last year brought
in about 13,000 people, just to kind of give you an idea of scope. Usually wherever supercomputing
goes, we sell out the whole city. I mean, hotels usually going for like three to five times their
normal price because there's just no room anywhere. There were some people from the more
traditional enterprise side looking at some of the primary tools and resources that we rely on in high-performance computing, confused as to why they even exist.
Because Kubernetes can do that.
Or something else can do that.
And there's just such a misalignment between the two communities. And that cross-pollination,
especially now as we're seeing the advent of things like artificial intelligence and machine
learning focused on the enterprise side, I think this cross-pollination is really important, and it has to happen. I mean, it's going to happen. And to go back to the earlier point, that's really why we've created Sylabs, and why there's such an interest in commercializing this as an open source project.
How do we better support both sides of this?
And how do we sit in between and offer services to both sides and offer the benefit that both sides have been able to glean and then help to bring these two communities together?
This episode is brought to you by Clubhouse. One of the biggest problems software teams face is having clear expectations set in an environment where everyone can come together to focus on what matters most, and that's creating software and products their customers love. The problem is that software out there trying to solve this problem is either too simple and doesn't provide enough structure, or it's too complex and becomes very overwhelming. Clubhouse solves all these problems. It's the
first project management platform for software teams that brings everyone together. It's designed
from the ground up to be a developer-first product in its DNA, but also simple and intuitive enough
that all teams can enjoy using it. With a fast, intuitive interface, a simple API, and a robust
set of integrations, Clubhouse seamlessly integrates with the tools
you use every day and gets out of your way. Learn more and get started at clubhouse.io
slash changelog. Our listeners get a bonus two free months after your trial ends. Once again,
clubhouse.io slash changelog.

So Greg, we both mentioned Sylabs offhandedly, and we've obviously been talking about Singularity. You mentioned the reason for Sylabs at the end of the last segment. Let's talk about that relationship, the dichotomy between the company and the open source project. Maybe you can tell us about Singularity licensing and all that, what Sylabs brings to the table, and your thoughts on commercializing open source in general.
So, you know, with my prior hat on, working for the U.S. government's Department of Energy, building open source projects was always kind of an incidental thing. You know, it's like, we need something, let's build that, and hopefully the community will get involved and it'll help. There was never a necessity
to build a business model. Now, as I've now moved away from Department of Energy, created a company,
and this company is built around the idea of an open source project that has gained a lot of
momentum, gained a lot of steam. And how do you take the open source users, the open source
community, and monetize it in a way that allows us to be not only sustainable, but hopefully a
little bit profitable and not alienate the open source community, not do anything that creates a
resentment or creates any sort of a misalignment, right? And that's a challenge. You know, there's
been a lot of companies out here that
have really tried to monetize on open source projects. And again, it's a very difficult
tightrope to walk. Red Hat has been incredibly successful at this, and there's been others as
well, but there's also been ones that have not been successful. And there's one in our own ecosystem, in the container world, that even though they've done incredibly well for themselves from a business perspective, is still trying to figure out what that business model looks like.
So I have the luxury of coming at it from an open source side where I understand, you know, I've built communities,
I've built projects, and I've watched how companies have not done this right. And I've
seen only a couple that actually have done this right. And so I'm taking my own stab at it.
And basically, the main part of what we need to be doing is making sure that every piece of software, every line that we write for Singularity, is open source. The first thing: we don't have a private repository for Singularity within Sylabs. We basically push everything live. Every bit of development, everything we're testing, everything we're playing with, goes directly into the open source community. In a manner of speaking, you can almost think of that as Fedora. Then what we do is we'll take snapshots of that. As we've done releases, open source releases, we will take snapshots, and we will basically say, okay, we're going to
build this in a supportable way where we know exactly what it is. We know how we built it.
We know how it's supposed to work. It's curated. We know exactly what this is. And this is now a supported
version. We call this Singularity Pro. And we license that and we offer support on that.
But it is a feature equivalent to what's out there in the community. And we're doing that on purpose
because if we were to add any
additional features to that or make any additional spins on that that are not also available in the
open source version, then what we're doing is we're holding that open source community hostage,
in a manner of speaking. We're holding that project hostage, and we're holding it hostage to our business model. That model inherently is broken, because as soon as you limit or stunt the uptake of one, you're going to adversely affect the other.
So it is a mutualistic synergy between the open source version and the commercial version.
If the open source version does fantastic, we're hoping that we get some small percentage of that that will basically move over and become commercially supported.
And we will then be able to build a revenue line, build a business line.
Now, that's one business offering.
I've got a question for you there, on that note, before we move on.
Does that mean that others can support Singularity as well like you are?
So I'm thinking like the Tidelift models, for example.
Yeah, absolutely. You're not saying that you're the only supporter of it. It just means that you're taking those snapshots, putting them to the side, calling it Singularity Pro, and providing support and licensing.
Yeah. And it's a risk, right?
Yeah.
But that's the risk with open source. I mean, anybody can always fork an open source project and then spin it their own way.
I like how you're not holding it hostage, you said. The point I wanted to get at
was that you're not even holding
the business model hostage.
We believe a hundred percent in that. And this is one of the reasons why I think my open source projects have always done very well: I build them based on integrity. I build them based on stated values, stated ethics, and I maintain that. I believe that the best project, the best product, and the best supplier is going to win.
And what makes someone the best is, well, you're not only doing a fantastic job of what you set out to do, but you also have a high level of integrity and a high level of respect, and you want to work with people. So if somebody else were to come along and try to fork Singularity,
well, they're going to not only have to beat me on being better at support, but they're also
going to have to beat us on having higher levels of integrity and everything else. And if that's
the case, then they deserve to win. So that's the game that we're playing: we want to be the best. And we have an advantage, because the primary developers of Singularity, you can believe I have lured and hired; it's not just me anymore. As a matter of fact, I've hired and recruited people into the open source community and into the company who are much better at developing software and much smarter than I am, because believe me, many people can do this better than I can.
You know, I told you about my version one already.
So, I changed your direction. Go back to where you were going; I don't want to derail you completely.
Where was I going? I don't know. Completely derailed.
Good job, Adam.
Sorry about that.
So I'm trying to remember what the question was.
You were moving on from the fact that you're not holding the open source hostage, by describing how you're supporting it.
Oh, yeah, thank you. So the first
product that we have is basically just a respun version of the open source
code that's out there. And we professionally support it. We offer professional services.
We offer support for it, education, everything you can imagine. It's
somewhat obligatory, right? We have an open source project. We have this piece of software
that we're supporting and we're maintaining out there. We have to be able to support it. We have to be able
to help people with it. So those are the obligatory kind of offerings. Then we have some uniqueness
that this particular container system offers. So for example, one of these is we support
cryptographic signatures. I've alluded to this previously.
So if you were to sign a container, and remember, our containers are a different format from
OCI, right?
This is a new format.
The format is capable of supporting OCI and encapsulating OCI and Docker containers and
whatnot.
So we can take all of that and we can properly encapsulate
it into a single file. We no longer require any registries or anything to run that. But because
it's now in a single file, a single binary file that has an open standard behind it, we can do
things like cryptographically sign it. And in this file, which was originally modeled after the ELF binary format, we can add an object block for a cryptographic signature.
So now this cryptographic signature block can basically do things like guarantee immutability and accountability for the file.
This is really interesting, because when you build a container, when you sign a container, it uses traditional PGP-style
public and private keys to do this. So you're going to sign with a private key. Nobody else
has that private key. And when you distribute it, you want to share your public key out.
So when people validate, they're going to be validating against your public key and they
can again guarantee immutability. And because they have your public key, they can guarantee
accountability. They know who signed it. So one opportunity that we have is to add value
to this open source project. So whether somebody's using this as an open source project or whether
somebody's using our commercial project, if we were to, let's say, host a key store for public keys, so it's very easy to cryptographically
sign your containers, push those keys into our key management service, and then however you
distribute your container, wherever you bring it, you can do a singularity
verify pointed at that file, and it knows how to contact our key store, or you can run
your own key store on-prem and validate that container and see who signed it.
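To make that concrete, the sign-and-verify flow described here maps onto the Singularity command line roughly as follows. This is a sketch assuming the Singularity 3.x CLI; the image name and the fingerprint placeholder are illustrative, not from the episode:

```shell
# Create a PGP keypair in the local Singularity keyring
singularity key newpair

# Sign the image; the signature block is embedded in the SIF file itself
singularity sign mycontainer.sif

# Push the public key to a key store so consumers can verify against it
singularity key push <your-key-fingerprint>

# Anyone holding the file can then check integrity and who signed it
singularity verify mycontainer.sif
```

Because the signature lives inside the single SIF file, verification travels with the container however it is distributed.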
So now you can say definitively, well, if I trust Greg, I will trust any containers that
he creates. Same if I trust Ubuntu, or NVIDIA, or Red Hat, or SUSE, right? As long as
they've signed it with their key, you can now absolutely guarantee that level
of trust. This is really important when you start thinking about some of the recent CVEs that just happened
within the container ecosystem.
There was a CVE where a malicious container could actually do damage to the container runtime on the
host. Let's say, for example, you're using runC, or Docker or something. If you were to spin up a malicious container, it could actually create a Trojan inside the program that spun it up. So runC had a CVE just recently against it (CVE-2019-5736).
Because that's not supposed to be possible.
Oh, it's not supposed to be, but...
I don't think you understand how these things work.
That's why the signature is so important.
Yeah. So from our perspective, you can do one of two things, or you should do one of two things. You should
absolutely never run a container as root. That's first. Second, if you have to run a container as root,
you should never, under any circumstance, run an untrusted container as root.
So here's a really simple example. As a system administrator, I imagine that there's a lot of
system administrators that are going to be listening to this show. So as a system administrator
who has root on a very high visibility production system,
it's probably discouraged. It's probably looked down upon to go to the internet,
download a whole bunch of random code and start executing it as root on your production system.
Right. You don't want to run untrusted code as root. Now the container ecosystems do what they can to isolate,
but, you know, there's always going to be this... at least from my perspective, and now I'm getting into religion, so I'm sorry if I'm going up against
somebody's religion here, but POSIX was kind of defined around the idea of
users and privilege and whatnot. And we have a very strong standard for,
well, the super user is the super user, right?
When you're root, you're root.
The whole system is yours.
Now, what we're doing with the container ecosystem
is we're saying, yeah, but root over here
is not the same as root over here,
is not the same as root over here.
Everything's seeing kind of this different thing,
but by the same token,
POSIX has defined
and our traditional Unix standards
have defined that,
well, root is root, right?
So we're trying to limit what root can do.
And I know I'm going off
on another tangent here again,
but that's kind of the premise of this,
you know, the security issue
from our perspective, right?
If you want to limit the exposure,
well, don't run it as
root. But if you have to run it as root, make sure it's trusted. We offer a trusted solution here.
So going back now to the business model, and I'm sorry again for the tangent, in terms of the
business model, we have a key store. And this key store plus singularity plus the design of the
singularity image format gives us the ability to absolutely trust
these container environments.
So if you're going to run it as root,
you should run something you trust.
And we offer that as a service now.
So it is a free service that we're offering
and we're going to be figuring out
some way of monetizing something at some point.
The business people keep telling me that that's important.
So working through that.
But these are ways that you can...
Is it a popular service?
Say that again?
Is it a popular service at this point or is it still new?
We haven't even released at GA yet.
Okay.
As a matter of fact, we are going to be releasing at GA.
Soon to be popular.
Yeah, in about a month.
So we're expecting to see some increased uptake. But by
the same token, this is still brand new for people, right? I mean, most people think of containers, and
OCI has a portion of their spec talking about how to sign containers and whatnot, but it's signing
the metadata for those containers. It's not actually signing the runtime format, right?
Those containers are actually tarballs, and those tarballs get splat out to the disk,
and that creates new data.
The signed tarball, well, that relates to the tarball, not the new data.
And then that new data could honestly take a life of its own, and nobody would ever know.
So our format is the actual runtime format.
So, I mean, there's no metadata.
That's interesting.
There's no tarballs.
There's no, I mean, you just, you download like a 10 or 100 gigabyte container.
You type in Singularity Shell and you're instantly inside that container because it doesn't have
to splat anything out the disk.
And what you get is what's been signed.
Yes, exactly. Exactly.
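In concrete terms, the single-file workflow being described looks something like this, assuming the Singularity 3.x CLI and its public container library (the image name is illustrative):

```shell
# Pull the container as one SIF file; no registry is needed afterward
singularity pull library://alpine

# The runtime artifact itself is what was signed, so verify it directly
singularity verify alpine_latest.sif

# Drop straight into the container; nothing is unpacked to disk first
singularity shell alpine_latest.sif
```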
So that's one of our business models, which is to add value. Don't hold it hostage, but somehow add value to that open source piece of software. And we can do that commercially, right? That's a cloud service. That's not something that we're planning on open sourcing.
But you're making it free.
For now.
For now.
Well, we'll probably...
Yeah, so we've talked about the freemium thing too.
And I have mixed emotions about it.
But at this point, what we know for sure is we need to drive adoption.
We want people to use this.
We want people to be able to run trusted containers.
We want people to be able to leverage singularity in our format
and make good use of it within their environments, their ecosystems.
And we have to hit critical mass, right?
Or at least we're trying to.
We've hit it in HPC.
You mentioned pitch deck, so you're raising money, right?
Yes.
So you're going to have to hit critical mass.
That's the trade-off, right? I mean, if it was all bootstrapped, you could open up... is it called Sylabs Cloud? I've been reading about that. Is that what you're referring to, the service?
Yep.
You could open that up, just charge some money from day one, and if the gross revenues cover your expenses and there's some left over at the end of the day, you've got your profit. But you're not going that route.
Business 101, y'all. Thanks, Jared.
You're welcome. As I was saying, you're not going that route.
I'm taking notes.
You're going for the home run.
So we're trying to drive adoption and usage, not just get enough to pay the bills. We want to actually encourage people to utilize this.
We want to help support the ecosystem
and change how people think of trusted environments.
We want people to feel like they can absolutely trust
whatever environment that they're in
and manage that environment like, well,
any other data that they have to manage.
Did you consider closed source?
Was it even a thought?
Because that simplifies business cases a lot, but it complicates other things.
So for Singularity, never considered it.
For our cloud development, I mean, it's a cloud service.
So that's not something that we're open sourcing. So that is
definitely closed source. But we've actually gotten some really interesting information,
which is everybody's talking about cloud. Everybody's talking about getting all their
apps and everything up to the cloud. And so we developed this cloud service,
thinking everybody wants to go up to the cloud.
A really interesting spin on this is that, and it's something I totally didn't expect,
is that almost everybody... well, maybe not everybody, I'm exaggerating; maybe about half
of the people that we talk to want to run that on-prem.
And I wasn't expecting that. So we are figuring out how to re-license and re-brand
our cloud services and allow people to run those on-prem. And we have several different cloud
services at this point. The Keystore that I was mentioning before is just one of them.
Another one is a build service. So you can actually build containers without requiring
root or without requiring any sort of privilege escalation, because we have a service that does
that in a controlled way. We also have something called the container library. Now in compute,
and I'll give a little bit of background on this, in compute, there is various regulations in pharma, for example, and on the science side and the bio side, they have various FDA regulations even that they actually have to manage the environments for any software that contributes to a diagnosis of a medical issue,
anytime software is involved in that,
that entire stack has to be treated as a medical device.
And a medical device has to be archival for five to 10 years,
meaning we have to be able to reproduce those results
and reproduce that environment for five to 10 years.
Well, Singularity provides a perfect solution for that.
So our container library,
one of our services, is kind of built around the idea of what are the specific benefits of
this binary image format that we have, and how do we archive and always allow people to
go back to previous workflows, and so on and so forth. So that's a feature that we have in this container library.
And the other one is, you know, a lot of people talk about DevOps and they pass recipes around.
They pass the source code around, in a manner of speaking, for their environment. Well,
again, because our containers are 100% immutable and cryptographically verifiable, why don't we just pass the container around?
And then that way you never have to rebuild it. So it goes from the developer, you build that binary container, and then it can run through a DevOps pipeline that can be completely built up
for whatever the pipeline is, and customized via CI/CD integrations and whatnot,
and then come out the other side
as this binary immutable image has passed all of this.
This gives us the ability now to inject things
like the security teams into the DevOps workflow, right?
Because once this container image is going through this pipeline and it gets to security, and security is like, okay, I've audited this,
I feel good about it,
I'm going to sign it now and then continue it on its way.
Well, we just added a cryptographic signature to it, so that wherever you run it in production, you can say, I'm never going to trust this container unless it has this key fingerprint.
And if it has this fingerprint that your security team owns,
then I will allow it to run in production.
So it gives us the ability to do things like
inject security back into the DevOps workflow.
And it changes things.
It changes how we're doing this.
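A minimal sketch of that security gate, again assuming the Singularity 3.x tooling (the file name is hypothetical):

```shell
# In the pipeline: the security team audits the image, then signs off with its key
singularity sign app.sif

# In production: verify the signature and check that the reported
# key fingerprint is the one the security team owns before allowing the run
singularity verify app.sif
```

Singularity also documents an execution control list that an administrator can configure to restrict execution to images signed with approved key fingerprints, which automates the "only run with this fingerprint" policy described here.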
So all of this is in our cloud services
that we're building right now.
And people are asking for it
for not only cloud access,
but also on-prem. So in terms of, again, kind of building our model, the idea is if we're going to
build anything that's non-open source, it has to 100% add value, not hold hostage, but add value
to that open source code base.
This reminds me of Isaac Schlueter.
So we have another show... Dude, I was right there with you.
Were you right there with me?
Yes.
So we have a show. I'll pitch Adam's show here. Adam has a show called Founders Talk. And his most recent episode of Founders Talk, we'll link it up in the show notes for listeners. It's fascinating. He spoke with Isaac Schlueter, who's the co-founder and, what's he now? Chief product?
Former CEO, now chief product at npm, which is, you know, a hockey-stick-style package registry, which is a similar business to a container registry. And he had very interesting insights, which I won't share here. You can listen to all of what he says about going service-based and on-prem at the same time, and some of the things they've learned. So I'll just submit that to you, Greg, as something you might want to listen to, to learn from his experience. Not that it's going to one-to-one match what we're all up to, but anytime you can learn from somebody else's experience, you can save yourself bad experiences.
That just leans into the whole idea of cross-pollination.
Yeah, right? This is the JavaScript package management world potentially influencing Greg and future stuff around this cloud.
Yeah, exactly.
Thank you for that pointer, by the way.
Go check that out.
This episode is brought to you by Raygun.
Raygun recently launched their application performance monitoring service,
APM as it's called.
It was built with developers and DevOps in mind,
and they are leading with first-class support for .NET apps, also available as an Azure App Service.
They have plans to support .NET Core
followed by Java and Ruby in the very near future.
And they've done a ton of competitive research
between the current APM providers out there and where they excel is the level of detail they're surfacing. New Relic
and AppDynamics, for example, are more business-oriented, where Raygun has been built
with developers and DevOps in mind. The level of detail they're providing in traces allows you to
actively solve problems and dramatically boost your team's efficiency when diagnosing problems.
Deep dive into root cause with automatic link backs to source for an unbeatable issue resolution workflow.
This is awesome.
Check it out.
Learn more and get started at Raygun.com. Kubernetes is well known for its community.
If you have a conversation around Kubernetes, even the founder of it will say it's about community.
And they've had that lens since the beginning.
What is your perspective on community and how are you using that to grow?
Okay, so the community, the open source community and the idea of what it was to be an open source community and how to maintain an open source community has changed.
I mean, back when I first started doing it, the open source community
was brutal. People were mean, people were obnoxious, and they liked to prove everybody
else wrong and prove themselves being better. And it was competitive and whatnot. It took a very hard and calloused personality to be able to excel in it.
And I'm not that kind of a personality.
I've tried working with various other open source projects because I love the idea of
open source.
I'm a biochemist by degree turned into a computer geek because I thought it was totally cool
how we were able to create bioinformatics tools based on Linux in the mid-90s.
And I thought that was fantastic.
So I just immediately became enamored with the idea of open source.
But I didn't have the personality, honestly, to be part of that, to be calloused enough to handle that sort of thing.
So I actually found it easier
to start communities and start them with a different tone, start them with a much more
friendly tone, a much more considerate tone. And people always felt comfortable.
So as an example of this, I would start up a mailing list, start up an IRC server or join Freenode or something
and start building up the community, start chatting with other people. Back in the day,
we had Freshmeat. I don't know if you guys remember Freshmeat, where you post your open source.
I haven't heard that name in a long time.
Just a quick funny story. My mother at some point decided to do a search for me, and Freshmeat came up, and she was like, I don't know what Freshmeat is. I'm scared to click on the link. Why are you on Freshmeat? But I mean, Freshmeat was really the big way that you got your new open source stuff out there. And so I'd post to Freshmeat and
build this community. But it was always about being super friendly, wanting to just chat with people, be real with people, be open to people, be open to new ideas, teach people, bring them up to speed and whatnot. I mean, even early Warewulf days, early CentOS days, people would ask questions. On the CentOS mailing list, people would say, oh my God, I'm stuck in Vim and I can't get out. They'd post really simple, basic questions, and I was always so happy to just help out.
Fighting that fight on a daily basis.
Sorry.
No problem at all.
I'm just thinking of all of the... We have a tweet to prove it.
There's literally hundreds of developers currently stuck in Vim
as we speak, just trying to get out.
Listening to the show.
That's right.
And being a Vim user, I always ended up landing in Vim and did most of my development in Vim, so I feel the pain. At the same time, I think it's important to see the developer, the project lead, helping somebody with these extraordinarily basic questions. And I think that is absolutely necessary. And if it's the fifth time or the tenth time or the hundredth time that that question gets asked, it's obviously still an issue, and you should still answer it. Responding with, you know, if you'd just done a Google search and looked through the logs and the archives, you'd have been able to find that... I'm sorry, that's a pompous response. And it's not very welcoming.
So I would always be very supportive, very appreciative of everybody that came into the group, whether they're developers, whether they're users, and set the tone right away as that.
And set it from the top down.
I didn't tolerate people who adopted the jerk mentality.
And it's nice to see that many more open source projects are adopting this behavior,
but I feel as though it was a critical facet for what makes a good, strong community.
And how do you develop that strong community and keep it on track? I'm not sure if that completely answered the question, but that's been my experience with open source communities, leading them and running them.
Well, I'd definitely say your success speaks for itself, especially with CentOS and building that into what it still is today. I wasn't familiar with Warewulf; it didn't hit my radar previously. But I think it's safe to say that if you're looking at open source competitively, which some people do and some people don't, the friendliness and the expectation-setting and all the things you were doing back then were a competitive advantage. It sets you apart. It's like, wow, this is actually a nice group of people to be around. I would agree with you that more and more it's becoming kind of table stakes, to a certain degree, for successful communities in open source, for that to be the baseline of what they do.
I'm curious, specifically with Singularity and the dichotomy we've been discussing between the commercial enterprise and the open source project, how willing people are to contribute back to Singularity or to really buy into the project at a contributor level with the enterprise attached, with the commercial side attached. And then also, do you even want those kinds of contributions? There are open source projects where, somebody had a good term for it, you can look at the code, it's open code or viewable code, but it's not as if the company actually wants you helping unless you're doing a trivial bug fix or something. In terms of features and direction, how do you manage the community side of those things when you're trying to build a business around an open source project, where maybe the community's contributions may actually go against the business's interests?
So you brought up a couple of really just amazing points.
Thank you.
Tell me more.
So, for example, when you have a company that releases some software, and they release it to the open source community,
it's almost becoming a marketing initiative, right? They're not interested in the
collaborativeness and the openness and the community side of releasing software into the open source community; what they're really interested in is getting that stamp:
We are open source.
And they will just release it.
They don't take contributions.
They don't look favorably upon them.
Or when they do get them, they rework them, because they want to hold and manage all of
their own copyrights.
I've seen organizations that will actually rewrite PRs, rewrite patches using all of
their own resources because they don't want any contamination of copyright.
And so the fact that it is open source, it's a marketing vehicle, right?
So that is definitely, that's one side. There are companies out there doing it.
That is not our side at all. We are first and foremost, we started off as an open source
community. I developed a company, I have a lot of experience with open source. I'm a huge
open source advocate. And when I say open source, again, not from the marketing perspective, I'm an open source advocate because there are very important advantages that the open source community and the development model bring to bear. So we absolutely support collaboration. We want other companies,
other organizations, adding code, being part of Singularity, joining our Slack, joining our
Google group and contributing, whether it's just simple, here's my experiences using it,
whether it's documentation, whether it is working on the core code, or whether it's even going out there and just speaking at events and user groups and whatnot.
We absolutely want it.
We've had contributions from companies, from individuals, obviously, from companies.
We've had a lot of academic and other government involvement.
So all of this is incredibly important to us, and it is extraordinarily appreciated.
And this is why we put every bit of our code, first and foremost, into the master branch.
Because we want to engage in that collaboration. We want to foster that communication and build a product
that is both a project as well as a product that is really meeting the needs of the users,
right? We're not out here trying to push something that doesn't exist. I mean,
everything that we've done is because we're solving a problem that our users are having,
that people are complaining about, and we're solving pain points.
So that's kind of our model.
And if we're not engaging with that community of users,
we have no idea if we're even solving the right points.
And honestly, I want people solving those points with us.
A question on the pro versus open source.
I'm curious, just because I'm not involved
in the details of this,
how usable is Singularity on its own
as just open source
and not via the stamped version?
Is there any incentive
to use the straight up open source version
versus Singularity Pro?
If you don't need commercial support,
there is no advantage at all. Go use the open source stuff.
If you represent a company, if you represent an organization that doesn't want to rely on
best effort support from a group of people sitting in a Slack channel,
then that's when you want to contact us. Right.
But in terms of individuals, in terms of developers, in terms of contributors and many
organizations that honestly just want to work with open source software, go use
it.
There's no limitations.
We are not holding it hostage and we encourage it.
We would love people to be using our open source software.
Well, I don't think anybody will second guess your bona fides or however you say that, considering just your long history of building open source communities and projects.
So that's awesome.
How has it grown so far?
I mean, you have people out there who are championing this.
You have, like you said, we've named off a few of the organizations who are using Singularity. How has the contributor base grown beyond
Sylabs' quote-unquote walls? I know you're all remote and around the
world, so there aren't any walls. But beyond your payroll, have you
had an uptick in not just users? NVIDIA,
these other organizations, are they getting involved and really making
this feel like a community-driven project, or are you still trying to get that ball rolling outside of Sylabs?
It's kind of funny, because... not every time, but mostly when we have people join our community and they start being really productive, they're adding features, they're adding code, and they start to really get used to us, and we all like each other, we're all friends and we joke around a lot. And maybe this is good, maybe this is bad, I don't know, but I usually recruit them and try to get them to work for Sylabs.
And in doing so, a lot of times what we're seeing is...
Well, a lot of times you're gobbling up the community.
Yeah. And I don't know if that's good or bad. But the fact is, it's hard to find the development skills, the developer skills, that we need to basically run and create a whole container platform.
Yeah.
It requires not only a lot of knowledge in the upper end of application design, but also going all the way down to the kernel. And there's not many people that
are really anxious and eager and love to do operating system coding anymore. To kind of
poke fun at this a little bit: we saw this as well when I was working for the Department of
Energy. As we were trying to recruit scientists, it was really hard, because coming out of the universities and the PhD programs,
instead of doing research in science, they wanted to develop games for the iPhone.
They were developing a dog-walking app or something, instead of something solid,
instead of wanting to cure cancer or something along those lines.
So we've seen it as well, again, with my other hat on, my previous hat, as well as from Sylabs.
It's hard to find the right people.
And it's really nice when we do have an open source community
because we are attracting individuals,
not just corporations, but individuals,
individuals at those corporations.
But I mean, there's the personal side of it. You get to meet these individuals and develop relationships with them, and as you do, it's really easy to offer them jobs.
So the takeaway here, developers out there in developer land, is: hop on the Singularity repo, start contributing back significant things, and you're going to end up with a job at Sylabs.
Expect an email. Make sure you have your email in your GitHub profile, so it makes it easy.
That's right.
Yeah, exactly. So to get back to the question on a more serious note: we've had a lot of people join our
Slack, you know, both, you know, in contributing to GitHub as well as joining our Slack, being part
of our mailing list. We have an extremely active and friendly email list, the same thing on the
Slack side. It's a lot of very friendly people. Honestly, it's as much just idle chat and getting to know each other and having fun as it is developing code and coming up with new and innovative ideas for doing really amazing things.
We have a lot of people that are involved with the process on GitHub. Actually, I don't even want to quote a number because I don't remember, but it's not a huge project when you compare it to something like Kubernetes. I think it's about 1,000 stars, and now it's making me want to look. I think it's like 1,000 stars, and I think it's under 100 contributors, but it's a good amount.
We're all very appreciative of everybody who wants to join the community.
And again, someone doesn't have to be a developer to join.
As a matter of fact, we encourage non-developers as well,
because the amount of benefit in terms of feedback,
in terms of just looking at things, being part of this, helping potentially with documentation,
even just pointing out bugs,
pointing out issues that they're finding
or being a conduit for reaching other people is so valuable.
It's so helpful.
And so we are very receptive to that.
On the repo now, you've got 98 watchers, 998 stars, and, sorry, 252 forks on Singularity. So it'd be fun, after this podcast gets out there, to see how much this increases.
All right, listeners: get out there, star Singularity,
make us proud.
That's right.
Because we know that all value in open source can be derived from star count.
That's like the ultimate goal, right?
That's true.
That's the only metric that matters.
That's the only metric that matters in life.
I'm curious your focus, though, when it comes to the future.
You mentioned community.
You mentioned business.
You mentioned your principles around open source.
Where are you personally placing your focus around Singularity and Sylabs?
What are the biggest challenges you're facing today to move forward?
We are placing a lot of emphasis and investment in everything computational. So somebody asked
me recently, are we an HPC company? No, we're not. We are a compute company. We are focusing on
all of the different types of compute-based workloads that are out there. And we want to
use all of the cool tools to do that. Everything from Kubernetes to HPC resources, InfiniBand,
parallel file systems, batch scheduling systems, and go all the way out to edge and cloud and IoT. This is where we're spending a very reasonable investment
in terms of moving forward.
We want to facilitate the movement of AI workflows.
So for example, as opposed to more traditional compute, where basically you have a big HPC cluster and you run everything on it, a lot of AI and ML workloads are distributed. You may need a big HPC-type system to train that model, but once it's trained, you now have to distribute that model to wherever you're running your inferencing. And in many cases, where you're executing these models doesn't need a huge amount of compute resources. In some cases, of course, it does, but we're seeing really different types of workflows.
In a lot of cases, we're still doing science on these workflows, trying to figure out how best to support and optimize them. But they're really interesting to us, because Singularity offers a really elegant solve: how do you build and train your model, and then how do you distribute that model to where you're doing the inferencing? And where you're doing that, it could be any sort of workload, right? You could be doing streamed AI, you could be looking at data analytics, you could be doing all sorts of different things. So how do we support the workflow as a whole? How do we create an architecture, build a pattern that we can replicate easily, and better enable easy wins, when everybody's looking at AI?
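As a rough sketch of that train-then-distribute pattern using Singularity's own CLI; note that the definition file, hostnames, paths, and the inference script's flag below are all hypothetical, not something discussed in the conversation:

```shell
# On the HPC side: build a single immutable SIF image that bundles the
# runtime plus the trained model (inference.def is a hypothetical
# Singularity definition file)
sudo singularity build inference.sif inference.def

# Distribution is just moving one file to wherever inference runs:
# an edge node, a cloud VM, a workstation (edge-host is illustrative)
scp inference.sif edge-host:/opt/models/

# On the target: execute the image's runscript
# (--input belongs to the hypothetical bundled inference script)
singularity run /opt/models/inference.sif --input samples.json
```

The appeal of this pattern is that the "model artifact" is one self-contained file, so moving a workload between the cluster and the edge is a file copy rather than an environment rebuild.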
I mean, that's one of the big areas that big organizations are like, AI is on our roadmap.
We want to get there.
We want to do it.
But we don't know how to enable any quick wins.
And it's really complicated.
And it's a really high lift.
And we're just going to keep watching it for now.
We have this really cool technology that enables those quick wins, that enables the distribution of those workloads, that enables mobility of models. And that's something that we find really interesting as we're moving forward.
You mentioned at some point, I can't remember if it was just in the breaks or in the show, some jokes around slides and Jared helping you with the next VC pitch.
So you'd mentioned that you're raising funds.
If there's any venture capitalists listening, are you seriously raising funds?
Do they reach out to you?
What's your state of, I guess, fundraising?
And how does that play into the sustainability of the project?
Great questions.
We are seed funded at the moment, and right now we're burning on the seed as well as living off our revenue. So we are going to be doing a Series A pitch. Evaluating our Series A pitch and doing comparables, it seems like all of our comparables are more like Series B and greater, just because we've so greatly de-risked the company at this point. But yeah, there is a raise coming up here pretty soon, and I'd be happy to have any introductions if anybody listening is interested.
It's interesting, I say that because just a few shows ago we did an entire show with Joseph Jacks around OSS and commercially backed, venture-backed open source. So I'm sure we're picking up more interest around there, and I'm sure any new audience from that show is listening too. Plus, I'm sure there are a lot of VCs out there in the venture capital world paying closer attention, and they're listening to shows like this to get insights. We should have a changelog discount.
If anybody comes and they reference changelog, we can have a discount code.
Right. Give me a million and we'll act as if it's 1.2.
Yeah.
You get an extra 5% ownership.
Right.
That's no big deal, right?
5% isn't that much, right?
Not amongst friends.
We didn't really mention the Mac app either. I don't know how much that plays into it, but maybe you can paint the picture of the future of getting involved, for the people who want to play with this. There's a Mac app, there are some user groups coming up. Help people that are looking to get plugged in, get plugged in.
Fantastic point. You mentioned the Mac support. Something that we are working on right now, and should have released by next month, is something called Singularity Desktop, which is basically being able to run Singularity and all your containers on your Mac.
And again, the command line interface for Singularity is incredibly simple.
So it's like singularity shell pointed at your Ubuntu container, or your CentOS container, or SUSE, or TensorFlow even. You hit enter, and you're now sitting on your Mac, in that same terminal, but now running Linux inside that container.
No dependencies or anything.
You just install Singularity.
It manages all of that operating system support and whatnot.
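As a sketch of what that command-line interface looks like in practice; the image names and registry URIs here are illustrative examples, not ones mentioned in the conversation:

```shell
# Pull a container image from Docker Hub into a local SIF file
# (the ubuntu:18.04 tag is just an example)
singularity pull ubuntu.sif docker://ubuntu:18.04

# Drop into an interactive shell inside that container
singularity shell ubuntu.sif

# Or run a single command inside the container without an
# interactive session
singularity exec ubuntu.sif cat /etc/os-release
```

The container is an ordinary file on disk, which is what makes the "point the shell at a container and hit enter" experience so simple.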
We're also going to be doing the same thing for Windows here in a little bit. But of course, we're hitting Mac first, just because it's a little closer to home in terms of how to enable that. So imagine you don't need a VM anymore; you don't need to install VMware or anything to run Linux on your Mac. That's something that we're working on.
The other thing as well, again, this is kind of on the community side, is we've had a lot of people really interested
in a user group. So we reached out to some people and we said, hey, is anybody interested in this?
The San Diego Supercomputer Center basically raised their hand and said, yes, we're really interested in this and we'd love to help support it. So SDSC, the Supercomputer Center in San Diego,
is going to be hosting our user group.
It is next month.
We just closed our CFP.
So we're now just basically building up the agenda.
But if people are interested,
maybe that's something that, I don't know.
Guys, can we post the link for that?
Yeah, link in the show notes for sure. We'll make sure we get that from you and put it in the show notes. So listeners, you know that when you listen to a show like this, you can expect great links in the show notes. Hit that up.
Very cool. I didn't want to presume that I could plug it too much.
No, please do.
So, but yeah, that's happening next month. We have some great talks lined up.
It's going to be mostly focused on the containerization side of compute. So if people are interested in compute and how to containerize those workflows, both HPC and science as well as AI and enterprise-focused workflows, I encourage you to check out that user group.
Good deal.
Thank you so much,
Greg,
for coming on the show,
man.
It's been awesome to hear from you. And Jared, I'm so glad we're in the know now. I feel better about my life, because now we know about Singularity.
That's right.
Maybe that's what the audience feels like too. So audience, we have discussions on our show. Go to changelog.com, look up the podcast; every single podcast has discussions now. Greg, I'm sure you're going to be tuning in and listening to our community coming there and sharing more stories, or questions, or whatnot. So if you've got those questions, head to changelog.com and drop into the discussions
and have a chat with Greg
and the rest of us about singularity
and the future of where this is going.
But Greg, thank you so much for your time.
It's been a pleasure.
I love chatting with you guys.
And both the on-show
as well as the off-show discussions
have been fantastic.
A lot of fun.
Enjoyed it thoroughly.
And if you guys ever want to chat with me again, I'm open. You're welcome.
All right, thank you for tuning in to this episode of the Changelog. Hey, guess what? We have discussions on every single episode now, so head to changelog.com to discuss this episode
and if you want to help us grow this show, reach more listeners, and influence more developers,
do us a favor and give us a rating or review in iTunes or Apple Podcasts. If you use Overcast, give us a star.
If you tweet, tweet a link.
If you make lists of your favorite podcasts, include us in it.
And, of course, thank you to our sponsors, Linode, Clubhouse, and Raygun.
Also, thanks to Fastly, our bandwidth partner, Rollbar, our monitoring
service, and Linode, our
cloud server of choice.
This episode is hosted by myself, Adam Stachowiak, and Jerod Santo.
And our music is done by
Breakmaster Cylinder. If you want to hear more
episodes like this, subscribe to our
master feed at changelog.com
slash master
or go into your podcast app and search for changelog master.
You'll find it.
Thank you for tuning in this week.
We'll see you again soon.
I'm Nick Nisi.
This is K-Ball.
And I'm Rachel White.
We're panelists on JS Party,
a community celebration of JavaScript and the web.
Every Thursday at noon central,
a few of us get together and chat about JavaScript, Node,
and topics ranging from practical accessibility to weird web APIs.
Jared, I just have to ask a very serious question.
When you're using that operator,
do you actually blurt out, bang, bang?
If you're working in an office, would everybody just look at you?
I don't blurt it out, but I definitely say it in my head every single time.
Join us live on Thursdays at noon central.
Listen and Slack with us in real time or wait for the recording to hit.
New episodes come out each Friday.
Find the show at changelog.com slash jsparty or wherever you listen to podcasts.
I'm Daniel Whitenack. And I'm Chris Benson. We host Practical AI, a show making artificial
intelligence practical, productive, and accessible to everyone. You'll hear interviews with AI
influencers and practitioners, and we'll keep
you up to date with the latest news and learning resources so that you can cut through all of the
hype. In terms of environmental sustainability, Microsoft has won numerous awards for that. We've
been carbon neutral since 2012. But the way we look at it is even if Microsoft was absolutely
perfect, there's only so much impact Microsoft as a company is having just in our own operations.
So how could we scale out even more?
AI for Earth was really our answer to that question.
By dedicating this $50 million over five years, that enables everyone to be able to partake.
New episodes of Practical AI premiere every Monday.
Find the show at changelog.com slash practical AI or wherever you listen to your podcasts.