Tech Over Tea - Future Of Open Source Image Editors | Graphite Editor
Episode Date: December 5, 2025

Today we have 2 developers from the Graphite project on the show to talk about the project that could very possibly become the future of image editors.

==========Support The Channel==========
► Patreon: https://www.patreon.com/brodierobertson
► Paypal: https://www.paypal.me/BrodieRobertsonVideo
► Amazon USA: https://amzn.to/3d5gykF
► Other Methods: https://cointr.ee/brodierobertson

==========Guest Links==========
Graphite Website: https://graphite.rs/
Github Repo: https://github.com/GraphiteEditor/Graphite

=========Video Platforms==========
🎥 YouTube: https://www.youtube.com/channel/UCBq5p-xOla8xhnrbhu8AIAg

=========Audio Release=========
🎵 RSS: https://anchor.fm/s/149fd51c/podcast/rss
🎵 Apple Podcast: https://podcasts.apple.com/us/podcast/tech-over-tea/id1501727953
🎵 Spotify: https://open.spotify.com/show/3IfFpfzlLo7OPsEnl4gbdM
🎵 Google Podcast: https://www.google.com/podcasts?feed=aHR0cHM6Ly9hbmNob3IuZm0vcy8xNDlmZDUxYy9wb2RjYXN0L3Jzcw==
🎵 Anchor: https://anchor.fm/tech-over-tea

==========Social Media==========
🎤 Discord: https://discord.gg/PkMRVn9
🐦 Twitter: https://twitter.com/TechOverTeaShow
📷 Instagram: https://www.instagram.com/techovertea/
🌐 Mastodon: https://mastodon.social/web/accounts/1093345

==========Credits==========
🎨 Channel Art: All my art was created by Supercozman
https://twitter.com/Supercozman
https://www.instagram.com/supercozman_draws/

DISCLOSURE: Wherever possible I use referral links, which means if you click one of the links in this video or description and make a purchase we may receive a small commission or other compensation.
Transcript
Good morning, good day, and good evening.
I'm, as always, your host, Brodie Robertson.
And you have no idea what I have just witnessed for the past hour
for this episode to get started.
It has been, it has been an experience to get here.
But now we're live.
And I was surprised by how many stars the project had on GitHub,
because I've never run across it.
So today, we have two of the people involved in the...
Graphite project, how about you introduce yourself and just explain what the project is?
Yeah, so now that we do have our audio setup and our visual setup, after the setup that you
were witnessing for us. Yeah, so Graphite. I'm the project founder. My name is Keavan, and basically
we're building what I would like to be the graphics editor for 2D graphics that we can all,
as an open source community, be proud of. And really, the aim is not just being an option, like a backup
option, but being the best option that can possibly exist in the entire industry.
That has been my goal for quite a long time, and since I started building it four and a half
years ago and getting this community together, it has really taken off, I would say, in terms
of its actual technical engineering, and we continue to make progress towards being more and more
of an actual application that people can regularly use and be happy about using, because our goal
really is to make something that has no compromises in terms of really anything, in terms of
usability in terms of its actual feature set, in terms of not only beating the open source competition,
but also the commercial competition in time. And really the goal here is to build what can be
kind of described as the superset or the generalization of all 2D graphics software that you can
ever conceive of, by building something that is unified in its grand vision. That's sort of like talking about
the grand unified theory of physics or something, where you have one
theoretical model to put everything else inside of.
We're building, essentially, you could consider it, in the same way that a game engine
allows you to build any game, a graphics engine that allows you to
then contain any 2D graphics workflow.
So the tools allow you to draw vector graphics or raster graphics, or do page layout or
animation or motion graphics or material design, or really anything like that, or of course
photo editing, image manipulation. It's like one engine with tools that
are, you know, artist-friendly, designer-friendly tools that manipulate the engine for you
and essentially create your graphics in a way that can be described with a single unified data
model and you can mix together raster and vector and animation and combine this all into one program
while still staying within the ecosystem of what is basically one unified editor. And we've also
started out just for knowing what exists today. We've started out with building that engine
and that's a big part of what Dennis here has been very heavily involved with from a technical perspective.
But we've also then had vector graphics.
So you think, you know, Inkscape, Adobe Illustrator, those programs being kind of the main focus for where we're beginning.
Because vector graphics is kind of the primitive from which you build up everything else.
It is kind of the most fundamental form.
You're describing things with curves and with lines.
And from that, you can then build up and say, oh, well, if I have this sequence
of lines that can describe a brush stroke, then a brush stroke can be raster, and
then you can render raster, and you can use that for masking, and you can use that for controlling
how you edit images. So we're starting out with the basics, and it also allows us to continue
to make the technology more and more mature with the overall engine, and then we'll be moving
more towards raster next year, and yeah, many things beyond that. Of course, animation, we have that
now as well, but that's going to continue to be more and more supported, so you can mix
together design, image editing, that sort of thing with, you know, throw in some animation while
you're at it. It's all pretty unified. If anyone couldn't tell, Kivon has been the one who's
been doing the videos on the updates. He's very good at talking on camera and see. Dennis, introduce
himself. Yes, continuing the introduction. I'm Dennis. I studied computer science in Germany,
and we're also currently recording here in Germany.
And if you couldn't tell, Keavan is the one with the vision,
and I'm here to make that actually possible,
to make it feasible, tractable, to compute this.
And there's a quote from a German politician,
"those who have visions should see a doctor,"
and I'm here to actually make the visions a reality.
after all his screen
name is true doctor
so he is of course the true doctor
Oh, was that the end of your intro?
Oh well, I thought I might keep it short.
I have a peppermint tea here.
And on one of my devices
I use Arch, by the way.
The other one is currently on NixOS.
And yeah.
okay and one of the things
that always annoyed me like I used
to love photo editing and was using Adobe Photoshop,
but then I moved to Linux.
And it just wasn't as like it didn't bring me joy.
It wasn't as nice.
So I've been longing for some alternative
to get back into like creating.
And that is the vision and the reason why I joined
the Graphite project.
I guess similarly to add on to my background a little bit
since I glossed over that part,
since probably like elementary school.
Back then, my computer lab in elementary school
had four computers off in some room
that had Flash on them,
as well as some really old version of Photoshop.
And going from like turning my classmates' hair purple
and that kind of like, you know, basic stuff up to doing some,
you know, within a few years, doing some freelance design work for like some book covers.
And, you know, just really teaching myself computing in general
through the means of creative software.
That has been sort of my upbringing and how I, you know,
became a programmer and a product designer and a graphic designer nowadays. But that has been sort of my upbringing,
making Flash animations, and then eventually that moved into, you know, also little Flash games.
But then that moved into some Unity development, some web development, and that's kind of just been
my background. I've generally been motivated by kind of frustration with other software, whatever
it might have been. Like, back in the day, I was learning to do app development
through frustration with iTunes being a really terrible way of playing your local music
collection. And I never ended up completing that project because I restarted it like five times
as I learned more and more and got better ideas for the technology stack. But yeah,
building basically a better music player app that has consumed a few years of my life as I kept
restarting the project and improving the design. But for me, product design has always been
sort of an intuition. It's always just kind of clicked for me. It's like I see a bad design. It's like
it's so obvious to me in my mind about how this could be improved with things that seem like
they should have been obvious design choices, but I guess they never occurred to the designers
or they had some sort of budget or time or other sorts of technical limitations, and they were
not able to achieve those better approaches that seemed obvious to me. But that's basically
through frustration with other software. That has been my motivation to build and teach myself
all of the different skills for both engineering and designing, yeah, better software.
That is, that's what drives me.
Okay.
So where did the project start?
So you said it's been about four and a half years, yes?
That's the time that we began developing it.
I have had the ideas rolling through my head because also I forgot to mention I've been using Blender
since 2.49, quite a long time ago, right around the era before the open movie project
Sintel came out.
So, yeah, quite a while ago for, I guess, probably many people who have joined Blender in its more mature days. It certainly existed plenty before I started using it as well, but I was using it in like middle school to do my middle school film project.
We had to do like some sort of film projects, and I animated everything.
But, yeah, so Blender has always been a really big inspiration to me.
Even back then when certainly its quality wasn't nearly as good as it is today, now it is truly industry leading.
But I always thought that, you know, despite the difficulties of its UI, I never thought it was really that bad, simply because 3D software is inherently complicated, whereas 2D graphics software really doesn't have to be inherently complicated. There are many ways of simplifying it and keeping it reasonably simple. And I've then, you know, grown up using Photoshop and grown up using Illustrator. Well, Photoshop's a better example of a better user experience. Illustrator doesn't quite manage that same bar. But Photoshop, I would always describe as being pretty intuitive, like,
There are technical things you have to learn, and you have to, you know, you have to find a solution to a problem that you have
using a certain set of operations that you have to learn over time. But once you do that, you can really do anything.
And they're not inherently that complicated. They're not buried behind too much UI.
Any technical field is inherently going to be technical and complicated, but it presents the information in a way that the application doesn't get in the way.
Yeah, exactly.
So the goal really is to have an application that can stay out of your way, but also be enabling whatever you might need to get done.
So that's sort of in my design philosophy, and I've always really appreciated that Photoshop has done a really good job with that.
And I have quite a lot of respect for the software, because it is in my book, quite awesome software, actually.
It has limitations, and those are some of the limitations that have inspired my design process throughout the years.
So we were talking about the timeline. Four and a half years ago,
we started coding this, but since I started using Blender, I don't know, 15-ish years ago, whenever 2.49 was,
since then, I've been sort of like, what if there was an equivalent to Photoshop that was also free and open source like Blender was?
And, you know, didn't think too much of it in the days, but then over time, you know, ideas kind of percolating in my head and figuring out what that could be.
Eventually, I started realizing, okay, the main limitation with Photoshop is that even though it has support for a lot of non-destructive operations, and we'll talk a bit more about non-destructive for anyone,
who's not familiar with what that means, but non-destructive operations are really, like,
that is the pinnacle of what graphic design, you know, any kind of creative workflow should
be about is supporting non-destructive editing as much as possible. And Photoshop goes a pretty
good way towards that, but it has limitations. You at some point have to start making destructive
operations, which means you have to bake in your decisions and you can't go back and make changes
after that point. And my entire sort of philosophy was what if it was possible to use nodes
that's really popular in 3D software with shader editing and substance designer style material
editing. And, you know, I didn't know about it at the time, but Houdini is a really good example
of another professional application that's frequently used for VFX in Hollywood and for game
development, creating like simulations and creating, you know, procedural town or city generation
or, you know, many, many combinations of assets,
creating like a process, creating a workflow, a procedure,
like an algorithm to generate whatever you might think of,
whether that's a building generator
that can generate any building based on any floor plan.
You're basically taking data as input
and producing some sort of final output
based upon, you know, some final asset as your output.
The idea I thought of, like, it seemed really obvious to me:
can we take the ideas from 3D,
which is node-based generation that allows you to make non-destructive as the core idea of the
workflow, non-destructive editing. And can we take that and put it into the 2D context? And no one's
done that. I have no idea why. I mean, I guess the answer is that we're finding out here. We knew
it would be hard, but we're finding out that it is even harder than we might have even thought
it would be, which we thought it would be quite hard. But it is ultimately, like the goal here is
to build this non-destructive procedural graphics engine using a node-based system to,
encode your artwork while, and this is the really, really important part of our core design
philosophy here, making it wrapped up inside of an editor that doesn't feel like or even
doesn't expose the fact that you're using nodes at all unless you choose to opt into that
complexity. And it's not an opt-in, like you have to permanently flip a switch and now you're
stuck with that complexity. It's a, you're using layers like you're used to in a regular
graphics editing software. But at any moment, if you feel like it, you can go just look at
the node graph, just open the button that brings you to the node graph, and you can
start modifying nodes and then you can just close it and never look at them again and it doesn't
opt you out of layers or anything. So the idea is that you're not stuck with it. If you choose to
opt in, which is the case for some software, you have to like permanently say, I want to go out of
the basic workflow and move into the node-based workflow. You never get to go back at that point.
The other part is that for users who want to work with a traditional editing experience, they
never have to think about the nodes. The tooling abstracts over everything that you actually interact with.
The nodes are done in the backend, visible if you choose to open them, but otherwise never
a thing you have to concern yourself with if you're a user who is not familiar with that complexity.
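To make the "layers are just a view over the node graph" idea concrete, here is a minimal sketch in Rust (Graphite's implementation language). Everything in it, the `Node` enum, the node names, the `layer_names` helper, is a hypothetical illustration, not Graphite's actual data model:

```rust
// Hypothetical sketch (not Graphite's real data model): the document is a
// chain of nodes, and the "layers" the artist sees are just a view derived
// from that graph, so neither representation locks the other out.
enum Node {
    Rectangle { width: f64, height: f64 },
    Fill { color: &'static str, input: Box<Node> },
    Opacity { percent: f64, input: Box<Node> },
}

// Walk the node chain and produce the layer list shown in a layers panel.
// Editing via layers or via nodes both mutate the same underlying graph,
// so opening the graph never opts you out of the layer workflow.
fn layer_names(node: &Node, out: &mut Vec<String>) {
    match node {
        Node::Rectangle { .. } => out.push("Rectangle".to_string()),
        Node::Fill { input, .. } => {
            out.push("Fill".to_string());
            layer_names(input, out);
        }
        Node::Opacity { input, .. } => {
            out.push("Opacity".to_string());
            layer_names(input, out);
        }
    }
}
```

The point of the sketch is only that the layer panel is derived data: there is one source of truth (the graph), and the traditional UI is computed from it on demand.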
And we should probably at some point also address your question.
So you were asking about the development timeline.
So four and a half years ago, was that when we started the...
Yeah, the February Rust GameDev meetup in 2021, I believe.
So basically, the origin story is Keavan made a design mockup.
Yeah, that was actually started a couple of years before.
So I got to reuse some college credits, basically. I was in my last year or so before graduation,
and got to, like, get some credits basically to get out of college in time, while at the same time putting hundreds of hours into doing the design mockup and moving that significantly forward, to the point where it was something we could actually use as sort of a guiding light towards building the system.
Exactly. And there is, or was, the Rust GameDev meetup, which was just streamed on Twitch. And Keavan went on there and presented his vision for Graphite. We didn't really have much functional code then. But I happened to watch that live stream and got engaged with the project, joined the Discord. And yeah, since then, I've never stopped. And we started laying the foundation, building the editor.
And it's been going great since then.
Yeah, a huge amount of technology and a huge amount of evolution of basically how do you take shortcuts in some areas that allow you to progress in other areas because you can't do everything at once or it's like painting a painting.
You can't paint the tiny corner of the painting and then move on to the next little corner next to that.
And eventually after 20 years, you have a completely highly detailed painting finished.
You have to take shortcuts, draw sketches and then start filling in whole areas.
of basic paint, and then eventually you can start adding more and more detail, but you can't
just start with everything at once.
So software development really is about where can you strategically take shortcuts, and strategy
really, really important part because there are many projects that fail based upon not having
the right strategy or the right leadership to get that kind of thing figured out in a way that's
able to execute in a way that's simultaneously useful to users in the short term and in the medium
term and in the long term, but also able to produce something that ends up having not painted
itself into a corner and get stuck with permanent decisions. And one of the really, really important
philosophies for me is that we don't want to ever permanently paint ourselves into the corner
of being unable to make a certain decision later on. The lofty goal, and I realize it's very
lofty, is that we never want to say no to a feature. And I don't mean feature creep. There's a very
big difference between feature creep, which is bad, and having the ambition of
making something generalized enough to be the best possible editor based on anyone's definition,
anyone's reasonable definition of what could be considered the best editor.
Because it can be generalized enough in its overall vision to support every use case, because
it is ultimately a programmatic engine.
In the same way, a game engine, as I mentioned earlier, can be considered
generalized enough to support any game, or at least a sufficiently powerful one can.
Like GameMaker, I think, is focused more on 2D, so you probably wouldn't be able to make a 3D game in it.
We want to make a generalized enough engine, like, for example, Unity, that supports
2D and 3D.
I'm not saying that Graphite is intending to support 3D, well, maybe in some little ways,
but it's not ever intended to be, like, truly a competitor to, like, Blender, for example,
because Blender does an amazing job at what it already does.
Right, there might be some value in importing your model and then being able to rotate that,
for example.
Exactly.
Like, there are cases where you might want to model some sort of sculpture, let's say,
for example, in Blender, and then import it
and turn it into vector graphics in some way
where you're having these different
like petals or flaily things
that could look nice in a 2D context
but you ultimately have to design it in 3D.
That's an example of where
we would probably have some sort of model import
or something like that, but yeah, that's down the road.
And basically, if there's one thing that Graphite does not have...
This is the quote.
If there's one thing that Graphite does not have,
it is a lack of ambition.
Yeah, that has been the much-pinned quote
that frequently comes back because essentially, as I was saying, the vision is to support all
possible visions within overall graphics editing or 2D graphics editing.
Yeah.
But yeah, we don't make any technical decisions that preclude the ability to eventually move
towards that grand unified goal of everything, of supporting everything.
Do you have more questions?
Yes.
Otherwise, we're just going to keep on yapping and never get anywhere.
I love the episodes where I don't have to say a single
word and the guest just yaps the entire time.
Okay, so when I was looking around at graphite
and sort of seeing how people feel about it
and what they've said about it,
I'll often see discussions saying,
oh, graphite is kind of like,
it's in this same class, or it's going to be.
Obviously, it's still in alpha
and a long way away from being fully realized.
But it's in that same class of things like OBS,
which is, like, an industry standard
now for doing streaming and recording,
alongside Godot for doing
game development, Blender for doing
3D, this is the 2D
graphic side. And all of this
sounds great in theory, but
right now,
where is it?
How far along the
vision? How usable is the
software?
Is it something that you could
reasonably actually use
in a workflow, or is it still a long
way away from that point?
I would say, yeah, if we're talking about sort of the initial thing being by comparison to Inkscape,
where it's a vector graphics editor because that has been our primary focus thus far,
we are in a state where I would say for a beginner who is not familiar,
they might be familiar with Illustrator as like understanding vector graphics,
but they have not specifically used Inkscape to work through its quirks.
It is probably, well, this is based on the feedback we've had from many users who have reported back.
Yeah, so basically we're in a state where by comparison to,
basically vector graphics editing software because that is where we are at this moment
with our strategy. We have to focus on some things first. By comparison, Inkscape is kind of the
most similar program to what the feature set of Graphite is today, because we have to, you know,
build out the roadmap over time towards the eventual goal of
there being no end to anything that might be a limitation, because we'll have something unified
enough to support all use cases. But for now, we have to support vector graphics editing.
And of course, many of those use cases, by the way, will be plugins or they'll be kind of
user-authored things, but we're building an engine powerful enough to support everything within
the use cases of what someone might want to author themselves through the plugin system.
But yeah, so Inkscape is, I guess, the most fair comparison at the moment.
And the feedback we've gotten from many users who have been using Graphite is that,
especially if they're already familiar with Illustrator or other vector graphics editing software,
but not specifically familiar with the UI and the idiosyncrasies of Inkscape,
they've found that Graphite is more intuitive. But we, of course, do still have some limitations, but also some advantages.
So the node system allows kind of like an escape hatch, where you do have to get technical these days to use it as an escape hatch, to re-implement some features that we don't have but Inkscape does, because it's been around for, what is it, 25 years or something, maybe even 30 years.
You know, they're really robust, with, you know, just many features over the years
they've been able to build that bring it up to industry standards.
So, yeah, one example will be text on path.
So you draw a path, and then you can draw text that kind of follows along that path.
And we just haven't gotten around to building it.
Actually, it's the kind of thing where,
if anyone wants to build it right now, there's nothing blocking it.
You can join our community and get involved in building that as a proper user-facing tool.
But we do have the node system that allows you to recreate those.
So you could make it work on your own.
And there's dozens of examples of like, you know, we don't have like a system of using patterns to fill objects,
but you can make the pattern yourself
and then clip the pattern on top of another object.
So different cases where we don't have the direct feature,
but we have enough tools to go recreate it yourself.
But for regular artists, we don't expect you to have to know those technical details.
But on the other hand, we do present a UI that is generally more familiar
and generally like the kind of thing that,
as long as you're moderately familiar with creative software in general,
we have sort of like a regular unified layout that you'd expect
from any professional graphics
editing software. And most of our users have found that that is a lot easier and more
approachable than Inkscape. I've personally not managed to ever open Inkscape and do much more
than draw a rectangle because it's just, I found its UI to be a little bit too full of quirks.
I just can't quite get my head around it. I'm sure if I watched some tutorials and
really, you know, spent the time to get familiar with its quirks, I could work around that
and get familiar and useful and used to it. But I want software to be approachable to a
beginner. And yeah, I have really high standards for software design. So as a result, I've not
really found that Inkscape meets my needs in those regards. And I guess more on the status of how
mature is it currently? So we are still in alpha. And I guess one of the biggest drawbacks,
and one of the things we want to work on next, is building a stable document format. So
currently we can't guarantee that documents will work in the next version of Graphite.
The skills you learn while using the app still transfer,
but the documents might need some manual tweaking to get working in new versions.
That is something we will definitely need to fix before we're going to beta.
And the timeline for that we're expecting is around the end of the year or early next year for that.
That is basically one of our imminent focus areas.
Yeah, that would be one of those things where I would need that to be in place to legitimately use it.
because I've thought about trying out doing, like, my thumbnails in it
just to see how that would go.
But I have, like, a layout document that I've got made,
and I fill in things with what I need,
and I have to make sure that, you know, it always keeps working.
I, you know, for all the problems that Gimp might have,
it's a very stable format.
I can take a document from 20 years ago.
It's still going to open just fine.
But one thing that could work well at some point is, like, one of the nice things about using this node graph is that we basically run the program.
So you edit your image, you can edit your thumbnail in Graphite, and then expose these inputs; for example, the title would be an input.
And we do also have a CLI application, and what you could feasibly do is feed in these inputs,
like the title, for example, as a CLI argument, and it will just render the document for you.
And the node-graph-based approach lends itself naturally to this sort of batch processing
or CLI argument filling. And it's actually kind of nice, because in other software, if you want
to do things like batch processing, say you have a black spot on your sensor and you need to
correct that in 500 images, you have to use this extra
automation built on top of the editor.
It almost always ends up being like a macro system built
as sort of like a bolt-on feature in many
other graphics editing programs, and it's never really
part of the core architecture.
An example with GIMP,
for example, is that
there's like a Python interface for it
where you can just say: do this action, do this
action, do this action. And it works,
it's a perfectly fine system,
but it is still
effectively just a macro system.
And what we do
in Graphite is that,
as you're using the tools and modifying the document, your document is sort of like a program. The node graph is a visual representation of this document program, and the program says: given these inputs, process them in such a manner that we get this output result. And what you can do is, if you do the operation for one image, we can then take
this as a program. We can basically just copy the node that applies this correction and apply
it. Like, you can sort of think of it as having a "read every image in folder" node, then we
map it by applying the operation that we did manually in the editor, and then we save it back
to the disk. So we don't need to build the batch processing, because it's just an emergent
property of our system.
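That emergent batch processing can be sketched in a few lines of Rust. This is a hypothetical illustration, not Graphite's real API: the "node" is reduced to a pure function over pixel data, and the folder of images to a `Vec`, to show how mapping one correction over 500 images needs no bolt-on macro system:

```rust
// Hypothetical sketch (not Graphite's real API): because an edit is itself
// a node, i.e. a pure function over image data, batch processing falls out
// of the system for free: just map the same node over every image.

// Stand-in for a real healing/inpainting node: replace the bad pixel at
// index `spot` (the sensor's black spot) with its left-hand neighbour.
fn heal_sensor_spot(mut pixels: Vec<u8>, spot: usize) -> Vec<u8> {
    if spot > 0 && spot < pixels.len() {
        pixels[spot] = pixels[spot - 1];
    }
    pixels
}

// "Read every image in folder" -> map the correction node -> "save to disk".
// Here the folder is modelled as a Vec of already-decoded images.
fn batch_process(images: Vec<Vec<u8>>, spot: usize) -> Vec<Vec<u8>> {
    images
        .into_iter()
        .map(|img| heal_sensor_spot(img, spot))
        .collect()
}
```

The design point is that `heal_sensor_spot` is the same function whether it runs once interactively or 500 times in a batch; the editor and the batch runner are just two callers of the same node.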
We're essentially creating what I think could be described as the first programmatic
2D graphics editor, where everything else has been built with a specific set of features
in mind.
It's a graphics editor that has been built to edit raster images.
That's an example of what Photoshop is or what Gimp is.
Whereas what we're building is a programmatic graphics editor, where the idea is it is very
literally, an IDE for the code of the language that is the description of your graphics.
Kind of like how you would write an ImageMagick command, or you would write almost like an
OpenCV command or something, to make processing of images happen, except you do that in a purely
visual way. You don't use code, but you actually edit with tools that allow you to draw a
rectangle and use the pen tool to move around a shape and then use the brush tool to draw, you know,
to draw some brushwork, except that is actually coding on your behalf behind the scenes in a way
that allows you to think like an artist, think like a designer, but end up with code being produced
on your behalf. And that code is not visual, it's not textual code. It is the nodes that get
created on your behalf, where nodes are basically functions that get certain parameters. So, for
example, the node that generates a brush stroke will have the mouse positions of where you moved your mouse,
or the pen positions of where you moved your pen, and also how much pressure and angle was involved
in the pen input, and that can then render into a brush stroke.
And the coolest part here is that we're not baking in a specific resolution.
We'll get into this system in a bit more depth later,
but basically the idea is that we can re-render your artwork at a different resolution
and re-export at any resolution you care about at the time that you're exporting a file
or even that you simply zoom in.
So your brush stroke, you draw by recording your mouse movements,
and then you zoom in more, and now you don't have a pixelated brush,
because we re-render that brush again.
So the idea is you're creating a program, and there are no limitations such as a fixed resolution.
Resolution is basically a unit system that allows you to easily have a document resolution,
but that document resolution is not going to result in pixelation, as you would expect it would in Photoshop or in GIMP.
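A rough Rust sketch of that resolution independence, again with entirely hypothetical types rather than Graphite's real API: the stroke is stored as pen samples, and rasterization happens on demand at whatever zoom is requested:

```rust
// Hypothetical sketch (not Graphite's real API): a brush stroke is stored
// as pen samples (position + pressure), never as baked pixels, so it can
// be rasterized again at any zoom level.
struct PenSample {
    x: f64,
    y: f64,
    pressure: f64, // 0.0..=1.0, widens the stroke where you pressed harder
}

// Compute how many device pixels wide the stroke renders at a given zoom.
// Calling this again with a larger zoom yields a fresh, sharp rendering
// rather than an upscaled (pixelated) bitmap.
fn stroke_width_px(samples: &[PenSample], base_radius: f64, zoom: f64) -> u32 {
    let right_edge = samples
        .iter()
        .map(|s| s.x + base_radius * s.pressure)
        .fold(0.0, f64::max);
    (right_edge * zoom).ceil() as u32
}
```

Zooming in just calls the render function again with a larger `zoom`, which is why the document resolution is a unit system rather than a hard limit.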
Right, so normally what you... Go on?
Yep.
Yeah, the latency.
You guys don't have latency between YouTube.
This is weird for me.
Usually when I have guests, you guys are the ones talking over each other.
Please go on.
I was going to say,
normally when we're talking about
2D graphics software,
there is a clear distinction
between vector graphics
and rasterized graphics
where, you know,
you can make artwork
in a vector software
like Inkscape,
but this is a different thing.
Your workflow is all vector
and then you rasterize it
when you're doing your final export. Or then you look at something like GIMP or Photoshop,
it's entirely done through rasterized graphics.
But here you're saying, if I understand correctly, you could do a brush stroke and then say,
I don't like exactly how this is done and I want to, you know, add in a curve to this
and you don't have to redraw that.
You could go into the node system and then modify just that at the code level if you're
someone who feels like going that technical.
Or you could like resculpt the brush, for example.
as well. You could use like a sculpt tool to just push around the brush. I assume you could change
the brush to another one. Exactly. Yeah. Change the radius. Change it to like a watercolor brush
instead of an acrylic brush or anything like that after the fact. That's what it means to be
fully non-destructive is you're encoding the decisions that went into the creative process. You're not
baking in the decisions at the moment that you drew them into the pixels of the artwork because
there are no pixels. There's only a program. Yeah. With this idea fully realized, I'm not
artistic enough to understand
what it could be used for, but at a
surface level, I think, oh, you could
draw an image and
then completely change
the style you have it in
without redrawing that image.
And I'm sure people who are artistic
and have, like, a technical background
can think of far more interesting things to use it for,
but that in and of itself
is already, like, really cool.
Yeah, and also, like,
with most other graphics software, you pick a document resolution
up front. So the first choice you have to make is to pick the resolution. Maybe you're on
constrained hardware. You only have a laptop available, maybe a school laptop. And so you choose
a smaller resolution to work on your image. And then the image becomes really cool and you want
to export it and print it on a billboard. And that just doesn't work because now you don't have
enough pixels. And what we do in graphite is basically you only pay for what you use. So we can
render the image always at your current viewport resolution. So, like, we record the movements
and we only render what's currently visible. So you can always
export it at a high resolution and that's just an export that's going to take longer, but it doesn't
impede your workflow. That's one of the like cool visions of this adaptive resolution system.
Or the other funny thing you could do is you can go hide Easter eggs inside a piece of artwork
by zooming in to, like, one million percent and hiding a little doodle or something somewhere
you'd never expect it. Yeah, you know how sometimes on YouTube you can find these
infinite zooms into artworks? That is something you could also do. Yeah. You start out at, like, a million percent
zoom and you just start drawing, and then you just start zooming out a little bit as you go and just
continue to draw more, because the brush system, again, is only recording your mouse movements
and it's rendering only when you're viewing it. Yeah, I had, uh, cut you off before. Um, was there
something... do you still remember what you were going to say before, or did you lose it? Um, I mean, we can,
I guess we can. So what I was sort of, um, starting to explain is how this works. Like, for example, I
draw a box.
This is going to be a bit more technical, but...
If I draw a box, that creates a node.
And that node is just a node that returns a rectangle.
We then feed that node into another node, which gives the rectangle a stroke.
So, like, an outline.
We then feed it into another node, which gives the rectangle a color.
A fill color?
Yeah, a fill color.
And that's how we can...
Like, you did your operation: you drew a rectangle.
But now we translated it into this set of, like, operations.
And if you want to change the color, you still have the same geometry.
You can reuse the geometry from the rectangle node and just go to the fill node and change the fill color.
And that's sort of the mental model we work with in Graphite.
And of course, the important part is that the tools did that for you.
You could go to the node graph, add your own rectangle node, then add your own stroke node and your own fill node,
and set up the color you want for the fill and the stroke, set up the width and height for the rectangle,
but the tools can just do that for you, and it's much, much faster to just draw it like you would in any other graphics editing program.
You just draw a rectangle. It's truly that simple, and the system adds that node, that set of nodes, for you, and it becomes a layer.
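The rectangle-stroke-fill chain described here is, at heart, function composition. Below is a hedged sketch in Rust; `rectangle_node`, `stroke_node`, and `fill_node` are invented names for illustration, not Graphite's real node graph.

```rust
// Hedged sketch of the node idea: each "node" is just a function over shape
// data, and the rectangle tool wires a small chain of them on your behalf.
// These names are illustrative, not Graphite's real node graph.

#[derive(Debug)]
struct Shape {
    width: f64,
    height: f64,
    stroke: Option<(String, f64)>, // (color, stroke width)
    fill: Option<String>,
}

// A node that generates geometry.
fn rectangle_node(width: f64, height: f64) -> Shape {
    Shape { width, height, stroke: None, fill: None }
}

// A node that gives the shape an outline.
fn stroke_node(mut shape: Shape, color: &str, width: f64) -> Shape {
    shape.stroke = Some((color.to_string(), width));
    shape
}

// A node that gives the shape a fill color.
fn fill_node(mut shape: Shape, color: &str) -> Shape {
    shape.fill = Some(color.to_string());
    shape
}

fn main() {
    // What drawing a rectangle does behind the scenes: compose the nodes.
    let geometry = stroke_node(rectangle_node(100.0, 50.0), "black", 2.0);
    // The fill node consumes that same geometry; changing the color later is
    // just re-running this one node with a new parameter, geometry untouched.
    let layer = fill_node(geometry, "red");
    assert_eq!(layer.fill, Some("red".to_string()));
    assert_eq!(layer.width, 100.0);
}
```

The design point is that editing the fill never touches the rectangle node, which is what makes the workflow non-destructive.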
The important part is we still have the layer system as well.
So the nodes feed into a layer. Essentially, you can think of layers like this:
in other graphics editing software, they're just internally represented in whatever internal data structures the software developers created for their use case of, for example, vector graphics or raster graphics.
In those cases, they've built some internal data structure, but you don't have any control of what that is, and it's just storing whatever you drew.
But in our case, the layers are more like outputs for the data that the node graph generates that either you don't see or you see, depending upon whether you open or don't open the node graph.
So they're more of outputs for that data that gets generated along the way, and you have, therefore, more control over that.
But at the same time, it could be exactly equivalent to other software if you choose to never open the node graph.
And I think one of the sort of natural transitions we could do here is that we did already talk a bit about the sort of adaptive resolution idea.
So we don't pick a resolution up front, but rather we record, like, basically all the
user inputs, and we can render it at a different resolution.
What we can also do is that we, and I did also mention this, we only render what's
currently visible.
So previously I described that basically the entire thing, like the entire document becomes
a function with inputs and outputs.
So it's a program that takes some input and returns an output.
One of those inputs is the area you're currently looking at and the resolution.
So the current viewport position is an input to the program.
And in that sense, graphite becomes basically more like a game engine.
Because on each frame, we have to render the artwork with the current camera position.
You could sort of think about visualizing this as a sort of 3D map.
If you place your image in 3D space, we have a rectangle in 3D space and you can then
move your camera.
So in a sense, we're much more similar to how a game engine would work.
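The "document as a program with the viewport as an input" idea can be sketched like this, with invented names (`Viewport`, `render_document`) standing in for whatever Graphite actually uses:

```rust
// Hedged sketch: the whole document as a function whose inputs include the
// viewport. `Viewport` and `render_document` are invented names.

struct Viewport {
    width: f64,  // visible area, in document units
    height: f64,
    zoom: f64,   // current zoom factor
}

// Conceptually, each frame Graphite evaluates something like
//   pixels = document_program(viewport),
// so only the visible region is rendered, at the current resolution,
// much like a game engine rendering from its current camera position.
fn render_document(viewport: &Viewport) -> (u32, u32) {
    let px_w = (viewport.width * viewport.zoom).round() as u32;
    let px_h = (viewport.height * viewport.zoom).round() as u32;
    (px_w, px_h)
}

fn main() {
    let vp = Viewport { width: 800.0, height: 600.0, zoom: 2.0 };
    // Zooming changes an *input* to the program, so the same artwork simply
    // re-renders sharper instead of pixelating.
    assert_eq!(render_document(&vp), (1600, 1200));
}
```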
That does naturally lead us to one of the biggest concerns we always have to worry about,
which is: how do we make all of this fast? Right? Because it's nice to be able to do this,
but if you need a 5090 for it, it's not exactly... yeah, it's cool, like, it's technically cool,
but it's not viable, and we want it to work on low-end, like, Chromebooks and things in classrooms.
In fact, the classroom market, even today, actually, we've been looking through our analytics and seeing about, I think, around 20% of our visitors are coming from classrooms during the weekdays.
So it really is actually an important market, and that's only going to grow over time as, you know, the skills that are taught in classrooms need to be more and more technical and more focused on creativity as opposed to just the traditional subjects.
So I expect that to grow, and we want to make sure that we're as accessible as possible to all users, whether that's on Chromebooks or whether that's on a 5090 with a nice,
beefy gaming computer.
We want to support everything in that entire hardware range between those.
And of course, you mentioned like basically a server GPU there as well.
We also want it to be able to, if you're like, if you are an actual film studio,
which normally has to, you know, produce graphics, like 3D graphics that usually go
off to a render farm, we even want to support a case where because we're basically building
this generalized graphics engine in the same way you think of it as a game engine,
we want to be able to generate something that is so computationally expensive,
if you're crazy with it and you're generating, you know, like you're importing like a terabyte of data that's used to do data visualization or something.
We want to support a case where we literally are the software running in a render farm to render frames offline in some case where it takes 15 minutes to render every frame like you would for a 3D movie or something.
Right, right.
So we really want to support the entire gamut between actual render farm hardware down to Chromebooks.
And actually, talking about where you use Graphite:
currently, we have a web app.
And we are also working
on the desktop release.
That's going to come soon.
And we'll maybe talk about that a bit later.
But first, the web app.
We, it does use JavaScript.
We do use Svelte for the front end.
But all of the logic is written in WebAssembly.
Oh, well, it's written in Rust and compiled
to WebAssembly.
So our main editor has always, from the ground up, been built
with having a native desktop app in mind.
And we use Rust for basically everything that matters
and only use the web for the UI,
because the web is just good at UI.
And that's sort of the tech stack we're building towards.
And yes, I needed to mention that it's written in Rust
because we are obliged by the Rust Foundation
to mention this.
We're not actually obliged by the Rust Foundation to mention this.
It's also important that, because I know people are going to see it has a web app and immediately
write it off, it also is important to mention that that is what was done first, that is not
the final goal of the project.
And it's not...
I talked about strategy earlier, and it's really about the strategy of finding what battles
to fight at what time in the roadmap, while eventually knowing that you will win every single
battle.
And we even have a roadmap towards eventually getting rid of any web technology at all further
down the road by having our own render engine be so sophisticated that it can render our own
UI in itself, where you can update the appearance of a button in your own editor while the buttons
all throughout the rest of your editor update live while you're editing that one button.
So it's, yeah, that's the other thing.
The intention is also not to be, I know people are going to immediately think because it is
on the web right now, it's going to be an Electron application.
Yeah, no Electron.
No Electron.
To be clear, it is built as a native app, using a native language that gets compiled into a format that simply is capable of being hosted within a browser.
But it is a native app running in a browser, which technically, based on how you want to define
things, you could call it a web app. But I think of a web app as more of something that's written
in JavaScript. And aside from a tiny bit of UI code, it's like 5% of our code base that just
accounts for like, these are the buttons you can press. These are the drop-down menus you can
select from. Those are just really lightweight things. We've intentionally built everything to be
as lightweight as possible.
We're not really using any libraries,
aside from like our UI framework of Svelte,
because we know that libraries typically
add like 10 times as many elements to the HTML DOM
compared to what you need, and we don't want that.
So I want to build my UI that I designed
as part of the UI mock-up in as few HTML elements as possible,
using as few lines of JavaScript as possible or TypeScript,
just to be that like 5% of our code base that involves
just the things you see, the things you click on,
the things you interact with surrounding your viewport,
knowing that you spend most of your time in the viewport, drawing and painting,
not actually touching buttons and things,
because those ultimately are not latency-sensitive.
They are, like, the UI, if it's written well,
if it's not written with thousands and thousands and thousands,
maybe hundreds of thousands of lines of JavaScript,
like you might expect Slack or Google Docs or Teams or, you know, whatever,
desktop applications that tend to be, or Discord, yeah,
they tend to be rather bloated and have a high latency,
and that's just because they're running through a tremendous amount of JavaScript,
whereas in our case, we are running through maybe, like, at most, 100 lines of JavaScript
when you click on a button or something.
And that's just instant, like nanoseconds, not milliseconds.
And then it's running native code after that point.
In this case, in WebAssembly.
But as soon as we have that desktop app, it's running directly Rust code, so compiled code.
And the reason we say Rust is because it's important to know it's not JavaScript.
It's Rust, which you can basically think of as equivalent to C++.
It's just we're starting a brand new project, and Rust is the more modern language to use instead of a 25-year-old C++.
So that's why we chose that.
It's mostly for maintenance reasons that we can make really large-scale applications
and have the compiler make sure that it knows that we are not making mistakes
that C++ would have a harder time catching.
I was about to ask you why the choice of Rust, but you pretty much answered it, though.
It really is just about maintaining a very large-scale code base.
It's easiest to do that, in the modern day, in Rust.
We care about high-quality code because this is going to live for decades.
And as a result, it's going to do a better job than C++ will at making our lives easier.
And another reason is that we need basically the full range.
We need both low-level implementation, and we need to be able to optimize things because
it's going to be very performance-sensitive.
And we also want to use one language and have the full range to application scale and
have all the abstractions we would want from the language.
It's kind of the first programming language that allows you to have low-level speed,
like basically C++-style low-level systems programming with high-level conveniences,
which is not something that C or C++ can deliver on.
They don't really provide those high-level conveniences where we get to mostly feel like
we're writing in something like Python or JavaScript or Java where it's like a high-level
or C-sharp where it's high-level conveniences, and we get to therefore maintain a really large-scale
code base and not have the burden of, you know, worrying about when things just totally
collapse because the Jenga tower you've been building for the past 15 years has just collapsed
from beneath you, as it would in C or C++ where the moment you've got to go refactor some
big system, it just becomes this nightmare of finding bugs for months and months and
users hitting crashes because you changed the foundation out from underneath you.
And that's also, like, if we get new contributors and some people just add one feature
that was annoying them and they make a fix for it,
it's easier for us to have confidence that their implementation won't break other things.
If we know that it compiles, we know it doesn't introduce any undefined behavior. That's always
a good start, and then we just need to check if the logic actually works.
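As a generic illustration of the compile-time guarantee being described here (this is not Graphite code): Rust's ownership rules turn a whole class of refactoring bugs into compile errors.

```rust
// Generic illustration (not Graphite code) of the compile-time safety being
// described: Rust's ownership rules turn use-after-move bugs, which C++
// leaves to runtime, into compile errors.

fn main() {
    let layers = vec![String::from("background"), String::from("brush")];

    // Hand the data off to another part of the program...
    let moved = layers;

    // ...and the old binding is now unusable. Uncommenting the next line is
    // a compile error, not a crash discovered later by a user:
    // println!("{}", layers[0]);

    assert_eq!(moved.len(), 2);
}
```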
Also, the one last thing that can't be unaccounted for is that Rust as a language, because it's
newer and it's a little more, like, fancy and stuff, it ultimately attracts really smart people.
So Dennis is, like, actually a genius, and we would not have found, you know, starting the project,
we would not have encountered Dennis if it were not chosen, uh, if we were just using C++
or something that has, you know, heritage as a multi-decade language that
thousands or, you know, millions of people know. It doesn't have the community built around
sort of a niche of really, really smart people who have self-selected themselves into a
group of the people that we can ultimately recruit as open-source contributors.
I don't...
Oh, speaking of open-source contributors, I want to also remind people, because oftentimes when
they see our application and they're like, wow, this looks super professional, they write
off in their head that this is actually just an open-source project.
I want to take this moment to emphasize that we are an open source project, not a corporation.
We are not backed by anything except donations from the community.
Apache 2.0, is that the entire project? That's what it says on GitHub.
That is our license, yes. We want to make it as permissive as possible because we know that we want to be used by corporations.
We want to be an ecosystem standard, essentially. So we don't want to limit how anyone can use it, where, you know, if we were using a copy-left license like GPL, it would basically be harder to be adopted.
and we actually want to be adopted as much as possible
by as many users, including corporations as possible,
so that we can basically become this standard of the industry.
Is there a reason why that specific one,
as opposed to like an MIT or a BSD3 clause, something like that?
They're pretty much exactly equivalent.
Sure. Okay.
It's at the start of the alphabet.
There you go.
There's your reason.
Yeah.
Yeah.
I'm not sure if you had like any actual reason why it was specifically that one
as opposed to any of the other permissive licenses.
Yeah, I guess if I were doing it again now, I probably would have made it exactly a dual-licensed MIT slash Apache, but I just picked Apache for no reason at all. They're pretty much exactly equivalent.
And I think they both allow relicensing. Yeah, exactly. Yeah. There's no actual distinctions between the two except for like specific legal terms or something that aren't actually relevant.
Yeah, I think there's slight differences in attribution. That's pretty much it. But the actual permissiveness of
the license really doesn't change. Yeah, like someone could relicense the Apache code as MIT code,
I believe. So anyone else could just take it and then do that with it. Yeah. That said, of course,
with our license, our goal really is to make this the standard project. We want to be the clear
organization running the project. We don't want to go have 500 forks doing their own little,
creative, cute little thing. We want to make the software so flexible and extensible through our
programmatic engine, essentially, that you can build all of your use cases, because we support every
possible use case by generalizing everything in one program. So that way we really
don't want to have a bunch of forks all trying to compete with us in some way of like splitting
the ecosystem apart because we've seen other open source projects where they don't have clear
leadership and clear organizational standards. And they end up having tons and tons of forks
that ultimately kind of just have everyone not moving in the same direction. They have everyone
moving in like opposite or adjacent or you know, perpendicular directions.
WordPress drama. Yeah. So we ultimately want to be the one organization and we intend to continue
doing a really good job at running, you know, running the organization. But we basically would
like people to continue to contribute to the project and move, help us move forward. But that said,
we do have also a very permissive license. So if we ever, for some reason, decide to drop the ball
on that, then anyone can go take up exactly where we left off and continue on.
Yeah. One thing we didn't get to: if you want to contribute, if you're interested in
graphics programming and Linux, or maybe you want to try out the desktop app,
go to graphite.rs. And there's a button to launch the app or a getting started guide.
And yeah, it's all available today. I was going to ask, I don't think you said it during
your introduction. You might have, and I might have forgot. What is your actual educational
background, Dennis? I study computer science. Well, I'm currently doing
my master's in computer science at KIT in Karlsruhe in Germany.
And in my case, I went to Cal Poly San Luis Obispo and I did my undergrad in computer science
and then also got a master's in also computer science and then graduated a few years ago.
The idea now is like when you're early in your career, that's the right time to take a risk.
I'm probably giving away like millions of dollars in terms of opportunity costs that I can
be making right now in Silicon Valley, which is where I grew up and where I live. But I really do care
about the open source community and I want this to be my career. I would not be happy being
some individual contributor at some other software company. I want this to be the thing I do for my
entire career. So I'm kind of putting, you know, putting a lot of risk personally towards making
this happen, knowing that in the long term it's going to pay off for me making a career I'm
really happy with. And yeah, and hopefully that also means that we can get to a point where it's
a sustainable, like, venture once we eventually have some business models that are completely still
in the spirit of open source. We'll be able to, for example, allow you to host your documents
on the cloud only if you want to, because you want to, for example, switch between an iPad
to do drawing and your desktop computer if you want to mix between painting and not. But that's
not a thing we shove in your face. That is a thing that you could choose that if you read about
the fact that we support an account system that you could upload, we don't even have any servers
at the moment. But in the future, once we eventually have that, the cloud syncing between
devices is a way that we could have some sort of revenue. And once we expect to have millions
of users, you know, that becomes enough to have hopefully a sustainable business. I also mentioned
the asset, or I did not mention the asset store yet. I don't think you mentioned that, no.
But I mentioned plugins. Like, the goal is to be as extensible as possible. We're basically
making a package manager like npm or pip or crates.io.
Well, I guess a direct comparison, since we've talked about game engines before, is Unity's asset store.
Or the Unity asset store. Exactly. Yeah. And the idea is, you know,
really you're making something so extensible that people make all of these
reusable things. And there's a big market, I'm sure, eventually for, like, even in the Blender
community, there is, like, there are several different asset stores that are commonly used. They're not
officially run by Blender. I do think that's a missed opportunity by them to not capitalize on being
the official source of people selling assets to one another. And of course, giving them away
for free. I would expect most people to be giving away their assets for free. But if we do
payment processing, then I think it's fair to take a profit cut from that and put that back
towards the project, hiring people.
Our goals are just so ambitious that we need to have a giant team someday.
Yeah.
And it would be good to really have a business model that is sustainable instead of relying long-term
only on donations, which is a situation where Blender has found themselves in a favorable position,
because they do have large corporate backing, because they ultimately are kind of a standard.
They're used as a benchmark by NVIDIA and AMD and Apple for all their hardware and Intel.
So those companies back it because they want to see Blender be, like, the best
on their own hardware, but that's a position which maybe we can someday move ourselves
towards, but I don't think that's something we can rely on, and that's going to take a very
long time to reach that point. I also mentioned like we would act as a render server for render
farms. That might be something where if we need to have like an orchestration system across
a thousand different machines, we can maybe, because it's such a niche case that's only used
by big companies, we could maybe charge for like that specific kind of orchestration system where
no regular users need that. But we really are focused on keeping it so that anything monetization-wise
is not shoved in front of your face, ever,
and also not in a way that detracts
from the spirit of open source,
because this is really what we're trying to build
while still being able to not only rely on donations
longer term.
But at the moment, we rely entirely on donations
because all of our eventual business model options
can basically be things that we can't do in the next few years,
because they just require so much technological build-out,
and also a sizable number of users
before those become viable at all.
Like the asset store is probably the soonest one we could expect,
but we still have to build a good amount of technology.
And we have to actually start building a back end with hosting and infrastructure.
We have no hosting at the moment.
When you go to Graphite, you're visiting just a CDN, just hosting static assets,
meaning that we don't have to pay for anything like that because it's free to host
just purely a website with static assets.
But as soon as you get servers and databases and computation on the cloud,
and then it becomes expensive, and of course, that's something that we'll have to be able to support
and scale up and maintain load balancing,
and it gets all very complicated in terms of both maintenance
and cloud costs and things.
So that's going to be a transition that we probably are going to need to hire someone
to work on full-time, as part of allowing us to go from what we're at now,
which is just purely a client-side application,
to eventually having those kinds of infrastructure for, like, the asset store and stuff.
That will be a transition up in the future.
But yeah, that's kind of a word on business models and how we intend to make this project as big as it can possibly be towards, you know, making that ambition realized.
Well, since we're still on the topic of business models, what is the actual structure of the organization that runs the project?
Yeah, so basically we want to keep as little overhead as possible. So I've simply formulated an LLC, or organized an LLC, that I'm the sole owner of, and that just allows us to have a bank account so I can,
you know, buy, like, my flight here, for example, to Germany.
I'm normally based in California.
And so far, I've had to put in some of my own money to get us to this point,
but I'm hoping to get that money back through donations.
And also, Google Summer of Code gives both grants for our students, to basically hire interns
for us over the summer, and a bit of a stipend for our organization as well.
So with that plus donations, I think this year,
we might very well be slightly positive instead of slightly negative like we were in previous years.
But yeah, basically, I'm just the sole owner of the LLC.
And it's not really a, yeah, a company that's run in a for-profit way,
even though technically it's not a non-profit just because there's a huge amount of legal overhead and requirements and things to run that.
And I am not a lawyer. I am a programmer and I'm a designer.
And I don't want to even think about any of that stuff.
It's hard enough to just run the basic stuff.
We could always become some sort of non-profit foundation in the future.
It's just a matter of what kind of priorities make sense.
I think right now the priorities should be getting something that people are wanting to use
and doesn't have a constantly changing file format and, you know, I understand why you're in the position
you're in.
Like, it makes sense while you're doing it the way you are.
I know some people want it done perfectly from day one, but...
Yeah.
You know, I totally understand why you're coming on it from the direction you are.
Yeah, it's back to that analogy of painting a painting,
and it takes 100 years to paint a specific painting,
and you can either paint that 100-year painting square inch by square inch,
in its completed detail, which is basically impossible,
and people have to wait 100 years to see your painting,
or you could do incremental detail levels strategically working on specific areas.
Maybe you get to see the face before you get to see the body,
and the face is what you care more about than the body,
but you ultimately have to pick what you're working
on first. And we've so far picked, I think, a pretty good strategy; I wouldn't really do many things
differently if we were to have the benefit of hindsight. I do actually think our strategy has been
quite on point. But, you know, we've really focused on first building an editor, just like
something you can interact with at all. So the UI, the buttons, the tools, that kind of thing.
Then we went on to move towards building the graphics engine. So we replaced some of our temporary
vector editing tools, which were all just sort of rudimentary,
with ones that would use the node-based graphics engine.
And that graphics engine, this is graphene.
So graphite is the editor and the project.
Graphene is the engine slash language slash.
You can think of it like the Unity editor is Graphite
and the Unity game engine, like the thing that compiles your game
is the engine here and that's equivalent to Graphene.
So that's kind of what we're building here, two separate technologies.
and graphene is just as ambitious as graphite.
They go really hand in hand,
and we have to build both of them,
and graphene is going to be a 20-year kind of project
to reach its full final ambitions
in the same way that graphite will grow alongside it.
And we can't just build graphics editing
without building both of those together.
Okay.
I want to jump back a little bit.
So we talked about node-based editing,
And I think it's fairly self-explanatory, but just for anyone who doesn't understand, when we say non-destructive, what do we actually mean by this?
So let's say you are going to draw a shape, let's say like a, let's say you're making Pac-Man.
So you draw a circle and then you cut out another circle from that circle.
You're creating like a pie, or sorry, like a crescent shape.
So you might start out with two circles that you have to draw both of them.
Then you have to cut one out of the other.
and the exact position of both circles in relation to each other
will result in the final crescent shape that you made.
In other graphics editors like Illustrator or Inkscape,
if you are going to draw those and then cut one out of the other
doing a Boolean operation to subtract one from the other,
that decision about where you placed them,
what those shapes look like,
whether they were even circles to begin with
or if they were some other shape like a star
or anything you drew with a pen tool,
those are all decisions you made,
and the operation of cutting one out of the other
was a destructive operation.
you have permanently transformed two layers into another new resulting layer.
That operation is ultimately a function.
And in those other pieces of software, you perform that function, that operation, in the editor once.
At the very moment you press the button to do it.
In Graphite, what that does by comparison,
and this is what makes it non-destructive,
is that you have added a node to your node graph,
which is that function.
So that's the function that does the operation of cutting one out of the other.
It's encoded into your artwork permanently from that point.
Obviously, you can go delete it.
But the operation is not, I made the decision to cut it out once, and I've transformed my data permanently and lost the original layers and got a new layer in return.
Instead, it is: I have, you know, I've put together two layers and then combined them into one. Basically, if you think of it like a flow chart, because the node graph
is basically a flow chart: you take the two layers, make them flow into one operation. That's the node that does the Boolean operation. And out from that, you get a resulting layer of those two combined. And now, since those are permanently part of your flow chart, unless you delete them or something, if you modify the crescent shape, like modify either of the two circles, it updates live, like, every frame while you're dragging it around. You see the resulting Pac-Man shape or crescent shape or whatever.
And you can just drag it around live and see it update.
So that ends up being really powerful.
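The flow-chart idea described here can be sketched in Rust (Graphite's implementation language). This is a hypothetical toy model, not Graphite's actual API: a "node" is just a pure function, so re-running it with moved inputs recomputes the result instead of destroying the original layers.

```rust
// Toy model of a non-destructive boolean node (not Graphite's real API).
// A shape is approximated as a list of sampled points.
type Shape = Vec<(f64, f64)>;

// Source node: sample `n` points around a circle.
fn circle(cx: f64, cy: f64, r: f64, n: usize) -> Shape {
    (0..n)
        .map(|i| {
            let t = i as f64 / n as f64 * std::f64::consts::TAU;
            (cx + r * t.cos(), cy + r * t.sin())
        })
        .collect()
}

// "Subtract" node: keep only the points of `a` lying outside circle `b`.
// A real path boolean is far more involved; this just shows that the
// operation is a pure function of its inputs, so it can be re-run live.
fn subtract(a: &Shape, b_center: (f64, f64), b_radius: f64) -> Shape {
    a.iter()
        .copied()
        .filter(|&(x, y)| {
            let (dx, dy) = (x - b_center.0, y - b_center.1);
            (dx * dx + dy * dy).sqrt() >= b_radius
        })
        .collect()
}
```

Because `subtract` never mutates its inputs, "dragging" the cutting circle is just calling the function again with new parameters, which is why the Pac-Man mouth can update every frame.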
And then you can go non-destructively add, let's say, a bevel or like a round corners kind of operation.
And that node that provides the round corners can then go and add like a rounding to where the Pac-Man symbol got its crescent corners.
And then you can still drag around the circles.
And those crescent, you know, the crescent updates and the, sorry, when I say crescent,
Pac-Man is a pie slice, not a crescent.
But anyways, my point remains.
Cutting a triangle out of a circle is what I meant to say this entire time.
But yes, basically you can update it or even animate.
So this is the important part because everything is parameter driven.
Exactly.
You can animate the opening and closing of the Pac-Man symbol every frame.
And then the operation that does the Boolean cutting of one out of the other can animate over time.
And you get the resulting shape out of that.
Now, with these nodes, is it just...
a linear connection of nodes, or can a node have multiple inputs into it?
So say I have a bevel that I want to apply, and I want to apply this to multiple different elements in my image.
Am I able to send multiple things into that bevel, or do I need to duplicate that node?
Okay, that's two questions.
Yeah, that's sort of two questions.
There are two answers to that.
One is that you would want to duplicate that node,
because the bevel takes a single input
and gives a single output,
and if you wanted to do it to two independent shapes
you would independently provide that
although we do have some designs in mind
for kind of like
creating like a placeholder node
that can be driven by the definition from a different node
but we haven't really designed the exact solution to that
but that is a case where we're aware of that
being sort of a common use case
but what you would do right now
is you would basically make multiple
bevel nodes, but you feed
the bevel radius parameter
from a single number, and you feed that in multiple times,
to multiple of the bevel nodes.
And I guess what you could also do is that you just combine both of them.
Yes.
And then you can apply a bevel after, like to the combined output.
Sure.
Okay.
If you have, like, we can have a square and a rectangle and, well, a circle and a rectangle,
and both of these are one set of input data,
and you can apply a bevel operation
to both shapes, essentially.
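The parameter-sharing answer can be sketched abstractly. In this hypothetical toy (all names invented for illustration, not Graphite's node API), one "value node" output feeds the radius parameter of two independent bevel applications, so a single edit updates both:

```rust
// Hypothetical sketch of sharing one parameter across two bevel nodes.
#[derive(Clone, Debug, PartialEq)]
struct Styled {
    name: &'static str,
    bevel_radius: f64,
}

// "Bevel" node: a pure function taking a shape plus a radius parameter.
fn bevel(shape: &'static str, radius: f64) -> Styled {
    Styled { name: shape, bevel_radius: radius }
}

// The same number node feeds the radius input of both bevel nodes,
// so changing `shared_radius` once updates both shapes.
fn build(shared_radius: f64) -> (Styled, Styled) {
    (bevel("circle", shared_radius), bevel("rectangle", shared_radius))
}
```

The alternative described just above, combining both shapes first and beveling the combined output, would correspond to calling `bevel` once on the merged input instead.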
Okay, okay.
That makes sense.
I was just, like,
my main point there was that
there are going to be cases where you want to
make use of the same effect
across different things,
and just making it clear that there is some way that you can do that.
It isn't just, you know,
copy-paste it a hundred times to everything you want to apply it to.
Yeah. And, uh, one of the things you can pretty easily do in Graphite is, for example, generate patterns.
If I, let's say I have a circle, and I then instance it by repeating it 10 times along the x-axis.
Now I have 10 circles.
I can also repeat these 10 circles, like 10 times in the vertical direction, and now I have the grid of 100 circles.
And I can then do operations on these.
Like, on this grid of circles. That's something that's very easy to do in Graphite, and you don't need to draw 100 circles.
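The repeat-then-repeat idea maps directly onto two instancing steps. A minimal sketch (toy code, not Graphite's actual node API):

```rust
// Toy instancing node: repeat a set of positions `count` times along an offset.
fn repeat(points: &[(f64, f64)], count: usize, dx: f64, dy: f64) -> Vec<(f64, f64)> {
    let mut out = Vec::with_capacity(points.len() * count);
    for i in 0..count {
        for &(x, y) in points {
            out.push((x + i as f64 * dx, y + i as f64 * dy));
        }
    }
    out
}

// One circle center -> a row of 10 -> a 10x10 grid of 100 instances.
fn grid() -> Vec<(f64, f64)> {
    let row = repeat(&[(0.0, 0.0)], 10, 20.0, 0.0);
    repeat(&row, 10, 0.0, 20.0)
}
```

The key point is that the single source circle stays editable: swap it for a star, and all 100 instances update, because the grid is a function of the source rather than 100 copies.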
Yeah, this is an example you have in the documentation under the Learn section, the idea of doing procedural design.
Yeah.
And of course, you can go update that circle and change its radius, or actually replace the circle entirely with a star or with any other shape that you want to make.
And you keep all those other effects, all the repeating...
part of that process. And it allows you to then, you could also go target one of those circles and make a change just to that one circle. And then you could still change the number of repetitions in the horizontal or vertical axis. It gives you the ability to modify at any point in the pipeline, because you're basically in the process of drawing things. Normally you just draw destructively. But in this case, we're creating a pipeline out of the creative operations you chose. Like, I want to repeat something. That would be a menu that you use once in Illustrator or Inkscape. But in our case, we encode that
operation, and it then becomes editable later on. And then you can still make your subsequent
operations; you can still edit a specific resulting shape out of the hundred that got created.
And that allows you to, yeah, target specific pieces. Or it also just generally saves you
time, because, you know, copying a circle a hundred times, that might be a common enough use case
that Illustrator or Inkscape have got a menu that allows you to just do that. You know,
it's a tool that they've built that does it one time. But in many cases,
Or maybe they had to hide it behind so many menus that you were not aware of it.
Right, right.
You just never learned that.
And it is another example of building automation on top of the editor,
like automating the editor into action
and what we do instead is that we
just treat it as a program
and we can just duplicate it and that's fine
we don't need to build UI for that
So like one of our examples if you actually go to our editor
and you open up some of the demo artwork
we provide some demo artwork that you can use
we've got this example of a Christmas tree
where we've decorated
we've decorated the Christmas tree
with some Christmas tree lights
and if you were to draw that by hand
you would have a bunch of light bulbs.
You've got to copy a bunch of light bulbs all along the distance,
you know, the arc length of the Christmas tree light,
and that would be tremendously painful to do that
because you'd have to like take literally a ruler
out of your desk drawer, take a ruler,
measure the distance from one light bulb to the next light bulb
and then to the next light bulb on your screen
because there really aren't any other tools built in
to Illustrator or Inkscape that I'm aware of at least.
Maybe there are.
Like, there's the ruler tool.
I guess, yeah, then you wouldn't need to use a physical ruler, but...
Yeah, maybe you end up with, like, 50.3 light bulbs by the end of your, by the end of the strand.
And I'm like, oh, crap, I ended up with 0.3 at the end.
Now I've got to, like, go undo my work and go, instead of making it, like, every inch apart,
got to make it, like, every 1.05 inches apart.
And then do the whole thing again, until they line up nicely.
And then you decide you want to modify the tree or the light just a little bit,
and now your light is a different length, your wire is a different length,
and you've got to update everything again.
So that would just be a use case that is super difficult and painful to do,
and it would not look as nice because you never measured it perfectly.
Whereas in this case, you have combined together, I don't know, like three nodes or something,
one of them basically takes your wire, the path wire,
and splits it up into basically a polyline.
It splits it up into equal length segments.
And then now you've got, basically, points spaced equally along the wire.
And then we copy a light bulb once.
We design the light bulb once, copy the light bulb,
and then it gets placed automatically everywhere that those points were located.
So now you're reusing the wire both to display the wire visually
and also as input data for the operation
that dealt with placing the points.
And then finally, we have another node
that can take all the resulting wires
and apply different colors to each wire incrementally.
So you can then define a gradient and say,
okay, along this gradient, we go from like red to green to blue,
and it recolors every wire incrementally.
So you could either have it like every three
where it's like red, then green, then blue,
or you could say across the entire length of the strand,
we slowly transition from red to the intermediate colors
to green to the intermediate colors to blue.
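The pipeline described here, split the wire into equal-length segments, instance a bulb at each point, then color the instances along a gradient, can be sketched like this (hypothetical helper names; Graphite's real nodes differ):

```rust
// Toy version of "sample points along a path" + "color along a gradient".
// The path is a polyline; we walk it and emit a point every `spacing` units
// of arc length.
fn sample_points(path: &[(f64, f64)], spacing: f64) -> Vec<(f64, f64)> {
    let mut out = Vec::new();
    let mut next_at = 0.0; // arc length at which the next point is emitted
    let mut walked = 0.0;
    for pair in path.windows(2) {
        let (a, b) = (pair[0], pair[1]);
        let seg = ((b.0 - a.0).powi(2) + (b.1 - a.1).powi(2)).sqrt();
        while next_at <= walked + seg {
            let t = (next_at - walked) / seg;
            out.push((a.0 + t * (b.0 - a.0), a.1 + t * (b.1 - a.1)));
            next_at += spacing;
        }
        walked += seg;
    }
    out
}

// Linear gradient: each of `n` instances gets a color interpolated red->blue.
fn gradient_colors(n: usize) -> Vec<(u8, u8, u8)> {
    (0..n)
        .map(|i| {
            let t = if n > 1 { i as f64 / (n - 1) as f64 } else { 0.0 };
            ((255.0 * (1.0 - t)) as u8, 0, (255.0 * t) as u8)
        })
        .collect()
}
```

The wire path is used twice, once drawn as-is and once fed into `sample_points`, which is the "reusing the wire as data" idea from the transcript; editing the wire re-runs both.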
And that just makes it so you
don't have to do that by hand. And I don't think there are any tools that will incrementally
change the color of every object that exists in Illustrator or an Inkscape, because that is just a
use case that they never envisioned you'd need. And if they did, it would be really hard to define
that in a way that exists as a tool. And there are so many specialty ways that you could make
these different tools that they could never, like, design a UI for every possible thing out of
thousands of use cases that they might anticipate a user might need. Whereas in our case, you get to
combine together these little operations. One of them was like an operation that split apart a wire
into mini little segments, and another was copying something else onto a segment. And by combining
together little pieces into slightly bigger pieces, you build up a pipeline that allows you to
learn little pieces of an overall system. And of course, there's some learning curve knowing
what's available, what operations you're able to access, what nodes. But you get to do so in a way
that's kind of incremental, and this is how we, ultimately, the really important part,
avoid the curse of feature creep, or feature bloat, I guess is the word I
mean, where a piece of software just becomes bloated with so many different menus and buttons
and things where it was never designed for all this. That is the problem with big software,
where they just keep adding more and more features to it. You never know what features exist
hidden behind so many menus because you just never encounter them. They're just so hidden.
They have to be hidden because there are just so many of them.
In our case, we flatten out that complexity by just having all these little atomic operations,
like little pieces, elements of an overall molecule that you can combine together and make the things you need,
and then you can make reusable systems out of them.
Combined together the common operations into slightly more complex operations that do higher-level things,
like create light bulbs for you.
Or, you know, you could change that light bulb by just updating, you know, drawing a different shape to the light bulb,
making it icicles hanging from rafters instead of light bulbs hanging from a tree,
you know, reuse that system.
And, yeah, basically that way we avoid, we flatten out the complexity to make software
that is, despite its extreme power to do much more than is possible in Illustrator or an Inkscape
or in Photoshop or in After Effects or in basically every program that you could specifically
think of where it's built for one specific purpose. We've built, not a game engine, but a graphics engine.
A game engine is capable of making any game that you would ever want, in Unity or in Godot or in Unreal Engine.
They're not limiting you to one specific thing, but of course it requires a lot of programming.
What we're bringing here is the tools that can do that programming sort of on your behalf.
You don't have to touch the code.
And on that note, on a programming note, we do also support some programming use cases.
So one demo we have is a FizzBuzz program.
FizzBuzz is a classical test for computer scientists where, for the numbers divisible by three, we output Fizz; if it's divisible by five, we output Buzz; if it's divisible by both, we output FizzBuzz.
And you can do that in Graphite.
We do have nodes for this.
For example, we have a switch node, which allows you to, depending on whether a Boolean is true, use this output or this output.
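Written as plain pure functions, the FizzBuzz demo is roughly what the chain of switch nodes expresses visually. This is a sketch only, not the actual graph:

```rust
// FizzBuzz as a chain of pure "switch" decisions, mirroring how a switch
// node picks one of two inputs based on a boolean.
fn switch(cond: bool, if_true: String, if_false: String) -> String {
    if cond { if_true } else { if_false }
}

fn fizzbuzz(n: u32) -> String {
    switch(
        n % 15 == 0,
        "FizzBuzz".to_string(),
        switch(
            n % 3 == 0,
            "Fizz".to_string(),
            switch(n % 5 == 0, "Buzz".to_string(), n.to_string()),
        ),
    )
}
```

Each `switch` call corresponds to one switch node with a Boolean condition input and two value inputs; nesting them is the same as wiring one switch node's output into another's input.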
And this allows us to do some pretty cool stuff.
So one of the examples, one of the demo artworks I did for the FOSDEM conference in 2025,
was I made a seven-segment display in Graphite.
So I took a year input, as a slider.
And by a slider we actually mean like literally use a circle in the canvas with a box, like a horizontal rectangle.
That's the slider and you get to move it left and right.
And then we use, like, the centroid node to get the position, and we can just use
the position as an input for our seven-segment display.
And this allows us to build, like, a counter in Graphite.
And since then, and since we've added animation,
other users have made a real-time clock, for example,
with a seven-segment display.
And that's also really nice.
Now, I'm not suggesting someone...
I'm not suggesting someone does this.
Yeah, but I'm not suggesting someone does this,
but someone could, if they wanted to, try and make a game with it.
A simple game.
You shouldn't.
So here's where I...
We'll need to talk about this because I have doubts, but...
Yeah.
So this sort of...
I mentioned at the very beginning of this that I sort of got my start in Flash.
That was kind of my first real experience, working with interactive graphics and games and animation and things.
my sort of formative years in computing
and Flash never intended to be a game engine
but by the end of its run
people would have described it as kind of the first
generally accessible game engine
this was before Unity really became popular
or available at all actually
and you could make web games
mostly focused on 2D
I guess they had some amount of 3D, but it was almost entirely
2D. But it was a 2D graphics,
sorry, a 2D game engine that was never intended to be a game engine.
it was intended to make like little like slideshow
kind of presentations or like slightly interactive
buttons or, like, you know, little graphics, things that back in the day could have been
equivalent to, like, do you know the name of... what's Apple's? Like, the program that
Myst was originally built in. It was like this interactive way of, like, going between
different pages and clicking buttons that bring you to a different page that has some
information, some either visual or auditory or text, and you can click on a different
button. And people ultimately turned those systems into full-on adventure games like Myst.
And then eventually that sort of idea became Flash and it's like, oh, we can embed them on
websites and oh, we can, turns out people can start using them to make little games.
And then, of course, they started adding more and more, more and more features to that.
So by the time Macromedia stopped running that, and I guess it got bought by Adobe,
it basically became more and more of a game engine
until the end of the era, I guess in the early 2010s,
when that sort of heyday ended.
And that was when I was growing up playing lots of flash games.
I loved that era and I have a lot of nostalgia for that era.
But then Unity took over and Flash faded out of existence.
It still technically exists today,
but it's not used for really anything except making just animations.
I think it might be used for animating,
um,
South Park, but I could be wrong on that.
Anyways, it's used for some actual animations used in like TV, but it still exists today.
But it's called Animate now, except it really isn't used for games because that sort of moved
on to Unity as kind of the main game engine. And we've also kind of lost 2D along the way.
So unfortunately, there are not as many 2D games in existence these days. But the ultimate
bottom line here is that they started out with something that wasn't a game engine. It was just an
interactive graphics engine. And they ended up with people turning it into a game engine.
And in many ways, we're actually kind of reviving that same idea where we're kind of making
a graphics engine, which has some interactivity. You can do animation. You can do real-time input.
One thing I want to hook it up to is audio inputs. So you could like do interactive audio visualization.
You can hook it up in the future also to like a webcam input and start doing like machine vision
processing on the image to like, you know, hold out your hand and then like have it process where your hand is and like put some kind of
augmented reality thing over your hand or do like interactive compositing or even like take your
Twitch stream and a donation comes in through a web hook API. We notice that there's this person
gave a donation for this number of dollars. We generate like a visualization that pops up and says
thank you to this person and it processes the feed in real time. And you can build that visually
in Graphite. Yeah, there's no code involved. Exactly. And then ultimately we can also have,
basically, a play mode where we stop directing your keyboard input to the tools that you use to draw,
in the same way you hit play in Unity to play your game.
You could have keyboard input and mouse input going to the actual program that is your node graph system
that does interactive graphics.
And you can basically maybe start making like small games and then we might have more features.
I am watching the gears turn in Dennis's head right now.
He's like, oh, God, this sounds incredible.
He has objections to the effort involved.
So the thing is, like, the Graphene language is purely functional.
And that's partially because during, like, during the time that I came up with the initial ideas and developed Graphene,
I was taking a course on functional programming languages and other language paradigms.
And, like, functional programming lends itself very well to this node graph use case,
because we can parallelize the entire node graph
execution. We can cache things. Our nodes are required to be idempotent, so basically pure functions.
So everything, like it's, we don't have state in the node graph. And that is a bit at odds with
the game engine. So what you need for a game engine is you can't just use graphene. You need an
editor component which holds the state and then we can render the state. Like we can be the
renderer for a game engine, but there won't be the interactivity in the node graph
because it wouldn't be a functional, like it wouldn't be a pure function, essentially.
And that is something where we can build infrastructure and tooling that, yeah.
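The caching benefit of pure, idempotent nodes mentioned above can be sketched: since a node's output depends only on its inputs, results can be memoized and reused safely. This is a toy example, not Graphene's actual evaluator:

```rust
use std::collections::HashMap;

// Because the node is a pure function of its input, its results can be
// cached by input value; re-evaluating the graph reuses prior work.
struct CachedNode {
    cache: HashMap<u64, u64>,
    evaluations: usize, // counts how often the real work actually ran
}

impl CachedNode {
    fn new() -> Self {
        CachedNode { cache: HashMap::new(), evaluations: 0 }
    }

    fn eval(&mut self, input: u64) -> u64 {
        if let Some(&out) = self.cache.get(&input) {
            return out; // cache hit: skip the work entirely
        }
        self.evaluations += 1;
        let out = input * input; // stand-in for an expensive pure operation
        self.cache.insert(input, out);
        out
    }
}

// Evaluate twice with the same input; the expensive work runs only once.
fn eval_twice(input: u64) -> (u64, u64, usize) {
    let mut node = CachedNode::new();
    (node.eval(input), node.eval(input), node.evaluations)
}
```

This is also why statefulness is at odds with the model: if a node's output depended on hidden mutable state, neither the cache reuse nor the safe parallel execution would hold.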
That said, we will have sort of a node that can act as kind of like the, the holder of a
bake, and you can bake something like a physics simulation.
You bake frame one, then you bake frame two, then you bake frame three, then you bake frame four,
and so on, holding what continues to grow as a larger and larger amount of state,
and that becomes an update to the graph.
So the graph now holds this previous state,
and that way you can have your full physics simulation running for the 500 frames you might have generated it for.
And that way, you know, you can do like sloshing liquid or some kind of, like, cellular automata
or some kind of interesting visuals and...
I guess that's also maybe a good point to talk about how animation is...
implemented. Like, we do support some animation, and the actual implementation of animation was very simple.
Like, it was a pretty small PR, and basically, what we do is... it was an emergent property of the
existing design. Yeah, exactly. And that's how we always wanted to design the system. So basically,
we have a program and we can say I want to render this program at this viewport position and
resolution. But what we can also say is we want to render this at this point in time. So again,
getting a bit more technical. Technically, the entire Graphite document is a function, and you call the
function with like the position and the time information, like the current time. And then the main
function calls its subsequent functions. We pass down the time as an input. And how you enable
animation in Graphite is that you just have a node which returns the current
time. As a number in seconds.
As a number. And you can then use that number to, for example, modify the rotation of a spinner
or something. Or you can run it through a sine wave node; that way, it takes everything from
0 to 360 degrees and outputs from negative 1 to 1 over a continuously varying smooth wave.
Exactly. And that's sort of the... like, currently it's pretty technical, because
we just have this node and you can basically do
anything with it. But it would, of course, be useful to have more tools. And that's something
we will work towards... Keyframe animation. Yeah, keyframe animation, getting timelines. But
mostly animation is just an emergent property of the graph. And you can still... It's really fun.
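The "document as a function of time" idea is small enough to sketch: a time node just forwards `t`, and downstream nodes map it to parameters (toy code; the real node signatures differ):

```rust
// The whole document is conceptually a pure function of time.
// A "time node" just forwards `t` (seconds); a "sine node" maps it to [-1, 1].
fn sine_node(t_seconds: f64, frequency_hz: f64) -> f64 {
    (t_seconds * frequency_hz * std::f64::consts::TAU).sin()
}

// Example consumer: drive a rotation angle (degrees) from the time input.
fn spinner_angle(t_seconds: f64, degrees_per_second: f64) -> f64 {
    (t_seconds * degrees_per_second) % 360.0
}
```

Rendering frame N is then just evaluating the same pure function at `t = N / fps`, which is why animation fell out of the existing design rather than needing its own state machine.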
If you... We can, like, we have animations of balls juggling around. And you can, while the animation
is running still modify
like the balls you can move them around
And do you have the...
are any of the demos animated?
None of them are.
None of the demo artwork at this very moment
is animated.
that is something that I think we intend
to change very soon
I just wanted to show something on screen if I could
but that's a good
yeah
I can send you some files at some point
but yeah awesome
fun demos
but there are definitely some really cool
animations you can make at the moment.
It's basically, it gets into the territory
at this very moment of what's supported,
where it's basically creative coding style animations.
You'd otherwise have to go write some p5.js,
you know, JavaScript kind of code, or Processing
or whatever if you're in Java.
You know, use some frameworks to produce
creative coding style animations, which, you know,
there's no other way to do it really.
You can hand animate this stuff, but in Graphite,
you can hand animate purely using the node system.
And also, it's something you could do is you can easily go, like, copy a specific object 500 times and have every iteration of the 500 versions, like, modified slightly.
So you could have, like, a different angle and, like, a different rounding to the corners of something.
And you basically build up, like, this propeller thing that looks really 3D because you also apply different colors to every version of the 500.
And it starts looking 3D because if you have enough of them, they start not looking like discrete objects, but they look like more of a continuous...
3D kind of depth shadowing kind of shape, and you end up with having all of that,
yeah, actually looking like a 3D, like an actually looking, yeah, a 3D object that is pulsing
and doing interesting things that would not otherwise be possible in anything except creative
coding where you have to write your own code.
All that sounds really impressive.
Your camera is like out of focus, by the way.
I just noticed that
the camera's out of focus
okay now we're good
cool
okay thanks
yeah
no all of that sounds like
really really
really cool
and
yeah I
there's no other software like that
that I'm aware of
it's actually quite novel
yeah
there's a lot to
I guess it's like a lot to take in here
right
like this is
I've had
I have had developers on
who are very passionate about what they do
and I've had people that have like big visions for their project
but I don't think I've I've spoken to anyone
that is this
committed to the idea of
basically revolutionizing
open source 2D graphics
Revolutionizing 2D graphics just in general.
Yeah.
Basically, our strategy here is that we know that we can't compete with 35 years of creative software that's been on the market this entire time and is totally entrenched in every piece of industry.
We know we can't compete just by trying to catch up to them by implementing as many features as they have.
That's been already tried by other open source projects for the past 25 or 30 years as well.
Like they've, you know, they started almost in the same era.
But...
you just can't keep up with a big company that has already maintained so much market share
so the only way to actually compete is by trying to flank them by going around and you know
going around them and getting in front of them by taking a different strategy and having features
that they do not have. So you're not trying to compete with the same features by simply trying
to have feature parity in every possible way. There are thousands and thousands of
features. You want to make something that is even bigger but simultaneously more powerful.
And you can maybe make something that is commonly used between both pieces of software.
So industry might be using existing commercial open source standards.
I'm sorry, commercial industry standards.
Thank you.
And at the same time, using graphite alongside it for the things that they could not do
if they wanted to do interactive, like, templated style systems where they have, you know,
every player on a sports team out of the thousands of different players across an entire bracket for a season,
having their picture, and they're, you know, holding a basketball in one frame and then
they're, I don't know, smiling in another frame with a headshot. And then they've got to export that
same image for like 500 different member stations that need different resolutions based upon
what their broadcast standards are. They got to have like 480P for standard definition or they
got to have, you know, interlaced and non-interlaced file format outputs. I actually read on Reddit,
someone who had a job for a few years, this would have been some time ago, his job was literally
to go in Photoshop and click on menus to export like 60 different variants of a template
for each individual player at different resolutions, different export formats. He had to go like
modify it each time, export it, and his entire job was just doing that clicking buttons.
And that's the kind of thing that you can just so easily build a pipeline in Graphite.
You just have every one of these different formats as an export,
and then you just hit your export.
And that was why I was mentioning, like, a render farm.
In that particular case, if you really have, you know, like 10,000 variations that you export,
all the permutations, you could set up a pipeline and get that to run on a server
and then get that to automatically ingest into your media server if you're, you know, a broadcast company.
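The batch-export scenario reduces to enumerating every permutation of parameters and running the same render over each one. A tiny sketch of that pipeline shape (all names hypothetical):

```rust
// Enumerate every (resolution, format) permutation for a batch export.
// In a real pipeline each job would feed a renderer; here we just name them.
fn export_jobs(resolutions: &[(u32, u32)], formats: &[&str]) -> Vec<String> {
    let mut jobs = Vec::new();
    for &(w, h) in resolutions {
        for &fmt in formats {
            jobs.push(format!("{}x{}.{}", w, h, fmt));
        }
    }
    jobs
}
```

With, say, 500 member stations times a handful of formats, the job list grows multiplicatively, which is exactly the clicking work the Reddit story describes doing by hand.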
So that's the kind of use case that we see in industry where traditional tools,
in Photoshop, like this person on Reddit, I was reading his story, you otherwise have to do it manually.
And, yeah.
And going back a bit further,
we were talking about how, like, we can't just catch up.
And also, one of the key strategies is that we are a community project.
So what we're trying to do is that we basically built the environment.
We built the node graph.
We built the execution engine to make it very easy to add features.
So if you want to add a new node... so one of the nodes, for example,
that we will add soon is vectorization.
So, taking a raster image and turning it into vector shapes.
And that is just, it's a single file that you add, or you can add it to an existing file.
It's just basically one function, the vectorization node function.
And then it shows up in Graphite.
And the idea is we build the tools and then we can have, well, the community can help.
And if they need a feature, they can just build a node.
It's relatively straightforward, and the dev experience is actually pretty good, I'd say.
And that can help us catch up, and we don't have to build everything; we can enlist help from the community to build things together.
We're building the engine and then the community can build every individual feature as a node or as a tool that operates on the nodes.
Those are the two components.
There's the nodes themselves, those are the graphics operations and there's the tools that are basically the artist-friendly abstractions where you don't have to go manually write the visual code.
You are instead just drawing or using a menu to
operate on something. You, you know, you apply the Boolean operation menu button or, you know, we have a button, or we also have it under the Edit menu. Both of those are just operations that you would be expecting to find in other software, using their menus. We want to put them in similar places, so people can find them using similar means. And it sets up the nodes for you. Or it might even set up, like, a complex set of nodes. You might have, like, five different nodes get put together for you to do a specific common operation. And that way you don't have to know how to do that.
yeah
okay
I had something
I was going to say
and I don't remember what it was now
yeah
when was it
it was like three minutes
was it during
what Keaton said
or what I started
I do not remember
It's going to come back to me, probably after we finish recording.
That's fine.
We'll leave that behind for now.
Maybe come back to it if I remember it.
I do want to talk about desktop binaries.
So when we're talking about the...
I'm going to take a bathroom break, and actually, we have a contributor involved specifically in our desktop app as its maintainer.
Do you want to appear on camera or not?
I'm not prepared really...
So we're going to talk about the desktop binary.
You can at least listen in.
Yeah, okay.
And if you want, you can have one or not.
Probably next time.
All right.
So we could also have you off-screen if you prefer and just want to speak.
Yeah, okay.
Yeah, right.
Okay.
We'll just have a disembodied voice to the left of me.
Oh, no, they can join on camera, I guess.
All right.
So now the Germans have taken over.
Okay, so the Germans have taken over, I guess.
This is Timon.
Yes, do you want to briefly introduce yourself, and then we can talk about the packages?
Yeah, I'm...
Just a bit more with that, I guess.
Near, yeah, okay.
Yeah, I'm Timon.
I'm also from Germany, and I've been mainly working on the desktop app for, like, three months now, I guess.
And, yeah, I'm a NixOS user, and I've
also been building the NixOS package on the side, and yeah.
You can maybe tell us a bit about your background and how you got introduced to the project or joined the project.
Yeah, I was using Graphite for about one and a half years, very heavily, for, like, all my art stuff I do on the side.
And I was annoyed that there is only a web
app. And then I joined the Discord and asked around, what's the state of the desktop app? And then
we joined a call sometime, and I found out that the desktop app is kind of not ready at all. Yeah, so
there were, yeah, there were some very challenging architecture things. So what we want is that we
basically run most of the code natively.
And that means that we then have a, like we render the node graph.
And we need to composite the render output together with our UI,
because the UI is still in web browser and, well, rendered as web UI.
And there were no tools of frameworks available to stitch those together.
We did at some point have a demo where we could pop out the viewport into a separate window.
And then we have the main window and the separate window with the viewport.
But that's obviously not ideal.
So when he joined and was asking about the desktop app,
that's also why we weren't prioritizing it and working on it,
because there were just technical hurdles.
But around that time, or shortly after,
we did some new experimentation using CEF,
the Chromium Embedded Framework,
which gives us a bit more low-level control.
Essentially, what we're doing is that we render the browser UI to a texture and then do our own manual compositing in a graphics pipeline.
Like, it's a WGSL shader that overlays three textures and displays the final result.
So the artwork, like the whole viewport, is one texture, then the overlays,
and then the UI. And that means we have the viewport with full native performance,
and only the UI goes through the CEF latency stuff.
Yeah, and also one of the technical challenges we overcame is that we render the
UI texture off screen
and keep it as a GPU texture.
So on Wayland, we use the DMA-BUF protocol
to get the texture; like, it remains on the GPU.
We can copy it over, insert it into our program,
and do the compositing.
And that allows us to get pretty great performance
in the desktop app.
We're still working on improving that.
But yeah, we're making great progress.
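To illustrate the compositing step just described, here is a rough CPU-side sketch of stacking the viewport, overlays, and UI textures with the standard premultiplied-alpha "over" operator. All names here are illustrative, and the real implementation is a GPU shader, not CPU code like this:

```rust
// Illustrative per-pixel "over" blend, assuming premultiplied alpha.
// The actual work happens in a WGSL shader on the GPU; this sketch
// only shows the math for a single pixel of the three stacked layers.

#[derive(Clone, Copy, Debug, PartialEq)]
struct Rgba {
    r: f32,
    g: f32,
    b: f32,
    a: f32,
}

/// Standard premultiplied-alpha "over" operator: `top` drawn over `bottom`.
fn over(top: Rgba, bottom: Rgba) -> Rgba {
    let w = 1.0 - top.a;
    Rgba {
        r: top.r + bottom.r * w,
        g: top.g + bottom.g * w,
        b: top.b + bottom.b * w,
        a: top.a + bottom.a * w,
    }
}

/// Composite the three layers mentioned in the episode:
/// viewport (bottom), overlays (middle), browser UI (top).
fn composite(viewport: Rgba, overlays: Rgba, ui: Rgba) -> Rgba {
    over(ui, over(overlays, viewport))
}

fn main() {
    // Opaque viewport pixel, no overlay, half-transparent UI pixel.
    let viewport = Rgba { r: 0.8, g: 0.2, b: 0.2, a: 1.0 };
    let overlays = Rgba { r: 0.0, g: 0.0, b: 0.0, a: 0.0 };
    let ui = Rgba { r: 0.0, g: 0.0, b: 0.5, a: 0.5 };
    let out = composite(viewport, overlays, ui);
    println!("{out:?}");
    // An opaque bottom layer keeps the result fully opaque.
    assert!((out.a - 1.0).abs() < 1e-6);
}
```

Because each layer stays a GPU texture (via DMA-BUF on Wayland), this blend runs per pixel in the shader without any CPU round trip.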
Actually, the code that was written
to do this, keeping the
UI texture on the GPU,
will probably be upstreamed
into the cef-rs crate very, very soon,
because they basically were able
to use our abstraction and
build it into the API.
And that's also, like, how OBS does it? It uses
a similar approach? So OBS also uses CEF?
It does, yes. And for some of their
web content rendering they also use browser docks. Yeah, those don't work on Wayland right now.
Exactly. They work on Windows; they don't work on Wayland. I looked at the code, and,
well, there's no real technical reason why it doesn't work on Wayland, the work just hasn't
been done yet. So I basically did the work, and it works. So that's good. So potentially
that's also something that OBS could do.
Okay.
But yeah.
Doing the desktop app,
even though we do as little native desktop integration as possible for the UI currently,
it's still just a bunch of work.
Right, right.
Because there's so many, there's a huge difference
between getting like 80% working and getting the rest.
Oh, uh, camera just turned off.
Yes, I...
Oh.
Is the camera overheating?
Possibly.
Ouch!
It should be plugged in...
Did I forget to plug in the...
The power cord, maybe?
Uh huh?
Professionals at work.
I love it.
In the meantime, we can...
Momentarily...
Yeah.
...switch the camera to...
That looks amazing.
Can you fix it?
We'll switch to the webcam for now.
By the way, up there, you can see our main CI server running.
It's just...
I love it.
Yeah, it's a great setup.
But yeah.
So going all the way with windowing and it's really difficult,
and you can tell us a bit.
you can tell us a bit more about what it took to get it working with client site and server side
decorations and the windows it's um yeah yeah um so i think i start with the wayland uh side
Basically, Wayland doesn't allow you to hit test where the mouse is when it's outside of your window,
so outside the pixels you draw yourself. So if you want to have
resize borders that are outside of your actual UI, you basically need to draw a bigger window
and then hit test on invisible pixels that are not shown in the compositor. That's mainly for, like,
KDE and GNOME, and not really a thing for
tiling window managers, but, like, on the desktop environments.
And that was basically a use case where we have, like, the use case of having a custom title bar,
but still using a resize border and shadow that is supplied through an upstream library
that winit, the crate we use for window creation, is using. That's using,
like, a similar style to Adwaita. And I actually needed to upstream a feature flag to support
the use case of still using these client-side decorations for the shadow and resize borders, and for the
hit testing to work, but disabling the Adwaita-style title
bar and using our own. And that was not that painful on Wayland and X11, but it's
extremely painful on Windows, because on Windows there's the same thing: you
can't hit test outside of the window, so you would need to draw your own
shadows and do hit testing on that area. But getting
shadows to display the exact same way that Windows normally does them, so it looks kind
of like a native window on Windows,
is very difficult. So we opted to still use shadows, and use a kind of
hack to draw over the parts that Windows
supplies, but that disables the resize borders.
Shadows still work, but the resize borders don't work.
So we open a second window, an invisible window over our own main window,
that is slightly bigger, just for hit testing. And every time we resize
or move the window, we move that invisible window with it.
And then we have resizing and shadows that way.
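The invisible-margin hit testing just described can be sketched as a small pure function: the window is grown by a margin of transparent pixels, and pointer positions landing in that margin map to a resize edge. Everything here is hypothetical and simplified; the real windowing code handles corners, DPI, and compositor specifics:

```rust
// Toy sketch of hit testing on an enlarged, partly invisible window.
// The outer window is `border` pixels bigger on every side than the
// visible UI, and pointer positions in the invisible margin become
// resize regions. Names and the API shape are illustrative only.

#[derive(Debug, PartialEq)]
enum HitRegion {
    Client,
    ResizeLeft,
    ResizeRight,
    ResizeTop,
    ResizeBottom,
    // Corners omitted for brevity; a full version would combine edges.
}

/// `(x, y)` is the pointer position in the outer window of size `(w, h)`,
/// whose invisible resize margin is `border` pixels wide.
fn hit_test(x: f64, y: f64, w: f64, h: f64, border: f64) -> HitRegion {
    if x < border {
        HitRegion::ResizeLeft
    } else if x >= w - border {
        HitRegion::ResizeRight
    } else if y < border {
        HitRegion::ResizeTop
    } else if y >= h - border {
        HitRegion::ResizeBottom
    } else {
        HitRegion::Client
    }
}

fn main() {
    // 800x600 outer window with an 8 px invisible margin.
    assert_eq!(hit_test(4.0, 300.0, 800.0, 600.0, 8.0), HitRegion::ResizeLeft);
    assert_eq!(hit_test(400.0, 598.0, 800.0, 600.0, 8.0), HitRegion::ResizeBottom);
    assert_eq!(hit_test(400.0, 300.0, 800.0, 600.0, 8.0), HitRegion::Client);
    println!("hit tests pass");
}
```

On Wayland this runs against the enlarged surface itself; on Windows it runs against the separate invisible overlay window described above.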
And then we also have weird edge cases where, when you maximize a window on Windows, Windows actually cuts off part of your window, depending on what exact Windows version you are on.
So they cut off the top 20 pixels on Windows 10,
30 pixels on Windows 11, and you basically need to offset the frame where you draw your own pixels, so that the part they cut off is just invisible.
And it's very weird, but all, like, Microsoft apps do it the same way, so it's kind of a hack that got made into a feature.
And even today it's done that way in the official docs.
So maybe they support it now.
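The offset trick just described could be sketched like this. The 20- and 30-pixel figures are simply the numbers quoted in the episode, and every name is hypothetical; real code would query the system metrics for the clipped border rather than hardcoding versions:

```rust
// Sketch of the maximize offset: Windows clips the top strip of a
// maximized borderless window, so the app draws its frame shifted
// down by the clipped amount, leaving nothing visible in the strip.
// Pixel counts are the ones quoted in the episode, not authoritative.

#[derive(Clone, Copy)]
enum WindowsVersion {
    Win10,
    Win11,
}

/// Pixels cut off from the top of a maximized window, per the episode.
fn maximized_top_cutoff(version: WindowsVersion) -> i32 {
    match version {
        WindowsVersion::Win10 => 20,
        WindowsVersion::Win11 => 30,
    }
}

/// Y offset at which to start drawing your own frame when maximized,
/// so the clipped strip contains nothing visible.
fn frame_draw_offset(version: WindowsVersion, maximized: bool) -> i32 {
    if maximized {
        maximized_top_cutoff(version)
    } else {
        0
    }
}

fn main() {
    assert_eq!(frame_draw_offset(WindowsVersion::Win10, true), 20);
    assert_eq!(frame_draw_offset(WindowsVersion::Win11, true), 30);
    assert_eq!(frame_draw_offset(WindowsVersion::Win11, false), 0);
    println!("offsets ok");
}
```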
Yeah, actually, when you try any Electron app on Windows,
even VS Code, like, a Microsoft product,
they do resizing wrong, and the resize border is inside of the UI
and not outside.
So we are actually doing better resizing than
a Microsoft product like VS Code. Yeah, it was so annoying. So annoying. Yeah.
And I'm gonna start working on the macOS version very soon. That was not
really tested until now, just because we didn't have access to a Mac device, and
I expect that to be very annoying
as well.
Yeah.
So the main thing I wanted to ask you about,
obviously, like, Windows and
Mac OS, they have
their formats. It's a pretty
set format. I don't know if you guys
eventually want to be on
the Microsoft Store or whatever.
But when it comes to Linux,
what is the intention of actually
getting the application
available? Is it going to be
getting it into distro package repos?
I know you've talked about doing a NixOS
package. There was talk of doing an
AUR package
as well? Is the intention just to do
distro packages, or is there interest in doing, like,
AppImages or Flatpaks? How? Basically, I want to have a
package on Flathub and an AppImage that people can download from GitHub releases,
and hopefully also just a TAR file with, like, all dependencies included; like, CEF is our main dependency.
For other distro packages, probably not done by us, at least until, like, people get on board who want to maintain those things, because packaging is
a lot of work. Right, right. Actually, probably people are watching who might be
interested in doing that someday. So we'll probably support a lot of ways of installing Graphite
in the future, but that will need to... yeah. Nice. We'll definitely need more people
on board who can do packaging for that to work.
Yeah.
But I will work on the Nix package, because I use NixOS, and we'll probably have an AUR package at a similar point in time.
Yeah, probably.
I think AUR packages should be relatively easy and we don't need to go through as many processes to get that approved.
Another thing we are looking into is potentially distributing through, for example, Steam to get versioning or, like, on Windows, potentially the Microsoft App Store, because we do need to or want to include some form of auto-updating, and ideally we don't want to build that ourselves, if we can avoid it.
Okay.
You've gone out of focus again, by the way.
Yes.
Yeah, I'm out of focus.
Could you try?
Yeah, we have Keegan, our camera technician.
He's trying to fix the issue as soon as possible.
Yeah.
Yeah, so when is the hope to have a desktop app?
I think ideally by the end of this year.
Yeah, yeah, it's good.
Yeah, so ideally by the end of the year slash...
If we, oh, it's still a manual focus because I, well, it's not manual focus.
Ah, right.
So ideally by the, sorry, so probably by the end of the year, when we do our next big launch.
We will probably do some sort of, well,
alpha testing of the desktop app before then, just to get some more
eyes on it, test that it actually works on different platforms, get macOS support working. But yeah, currently it actually works best on Linux, which is kind of nice, and that is because, like, we both develop on Linux. But yeah, great, more desktop apps in the Linux ecosystem.
Okay. And NixOS is currently the only OS where we have a package basically
ready. Yeah, I know there are some Fedora packagers who watch this. I know there's some
Arch people who watch this. So obviously it's early on right now, but you guys aren't
opposed to distros having their own packages for it? I know some projects are like, we want to do
our own packaging, we don't want to have distro packaging, like, we're not
going to support that?
I mean, in the end, it's an open source project, so people can do what they want.
But we do prefer if they get in touch with us, so we can, like, make sure everything works,
just to keep in sync. But we are definitely not hostile towards others distributing Graphite.
Okay.
Yeah, we want Graphite to be available in whatever way is most convenient for the users.
Okay, I think that's pretty much
all I want to ask about the desktop application.
Yeah, thank you for that.
Okay.
And Keegan does not know what we were talking about.
Because we only have like two pairs of headphones.
Oh.
Yeah.
That said, we do want to be sort of the authoritative source so people know where to get
the official distributions so we can make sure that they're always meeting the same
bars for quality that we intend.
So we'd rather not fragment the ecosystem, make sure that we are able to provide high quality to everyone, and they know where to get it.
So if we can support more distros or more packaging formats, it's effort we have to maintain, but we do intend to maintain as many distributions as possible over time.
So we'd rather people help us do that instead of doing it on their own without getting in touch.
Okay.
So what sort of attention has the project really garnered so far?
I know obviously the GitHub has 22,000 stars.
Again, I don't know how it managed to get there
without anyone telling me about the project, but...
Yeah, I would say right now it is accurate to describe Graphite
as a project that has largely flown under the radar of almost everyone.
And that's a blessing and a curse.
It actually has allowed us to kind of just let things cook,
and that is in many ways good because we have really ambitious goals.
And if we were sitting here telling you the intro, you know, the first half hour of this podcast three years ago, it would be a lot of hot air.
But we've built a sizable portion.
You know, it's only 1% of the true Grand Vision, but we've got another 99% to go.
But hopefully we'll have a bigger team to get there faster.
But it really is helpful that we've been able to just sit there and kind of cook without people getting too angry at us that the software's been crap this entire time.
Like, only in the past year has it actually become viable, at a performance level,
to use the editor for more than just testing.
But now it's actually, like, the performance is decent.
Some people have even said it's better than Inkscape.
Some people have said it's a little worse than Inkscape.
It's, you know, around the same level-ish,
and it will continue to get a lot better
because we have barely even touched the low-hanging fruit.
There's a lot more low-hanging fruit to go,
which means that it's even easier to continue making further improvements.
And we are really building the architecture and the core technology here
with the specific goal of allowing you to work with
terabytes of data someday.
You know, it might take an hour per frame,
to render something of that size, but the point is to handle that kind of scale someday.
Those are the performance goals that we have in mind.
So it should be the highest performance graphics editor available someday.
That's what we're building towards.
Is there anything else you guys wanted to touch on, or is that...
We've sort of touched on a lot of stuff, but anything we didn't get to, anything we kind of missed.
I mean, I think that's probably been a pretty good overview.
We didn't quite talk about the compiler yet: the ability to basically create a piece of artwork as a program
and then compile it to an executable, like a standalone, completely standalone executable.
That can either, for example, run as, like, the equivalent of a full-screen game,
where you double-click, if it's on Windows, you double-click on an EXE file, or on Linux,
you open it however you open an executable. It would launch full screen and, like, run full
screen, probably forever, unless maybe there's some way to, sorry, to terminate it. And it could, for
example be a game or it could be a live visualization of music or it could be maybe you have it
running on some kind of interactive exhibit with a projector screen and a camera that's receiving
your video feed and like noticing where a person's walking by and creating
like butterflies flying onto their shoulders or something with a projector projecting onto them.
So it could run full screen as a standalone program, just like, you know, Unity or Godot or
any game engine compiles a standalone program. It's not using the editor anymore.
It allows you to do the same thing where you are creating, in this case, basically a sequence
of Rust functions, because we build it in Rust, so all the actual operations, the graphics
operations, are running in Rust. They get compiled together into a sequence, and they can
become literally a Rust program. So Graphite can transpile your Graphene program (Graphene
is the language that describes your sequence of operations, your sequence of nodes) into
literally a Rust source code file. And the Rust compiler can then compile that on any
platform as an executable. And it runs completely standalone, without any connection to the original
editor that it was built from. So that's a really cool use case. You know,
you might compile a birthday card generator, because you might run, like, some website that generates birthday cards for people.
And you can choose a template and choose a name, upload a photo, a birthday age, and make, like, greeting cards for people.
And you run that on a server, or compile it to WebAssembly and it runs client side for your visitors.
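As a very rough illustration of the transpilation idea, a linear chain of node functions can be pasted together into the text of a standalone Rust program. The `Node` struct, `transpile` function, and node names below are all hypothetical toys, not Graphite's actual Graphene transpiler, which is far more involved:

```rust
// Toy sketch: a linear "node graph" becomes Rust source code by
// emitting one function per node, then a main() that chains them.
// rustc could then compile the emitted text into a standalone binary.

/// A node is just a named unary operation in this toy model.
struct Node {
    name: &'static str,
    body: &'static str, // Rust expression over the input `x`
}

/// Emit a complete Rust source file that chains the nodes together.
fn transpile(nodes: &[Node]) -> String {
    let mut src = String::new();
    for node in nodes {
        src.push_str(&format!("fn {}(x: f64) -> f64 {{ {} }}\n", node.name, node.body));
    }
    src.push_str("fn main() {\n    let mut value = 1.0_f64;\n");
    for node in nodes {
        src.push_str(&format!("    value = {}(value);\n", node.name));
    }
    src.push_str("    println!(\"{value}\");\n}\n");
    src
}

fn main() {
    let graph = [
        Node { name: "double", body: "x * 2.0" },
        Node { name: "offset", body: "x + 1.0" },
    ];
    let source = transpile(&graph);
    // The emitted text is a complete Rust program rustc could compile.
    assert!(source.contains("fn double(x: f64) -> f64 { x * 2.0 }"));
    assert!(source.contains("value = offset(value);"));
    println!("{source}");
}
```

The point of the sketch is only the shape of the idea: node bodies become ordinary functions, composition becomes ordinary calls, and the compiler's optimizer then sees the whole pipeline at once.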
And disclaimer, this is not something that is currently built.
We do have the infrastructure in place, and, like, we prototyped it a little bit.
And yeah, I've built the basic flow, like, the basic feature set.
But there is more work that needs to be put towards it, because at that point we also need to have a runtime.
Because, again, the graph is functional, and if you want this to be useful, you need a runtime to accept inputs, etc.
All of those are features we need to build, which don't necessarily
interact with what we do in the editor, so it's extra things we need to build. And this is also part of the bigger scheme. So, as you mentioned, functions, like, nodes: we have two kinds of nodes. We have nodes which can be made of smaller nodes, so you can encapsulate, like,
node networks into node networks, display them as groups basically in the Blender terminology.
That's similar to in programming how you would make a function that calls other functions
as a way of abstracting features. So in the seven-segment display example I mentioned earlier,
you could make every segment just be its own node, which receives a number and then displays
the number. So you could abstract the functionality. Similarly, like, this is
how nodes are built.
Nodes either consist of other nodes,
like a node network, or they are basically just Rust source code,
or just a Rust function.
And when you usually use the editor,
we run an interpreter.
So the Graphite code is, we have these pre-compiled atomic functions,
all these, the proto-nodes.
They are very primitive, or at least
relatively primitive, and we then link them together at runtime to form one function we can call,
which is sort of the document you can run. And what we can do instead is that, instead of linking
pre-compiled nodes, we basically copy-paste together the source code, just call the
nodes in a sequence, and then allow the Rust compiler to compile it. And one of the great
avenues this allows us to potentially delve into in the future is that we can do JITing. We can,
like, if you have one branch of your node graph which you rarely touch but which is performance-critical,
we can ask rustc to compile this section of the graph into a node, and because it's
purely functional, we know it's the same behavior. We can just do that and replace the function
with this node, with this pre-compiled assembly. And that is something
we can relatively easily do on the desktop app,
but it's going to be a lot harder to do on desktop.
On web?
Yes, sorry.
But it's going to be harder to do on web
because the Wasm working group
is just not far enough along.
And yeah, it's going to take more time
until we can do that in the web.
But on desktop, we could do it right now, essentially.
And basically that allows you to optimize certain parts of your execution to be even faster
because you get to utilize compiler optimizations, LLVM optimizations, and get that to run even
faster getting down to like bare metal kind of performance for some of your operations.
And then, whenever you go, like, modify some of those pieces, we just fall back to using
the sequence of individual operations instead of the pre-compiled version, until
enough time goes by that we have time to run, in the background,
a compiler that will recompile that area once you stop touching it.
And that's also something that could be a future additional part of our business model:
basically, anyone who wants someday to get an even faster way to run their artwork,
if they choose to, they could basically allow it to, like, stream the changes that they're making to
their artwork to a server that we run, and we'd make pre-compiled pieces of code to substitute in those blocks
of execution and send those back to you, send you, like, compiled substitutions. And
that could speed up your workflow, make it run faster.
And just another way for people who want to,
if you don't want to be running that computation locally,
maybe you're on a Chromebook,
maybe you're on the web version,
maybe you're on an iPad, whatever that is,
you could ultimately just have things run a little bit faster.
And then not only the replacements,
but also rendering, we could pre-render certain areas.
Maybe you haven't scrolled down yet,
but we predict that there's a reasonable chance
you might scroll down or zoom out or play the animation.
We could pre-render specific frames of your animation,
or pre-render an area beneath where you're going to scroll to. And if you choose to support us,
basically, then we can provide you compute resources to pre-render that stuff, send it to you over
the internet, and let that run. Or also, we have the idea to, let's say you have a gaming
computer in your basement, but right now you're just working on your iPad or something. You could
run it in your own local network, run like a headless version of graphite, or maybe just keep
graphite open in the background. And because it's on your same network, you can use your own
local compute resources to run that same kind of computation.
Or the CI server on top of the wardrobe.
Yeah.
Right back there.
And the way the Graphene language is designed
should allow us to basically get zero-cost abstractions.
So theoretically, like, we're not there yet,
but we have not painted ourselves into a corner.
I made very sure of that, much to Keavon's dismay. But we can still, like, we can generate code that is going to be compiled into something that's very similar to what you would write if you just wrote it yourself.
And it basically becomes nearly optimal because of, like, how we build the language and design choices, which make a lot of things harder from time to time.
But we do always make sure that we don't limit our performance ceiling, essentially.
That we can always eliminate all the overheads, even if that means we have to think a lot harder about how we do things.
And there's one other thing we haven't talked about, which is also a planned feature.
We can just quickly mention it: collaborative editing.
That is also something we want to support.
So we do have some design ideas for how we can do the conflict resolution,
but essentially, like, live editing together, like multiplayer editing.
Like Google Docs or Figma style.
Right, right.
Yeah.
And also, potentially, if two artists work on the same document and make different changes,
that's a question of how we can merge them together, ideally without users having to
resolve merge conflicts.
Especially if two of them happen at different times,
or simultaneously without a network connection between them.
So it's not live, it's offline.
You can combine those together with,
basically, we're looking into CRDTs as the approach to that.
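To give a flavor of why CRDTs fit the offline-merge problem, here is a minimal sketch of one of the simplest CRDTs, a last-writer-wins register: two offline edits combine deterministically without manual conflict resolution. This is purely illustrative; Graphite's actual design is undecided per the discussion, and a full document would need much richer structures than a single register:

```rust
// Minimal last-writer-wins (LWW) register, one of the simplest CRDTs.
// Merging is commutative, associative, and idempotent, so two peers
// that exchange states in any order converge on the same value.

#[derive(Clone, Debug, PartialEq)]
struct LwwRegister<T> {
    value: T,
    // (logical timestamp, replica id) gives a total order, so
    // concurrent writes resolve the same way on every peer.
    stamp: (u64, u32),
}

impl<T: Clone> LwwRegister<T> {
    /// Apply a local write tagged with a (timestamp, replica) stamp.
    fn set(&mut self, value: T, stamp: (u64, u32)) {
        if stamp > self.stamp {
            self.value = value;
            self.stamp = stamp;
        }
    }

    /// Merge two replica states: the higher stamp wins.
    fn merge(&self, other: &Self) -> Self {
        if other.stamp > self.stamp {
            other.clone()
        } else {
            self.clone()
        }
    }
}

fn main() {
    let base = LwwRegister { value: "red", stamp: (0, 0) };

    // Two artists edit the same property while offline.
    let mut a = base.clone();
    a.set("blue", (1, 1)); // replica 1 at logical time 1

    let mut b = base.clone();
    b.set("green", (2, 2)); // replica 2 at logical time 2

    // Both merge orders converge on the same state: no conflict dialog.
    assert_eq!(a.merge(&b), b.merge(&a));
    assert_eq!(a.merge(&b).value, "green");
    println!("merged value: {}", a.merge(&b).value);
}
```

The appeal for a document editor is exactly this convergence property: replicas can diverge offline and still merge automatically, though choosing *what* wins (and at what granularity) is the hard design work.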
And, like, if you ask,
there's a whole list of things and potential features we could talk about.
As we mentioned in the beginning, we are very ambitious as a project.
But yeah, I think at some point we have to stop, because the video is going to be very long otherwise.
No, I am really excited to see where this project goes.
Both of you, I know you said that Keavon's kind of the visionary here, but both of you are
clearly, like, very excited about this project and have a lot of ideas for what you can do
with it. I really hope that
five years from now, I'm not making a
WordPress-style video about this project.
So,
hopefully you guys keep doing really cool stuff
going forward. And
this really can
be, like, you know,
the Blender of 2D graphics,
the OBS, the Godot,
you know, any, any of
these tools, which are
industry standard tools, even if
like you were saying, this doesn't
replace Photoshop, but becomes
something in that pipeline.
You're not trying to do everything Photoshop is
doing. You're trying to fit within the pipeline
and provide
a reason for it
to exist.
Exactly. Yeah.
And someday we can even
perhaps add support for opening Photoshop files,
modifying Photoshop files. So we could
allow you to take your Photoshop document,
do templating, like changing specific
parts of your file with different text
or things. So maybe you had a different
artist creating some artwork.
You could have our pipeline do the templating part for you. That's just another possibility
that could be kind of interesting to look into. Yeah, when you make such a generalized engine,
you know, the game engine of graphics, it just opens up all possibilities. And of course,
we're not going to write everything. We're going to have an asset store, which will be kind of an
ecosystem where a lot of the nodes, we will eventually have all of our nodes actually distributed
through the asset store. So we update Graphite independently from how we update the individual
nodes, and then you can have, like, versioning for the nodes and everything like that. So the nodes
compared to the engine and the editor and that way it lives alongside and also we have no we have no
special capabilities that other authors don't have so we're not making anything special in fact
as we continue to build more and more of graphite out of its own tooling we're going to start
basically having it so we can have it so you'll be able to like write more and more of its own
UI eventually replacing the CEF based UI with its own UI and then we'll have it so we can start
replacing our own tools like the tools that allow you to operate on the graph with kind of like an
API provided through its own graph system and that way all of the tooling and all of the all of the
custom UI additions everything like that is distributed as nodes through our node marketplace slash
package manager and make it so the extensibility of the program because it's written itself
in its own engine makes it so it kind of filled up yeah basically opens up all possibilities so that
we aren't doing anything special as authors of the application because other authors can do the
same things that we're doing for extensibility maybe they want to turn it into a cat application
maybe they want to turn it into a digital audio workstation they could create a suite of nodes
and a suite of tools that are published as nodes,
and put those into a big family
and distribute that as a package.
And that gives, like, whole new entire workflows
or entire capabilities to the software
that could be distributed commercially
by someone who wants to specifically sell some package
for 3D CAD or for, you know, whatever like that,
because the engine is just general enough
to support anything like that.
Or it's, you know, some other open source project
that might be just focused on building
this particular workflow for digital audio workstations
or something like that.
Okay, so if people want to support the project, or they want to get involved,
where can they go?
Yeah, definitely go to the website.
At the moment it's graphite.rs, .rs for Rust, that's the file extension for Rust.
But yeah, we're looking into maybe changing the domain someday. But it'll always be,
you can always go to that domain and it'll bring you there. But we're hoping someday to get
graphite.org, but it's owned by another organization
that may or may not eventually be willing to give it to us or sell it to us or something.
But we'd love your money to make that kind of thing, and other things, possible as well.
So, you know, we really are trying to make this the next blender, the next Godot, the next Firefox,
you know, all the really successful open source projects, basically.
We want to make it live amongst those as really competent alternatives that make open source,
not just the viable fallback, but the viable go-to.
And if you want to, like, learn about new features, you can look at the demo artworks.
That's usually a good place to start, to look at, like, try to understand how they are built.
We do have blog posts, which are always, like, good to read.
We had an update video about new Graphite features we integrated in the last months.
And if you want to get involved, you can always join the Discord and just hit us up.
I'm TrueDoctor, and Keavon.
And yeah, we'd love for you to get involved.
If you have a feature you want to work on,
if you have a node that you think would fit well into Graphite,
just let us know, make a PR.
And especially for, you know, existing, like senior level engineers,
people who have existing programming experience
and are good at collaborative working with a team.
We have specific need for people to take over systems
that we have thus far
not really focused on, because we just don't have the time to focus on specific systems,
like the user experience of editing the node graph so that all the nodes just magically do
what you'd expect them to do, like update their positions and update the way that you drag
wires around and render thumbnails and all that kind of stuff.
So that's a system we'd love to have a new owner for.
We would like to have someone take over all the graphics algorithms that can be used to modify
geometry, like computational geometry kind of algorithms so you can apply like a warping effect
to different graphics.
Anyone who's really a hardcore graphics engineer
could work on different photo processing algorithms
or do research into exploring
how existing techniques are done
for image operations.
There's also
the entire language side of things.
We didn't touch too much on that,
but essentially Graphene
is its own programming language
that can transpile into Rust,
and we want someone to work on type system theory
and get really advanced with that,
like how we can start tracking
how perhaps different values can change in their possible ranges, so we can track, like, what's the
minimum and maximum possible value in this range. Basically, a lot of type system theory that gets
really advanced. We need owners for that who can handle those kinds of problems for us. All the tooling,
all the different rendering parts. There's basically no limit to the number of people who can
take ownership of systems and really get involved with the team. We had Timon in here;
he joined, like, three months ago, and he's already become a core team member, because he was just
quite good at what he does and managed to really take ownership of a whole new system, which is building the desktop app.
So it really is something we make really viable for people to get involved in. We make it quite easy to
follow our documentation and get working on things. And especially if you have existing experience as sort of a mid-level or senior-level engineer,
you can really quickly integrate with our team and become a core team member within a few months, as we've had with Timon.
Okay.
Yeah, is there anything else you would like to direct people to?
Any other links to mention?
Mostly just the Discord join the community.
We would love to have more users helping other users.
I would say at the moment our Discord has been kind of developer focused.
We've used it, of course, for development,
but we would like to have more actual users using it on a regular basis
and helping other users answer questions, show how nodes work,
be a learning resource for one another.
and kind of hopefully have the channels involved in general user discussion
be far more active than just the development channels.
And that way we can continue to grow the project as a result
and have a more, just a more accessible community
that we as developers don't have to take time to answer questions for users
because we have enough users doing that for us.
And as a last thing, create art and share it.
Like, show it to us. We love seeing what you guys create with Graphite. And you can also share it with others
and let them know that Graphite exists. Yeah, because, like, Brodie didn't know, so do your part.
Yeah, share it with, you know, other YouTube channels that you follow, that kind of thing.
Basically, make it so that we don't fall under the radar of most people these days,
because, you know, we really have been flying under the radar, I would say, and it's good and it's bad.
But ultimately there will be a point at which people really,
really start being aware of it, and I think we're probably about ready now to make that happen.
Yeah, as we're about to transition from alpha into beta. I think our goal is in the next few
months to switch over from alpha to beta, which basically means we're no longer experimental
so much as we are simply early on, but moving towards updates. Exactly. Yeah, it's a Rust project,
way delayed yeah yeah yeah please do take take a look at the web
website, take a look at the different social media channels that we run, and definitely get
involved in the community, either as a user or as a contributor or anything in between.
If you are not a programmer and you are the kind of person who wants to start a YouTube channel, and you want to make a video on every single possible feature that Graphite has...
Yes. That would be awesome.
Make the tutorial channel.
Yes, yes, yes.
And in the limited times when our project has ever made it onto YouTube, there have been three or four videos, and every single one of those has largely overperformed in the YouTube algorithm. So there is actually quite a lot of room to be seen, and we don't have the time to do that ourselves, so that would be great. Ideally in coordination with us, so we can let them know what things are. But yeah, coverage generally helps.
Yeah, and if you want to do that, that would be great. Or if you're a video producer and you want to actually get involved with our team, or you do some sort of, like, social media or content management, that kind of stuff, and actually get involved taking over what takes a lot of my time away from design and from programming and from management: I've got to spend a lot of time writing blog posts, or writing and editing videos, that kind of thing. Because either we have nothing, which has been the problem thus far, or occasionally, like as of recently, our September update video. Be sure to watch that, it goes into a lot of detail and it's actually really good. But that kind of thing took a week of my time, and I could have also made a mid-size feature in that same amount of time.
So if we could have someone joining our team to help produce videos,
that would be a really, really awesome way to help if you're experienced.
I should be clear that we're not really looking for someone who's just barely beginning to get into this kind of thing.
We're looking for existing experienced professionals in that kind of area.
But it's a great way to get involved and help the open source creative ecosystem really thrive and grow.
Awesome.
I am very happy we did this. This was a really fun episode.
I am, as I said, I'm very excited for where this project goes.
I wish you both and everybody else on the team, the best of luck.
You're, as I said, very clearly inspired here and have a lot of big ideas, and I hope they go well.
And I am, I'm excited for the stable document format,
and I'm excited for the desktop applications.
Those two things are really big for me,
and I could legitimately see myself, in the current state it's in, just swapping to it for my main production stuff.
Yeah, definitely for graphic design. And I think in the next year especially, that's when we're going to begin with raster, you know, actual photo editing, something that will eventually be in the style of the workflows you might use Photoshop for these days. I guess one last disclaimer: raster editing has been a bit neglected, but we are working on fixing that and making it more of a priority.
There's a bunch of technology that has to be built so that it can run well: you know, the pixels require a lot of data, and that data needs to live in the correct location, whether that's CPU memory or GPU memory. Getting that all to be managed correctly, and to run on the right compute devices at the right time and in the right state, that is all complex, and it's been a big part of the Graphene development. But that's heading forward. We're getting close to that no longer being a blocker, and we do intend to then go full speed ahead next year on raster as being kind of one of the major focuses.
As well as animation. That's going to be another big focus. We'll eventually have character animation with, like, skeleton deformation and all that kind of stuff.
We can do another one of these at some point in the future, we don't have to get into all of that now. I would love to have you guys back on, you know, a year from now, see where the project's at. And yeah, if you want to do this as, like, an every-so-often thing, I'd be more than happy to. I'm very excited about this project.
Yeah, looking back a year, and then looking back a year previous, like, I was previously here two years ago, and so much has changed in these two years. A year ago, and then the year prior, it was, like, exponential, going from just a toy that seemed kind of quaint and interesting and unique to now. Like, maybe if you ran across this a couple of years ago and you tried it out briefly, like, oh, I can't do anything without it just crashing or the performance being terrible. Try it again now, because it's actually in quite a good state, I would say, for its intended use case of vector editing. As long as you're not importing big images.
Or, yeah, on desktop that works.
It works, yeah. We can import big images, and it's almost good performance these days. But on desktop, everything works on desktop in Linux, and that's great. And we'll let you know once we have the desktop app out and you can try it.
Yeah. We're also hoping to get to recurring donations of a thousand a year, sorry, $1,000 a month, by the end of this year. So in the next two months, that will be our goal, to really keep pushing the growth targets there higher. So I'm not just losing money every day, which would be nice. So anyone watching this who really wants to have the open source ecosystem move forward with its graphics, and not just be an alternative but, you know, a proper destination, that would be a great way to help make that happen. So thank you.
Awesome. All right, just one last time: anything else? Any other links you want to mention? We talked about the Discord, talked about donations.
graphite.rs/donate.
Awesome. Okay.
My main channel is Brodie Robertson.
I do Linux videos there six days a week.
I haven't got a Graphite video in the pipeline yet, but I should add it to the list.
Because, yeah, I was playing around with it while you were talking about some of the features before, and I hadn't used it before, and it kind of just made sense.
Like, I literally hadn't read any of the documentation or played around with it before, and it just clicked. So I guess, like, that means you've done something well there.
UI design is my passion.
Not as the meme answer this time.
I have the gaming channel.
I do stream there twice a week.
Right now I'm playing through Yakuza 6 and also Silksong.
If you're watching the video version of this, you can find the audio version on basically every podcast platform. It is Tech Over Tea.
If you are wanting to find the video, it is on YouTube, Tech Over Tea once again.
I actually don't have any tea here, I have a bottle of water. No, I don't even have my cup here.
Anyway, I'll give you guys the final word. What do you want to say? How do you want to sign us off?
have fun
create art
Make awesome things. Be creative.
awesome
