Future of Coding - The Aesthetics of Programming Tools: Jack Rusher
Episode Date: July 26, 2019

Ivan Reese guest hosts. I've been intimidated by Jack Rusher from the first blush. I mean, he's wearing a high-collared fur coat and black sunglasses in his Twitter pic, and his bio includes "Bell Labs Researcher". So when tasked with choosing a subject for my first interview, I immediately reached out to him, leaning in to my nervousness. His reply included the detail that he's "generally hostile to the form" of podcasting. Terrifying. When we talked, it was about Lisp — several flavours of Scheme and Racket, Common Lisp, Lisp machines, Black, Clojure, parens of all stripes. It was also about aesthetics and graphic design, how ignorant typical programming tools are of the capabilities of the visual cortex, and how to better tap them. This podcast's streak of discussions about Coq, miniKanren, TLA+, and Alloy continues, with the addition of QuickCheck and the like. Jack presents his work on a literate editor for Clojure called Maria.cloud, an environment that makes a number of unusual and interesting choices in both design and implementation, reaching for an ideal blend of features that affords both instant beginner enthusiasm and unrestricted expert use. We pay our respects to the phenomenal red carpet that video games roll out to new players, inviting them into the model and mechanics of the game with an apparent ease and apt ability that should be the envy of programming toolsmiths like us. The show ends with Jack sharing an excellent collection of plugs, ranging from academic papers by the relatively obscure Stéphane Conversy, to the aesthetically lush programming tools pouring out of Hundredrabbits' Devine Lu Linvega. I am no longer terrified of Jack's persona. Rather, I am now humbled by his towering expertise and the wildly varied accomplishments of his career, and it was a thrill to get to tour them in this interview. Best quote of the show: "A kind of grotesque capitulation to sameness." Damn, Jack!

Links

Jack Rusher is our esteemed guest. He is on Twitter, Instagram, and SoundCloud. Applied Science is his consultancy, and Maria.cloud is their beautifully designed literate Clojure editor. Ivan Reese hosts. He's on Twitter, works on educational media, is making a visual programming tool, and plays 100 instruments — badly. He started life with HyperCard and now loves Max/MSP. Repl.it is our sponsor. Email jobs@repl.it if you'd like to work on the future of coding. Complex Event Processing is a bit of technology Jack helped commercialize. ClojureVerse is where a discussion of Luna led to the Visual Programming Codex, based on the History of Lisp Parens by Shaun Lebron. QuickCheck, miniKanren, Datalog, Black Scheme, and Oleg Kiselyov are touched on. Out of the Tar Pit has its mandatory mention, and then Chez Scheme saves the day. I wanted to link to the Maru project, but the author Ian Piumarta's website seems to be down and I could find no other canonical reference. There's some discussion on Hacker News and such; if you know of a good link, I'd love a PR. Scheme Bricks and Media Molecule's Dreams are interesting touchstones on the road to future visual programming languages. Ivan has an affinity for Pure Data and Max/MSP and vvvv. When talking about tools for beginners versus experts, Rich Hickey's Design, Composition, and Performance is invoked — and poor Shostakovich. Jack's main is Maria.cloud, named in honour of Maria Montessori. SICP gets a nod. Maria has proven useful at ClojureBridge.
Matt Hubert [Twitter] created the Cells abstraction that Maria was eventually built atop — it's similar to ObservableHQ. Video games like Steel Battalion, The Witness, and Dead Space have strong opinions about how much, or how little, visual interface to expose to the player. Complex 3D tools like Maya and 3D Studio Max are GUI inspirations for Ivan, where Jack and Matt prefer simplicity, so much so that Matt wrote When I Sit Down At My Editor, I Feel Relaxed. Dave Liepmann is the third leg of the stool in Applied Science, Jack's consultancy. Maria originally had a deployment feature like Glitch. There's a great talk about Maria by the Applied Science trio, containing a mini-talk called Maria for experts by Jack. Pharo is an inspiring modern Smalltalk. Fructure is a wildly cool new structured editor, and its designer Andrew Blinn is fantastic on Twitter. Extempore and Temporal Recursion by Andrew Sorensen offer some interesting foundations for future visual programming tools. Sonic Pi and Overtone are lovely audio tools by Sam Aaron, widely praised and deservedly so, and everyone should back Sam's Patreon. A Visual Perception Account of Programming Languages: Finding the Natural Science in the Art and Unifying Textual and Visual: A Theoretical Account of the Visual Perception of Programming Languages are obscure but beautiful papers by Stéphane Conversy. Aesthetic Programming is one of Ivan's favourites, and the author Paul Fishwick just so happened to teach Jack's graphics programming class at uni. Orca is a mind-bending textual-visual-musical hybrid programming tool by Hundredrabbits, who are Devine Lu Linvega and Rekka Bell. Notwithstanding that they live on a sailboat(!), they do an amazing job of presenting their work, and everyone in our community should take stock of how they accomplish that. Ableton Push and Ableton Live are practically state-issued music tools in Berlin. (Not to mention — Ivan edited this podcast in Live, natch.) thi.ng and @thi.ng/umbrella are Jurassic-scale libraries by Karsten Schmidt, who wrote blog posts about Clojure's Reducers in TypeScript. Finally, Nextjournal are doing great work with their multi-lingual online scientific notebook environment.

The transcript for this episode was sponsored by Repl.it and can be found at https://futureofcoding.org/episodes/041#full-transcript

Support us on Patreon: https://www.patreon.com/futureofcoding

See omnystudio.com/listener for privacy information.
Transcript
Hello, and welcome to the Future of Coding. This is Steve Krouse.
So, today in this episode, we are going to switch things up a bit.
I was at dinner with Aidan Cunniffe and Dev Doshi a couple of months ago,
and they had this wacky suggestion that I bring on a guest interviewer
to take some of this podcast work off my plate and at the same time scale it up so that
there can be more conversations from different perspectives showcased on this same RSS podcast
feed. And so I thought it was a great idea. And one person immediately came to mind who would be
a wonderful guest interviewer, Ivan Reese, who has been a listener of this podcast and part of
the Future of Coding community,
I think since the beginning, at least that's how it feels to me.
I can't remember how we originally got connected, but it's always felt like he's been there as a
staunch supporter of these efforts.
He's always been a really positive and encouraging voice and someone that I can
always count on for sharp feedback and a
thoughtful perspective. I've somehow come to really trust his taste on things. It's kind of a
subtle thing. I can't put my finger on, but I find that his aesthetic is very similar to my own,
which makes him a perfect person to be a guest interviewer on this podcast. And you actually
have heard from Ivan before, you just didn't realize it.
If you've enjoyed the increased audio quality in the past couple of episodes of the podcast,
you have Ivan to thank. He's been in the background helping me with choosing the right
microphone and getting it set up correctly. And sometimes when I set it up incorrectly,
he'll help me salvage the audio quality and get the best sound
that's possible, you know, given how I screwed things up. So it's been a real labor of love on
his part. So thanks, Ivan, for that. So without any further ado, Ivan Reese.
It's going to be really tough to live up to that glowing introduction that Steve gave me, but I will do my best.
My name is Ivan Reese, and I'm really excited to be on the podcast interviewing Jack Rusher.
But before we get to the interview, I will give you a whistle-stop tour of my programming career,
just so you know what sort of a perspective I'll be bringing to the interview. When I was about five years old, I started making silly
interactive animations and games in HyperCard to annoy my sister, and I've been making silly
interactive animations and games ever since. I work for a little education media company,
and I was hired originally as a 3D animator there, though my role quickly expanded to include programming.
So I now make all of the frameworks and tools for the other artist programmers in the company.
My current project is a visual programming language for building interactive animations,
and if you follow me on Twitter, you know that I tweet insufferably about it.
Outside of programming, my main hobby is music.
A curious tidbit is I have a collection of over 100 different instruments, all of which
I play very badly.
And having a hobby as a musician and being a programmer means that I've had a lot of
fun exploring the intersection of those two things.
And so Max/MSP was probably my
introduction to visual programming, and I still have a very soft spot in my heart for it.
So enough about me, let me introduce Jack Rusher, our guest on the show today. Jack is a programmer
living in Berlin that I know via the Clojure community, though his interests in Lisp go way
beyond that. Jack has a fascinating history
of projects that we talk about, and then we head fearlessly off into the weeds talking about logic
programming, model checking, towers of abstractions, history and future of visual programming languages,
the aesthetics of programming tools, the principles of visual design that underlie all of our programming
tools, whether we realize it or not, his consultancy in Berlin called Applied Science, and the
programming environment that they built, a literate editor for teaching Clojure to new programmers, called Maria.
Just before we get into the interview, I have a message to bring you from our sponsor,
REPLIT. They sponsor the transcript of the show, which you can find at futureofcoding.org
slash episode slash whatever episode number this is. REPLIT, with the URL REPL.IT,
is an online REPL for over 30 different languages. It started out as a code playground,
but now scales up to a full development environment
where you can do everything from deploying web servers
to training ML models, all driven by the REPL.
They are a small startup in San Francisco,
but they reach millions of programmers, students, and teachers.
They're looking for hackers interested in the future of coding
and making software tools more accessible and enjoyable.
So email jobs at repl.it if you're interested in learning more. Yeah, so hi, Jack.
I'm interested in your background as a programmer, especially in your Twitter bio. You kind of tease that you had a past life as a kernel hacker at Bell Labs? Those are actually two separate entries.
So when I got started with programming,
well, I started programming in the 70s
on little microcomputers, 8-bit microcomputers.
But when I went to university,
I immediately was launched into a world of VMS and Unix,
specifically BSD Unix,
running on ridiculous refrigerator-sized machines
like a Gould PowerNode 9000, which
is a museum piece these days, as are all of the machines I was using then.
What did that entail?
Well, in the beginning, the problem was that the software we were running our machines
with was not particularly reliable, so we had to fix it.
And so a great many Unix users in the mid-80s were also, you know, fixing not only
user land utilities, but kernel things. And I went a little deeper than most and did a lot of work,
particularly in file systems, virtual memory, and the scheduler in several different flavors of
Unix, but especially BSD variants. And ultimately in my professional life, wrote a few different
very small Unix-like operating systems that were used in embedded systems.
That's such a far cry from the programming experience that I think most people have today, where they're working on top of such a gigantic stack of abstractions that actually going down to that level is something that people do more like a spelunking kind of expedition rather than something that's necessary to keep the wheels turning.
Mostly hobbyists for fun these days, yeah. And my GitHub repo even has a tiny bootloader for
starting your own operating system project on x86 compatible machines that I used in a class
that I taught in the early, I think around 2010 in New York. Yeah, so in some sense you're still
kind of doing that to this day. Yeah, although my professional work at the moment is absolutely opposite end of the tower of abstractions.
Yep, and I'm hoping we'll get into that a little bit as the conversation goes on.
So that's the kernel hacking part. What's the Bell Labs part? Oh, yeah. So for a while, I was at the labs, and our team was working on a thing that
was a, it's called, the technology is called streaming databases. And this is sort of taking
the conventional idea of a database, which is that you have a big pile of data that already exists,
and you ask questions about it in the form of queries, and it delivers you answers based on
that pre-existing data. In the streaming database context, you do not yet have the data, but you do know what
your question will be. And so you make a query in advance, and then as the data comes in, you're
given something like a materialized view or perhaps a trigger on some sort of alert or whatever
based on your query. And in our use case, because it was AT&T, it had to do with data networking.
So we built a system for monitoring data traffic
at a very, very large scale, because AT&T,
that would run on commodity hardware
and would respond to SQL-like queries.
The trick here was that we used our own modified
NetBSD kernel for these machines,
and then we wrote some code that would take these queries and do a complexity
analysis and take the least complicated parts and compile those into a firmware update that we would
hot flash onto the network card so that it could do a lot of the upfront processing, and then a
kernel module that would do some things in kernel, and then a user space portion that could often be
scripted in Perl because by that time there wasn't that much data left, because we'd already pruned it in these faster layers farther down the hardware stack.
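To make that standing-query idea concrete, here is a minimal Clojure sketch. It is purely illustrative and not the AT&T system itself; the event shape and the names suspicious-traffic and on-packet are hypothetical.

    ;; A minimal sketch of a "standing query": the query exists before any data does.
    (def suspicious-traffic
      ;; the query, declared up front as a transducer
      (comp (filter #(= (:proto %) :tcp))
            (filter #(> (:bytes %) 1000000))
            (map #(select-keys % [:src :dst :bytes]))))

    ;; something like a materialized view, maintained as events arrive
    (def alerts (atom []))

    (defn on-packet [event]
      ;; `into` runs each arriving event through the pre-registered query
      (swap! alerts into suspicious-traffic [event]))

    (on-packet {:proto :tcp :src "10.0.0.1" :dst "10.0.0.9" :bytes 2000000})
    ;; => @alerts now holds the matching traffic

The point is only the inversion: the question is registered first, and the answers accumulate as the data flows past it.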
What era about were you working on that?
That would be not quite 20 years ago.
Oh, okay. Yeah.
So, you know, well in advance of the move to NoSQL databases,
where you have to do that similar sort of planning out all of your queries up front,
and then kind of baking that into the data as it comes in,
rather than the SQL style, collecting all the data and then querying it after the fact.
Yeah, this was the first time that I had dealt with that particular situation in that way,
but a few years later, around 2005 or 6, I did a startup where we built a thing that kicked off the discipline called complex event processing, or rather the commercial phase of the discipline called complex event processing, which was the same sort of idea but applied to various industrial sectors. We were a company called Aleri, which was eventually purchased by Sybase, and the product is still sold by SAP today. And it ended up being used for all sorts
of things. It was a data flow system, actually. So this is a thing that has some crossover with,
I think, the general interest of the listeners of this podcast, because it had a programming
mechanism that could be done through a sort of boxes and wires kind of approach,
through a visual programming environment, or through a code approach, or through a hybrid approach,
where the boxes were written in a programming language of our own creation,
and then wired together in the visual environment.
And at one stage, this was used to handle, I think I can say all of this without violating any contracts that we signed at the time,
about a third of all foreign currency transactions in pounds sterling,
most of the anti-fraud activity for a couple of very large credit card companies
who will remain nameless, and so on and so on.
So it ended up being quite effective as a system for detecting patterns in data on arrival
and allowing very nimble reactions to those things.
And what was the name of that technology?
The product is the Aleri streaming engine.
I was more thinking the sort of the generic term,
the complex event processing.
Oh, complex event processing.
It's a terrible name.
We did not coin the name.
It comes from the research community
and it just makes it all sound much more scary than it is.
And I don't imagine, beyond the fact that anything to do with flowing information from one place to another is either continuous or discrete, and when it's discrete, we like to call it an event... So in that sense, I think there's probably no relation to the event streams we have in web programming these days, or event sourcing rather. Or is there some relation there?
Well, in the sense that if you have discrete events arriving and you route them through some sort of data flow architecture, then you have something that's very like reactive programming. Yeah. And so in our case, my two primary colleagues,
who were really, really excellent designers, programmers, and so on,
who were working on this with me,
one of them was one of the co-authors of the original paper adding an object system to ML, which thus kicked off OCaml.
And he was the primary designer of the language
and wrote the little VM.
And so we had these little language runtimes with arena allocators that would blow away any storage that was allocated in the course
of the thing so that the user wouldn't have to worry about storage allocation and so on.
And with a great deal of hand optimization to the little VM. And they ran really, really well.
And so more expert users could write these little programs in these boxes and less,
shall we say, less expert users could then wire them together very easily. And you would have different people
doing different sorts of tasks at the customer installations. Cool. And when you were describing
the different ways that you could program the system, was I correct in understanding that there
were sort of three different levels? You could write traditional code, you could write little
pieces of traditional code and then wire it together with some sort of visual language.
And then was there also entirely a visual language or did I misunderstand?
Yeah, there was an entire visual language. In fact, that's how we started because we had believed,
or actually it was my fault, I had believed initially that that would be the primary way
that our users would like to interface with the system. In retrospect, I feel I was wrong about
that and I should have built an interface that looked more like a spreadsheet. The more
like Excel it would have looked, the more appropriate it would have been to the users
we ended up having. I just didn't know what verticals we would end up being successful in.
Right. Yeah.
But in finance, everybody loves a spreadsheet. And with age and experience in user interface
design, of which I did not have that much at the time, I've come to realize that pre-post image kind of table views would have been an excellent way to show people how to do this kind of thing.
Interesting. I might be one of those people though. I'm imagining what I think it is. A pre-post image, is that
where you have one half of the view is here's the data coming in and then in the center
you have here's the transformation we're applying and then on the other side you have here's
the processed result?
Yeah, that's exactly it.
Okay, cool.
That would have been the much better way to do it.
Right, right, right, right. Yeah, it's curious how, as people who are fond of visual programming in all of its different forms, we keep struggling to do something that is, you know, an order of magnitude more effective than what VisiCalc seemingly came up with right out of the gate. It's kind of frustrating how they hit upon something so seemingly fundamentally powerful very, very early on. And yet with all of our playing around with
different ways of visualizing what the computer is doing, it's hard to beat just plainly showing
data and making that data and the different intermediate forms that it takes along the way
very, very visible. That's
hard to top. Yeah, it's extremely powerful. Anything that taps into our evolutionary heritage,
I think, is very strong. And that's been known at least since people like Minsky and Papert were,
you know, trying to teach children with physical, mechanical turtles and things, is that
bringing in our intuitions, our physical intuitions, is one of the strongest ways to give us a jumpstart on learning a new domain.
Yeah, and those physical intuitions, I'm assuming you're referring to things like spatial reasoning and something like a grid.
A grid is a very knowable thing that we find in real life.
And so bringing that kind of a structure as a way of dealing with hierarchy or as a way of dealing with relationships. It leverages a lot of what we bring to the computer.
Absolutely. And if we push past that, if we look into other disciplines and what they've learned,
one set of disciplines that's done very well is graphic design, and data visualization for that matter, which have determined what axes there really are on which you can vary your presentation in order to sort of recruit the user's visual cortex to do most of the work before their higher faculties begin to try to figure out what's happening.
And so you have things like relative contrast, relative size, grouping by position and space,
all of these different things.
And I think in
programming, actually, we make shamefully little use of most of these ways of conveying information
to our users, in this case, users being the programmers who are using our programming tools,
because the tools are themselves, as you are well aware, interfaces.
Yeah. And those fundamental design principles: you pick up any sort of, you know, first-year college design textbook and you won't be able to get very far into it before you encounter the fundamental building blocks of pretty much any cultural heritage as you move through it: like you said, spacing, grouping, variation in intensity and hue. And what they don't even touch on, and something I think we grossly overlook in computing, is that those same fundamental building blocks are available in motion.
Absolutely, yeah.
Yeah, entry-level design.
It's not for pencil animators,
so they're not learning about easing
and temporal repetition and anticipation
and those sorts of principles as well.
And that's just, for me personally,
looking at a lot of the programming interfaces we have,
not only are they built with a sort of willful
or accidental ignorance of design principles,
but there's also this entire other domain that can be explored and leveraged to convey meaning
to the programmer that's just not even being touched on. Absolutely. Absolutely. It is a bit
of a tragedy, really, that the editing environment continues to be what it is. People continue to write new editors for programming languages
that offer you a grid of fixed width type.
Yeah.
It's just, that's it.
The paradigm is monospaced type
and ooh, we'll maybe add some color
for the syntax highlighting.
Yeah, no, that's I think what excites me so much about the community of this podcast: so many people here are looking at different ways of getting us from that fixed grid of colored text to something else, and the something else is as yet undetermined. It's kind of neat to see all these pieces, from design fundamentals or motion or aspects of culture, being put together in different combinations to see which of them resonates most strongly with what human beings bring to the programming experience. It feels like it's unfamiliar territory, and yet
we've been exploring this very territory since the 1960s, if not a lot earlier, depending on how you want to
frame it. So you have a remarkably sharp sense of aesthetics. That's something that I've noticed
from your profile picture on Twitter to the way that you use your hobbies in programming and elsewhere and tie them together to make
generative art to being a musician, your website, like everything that I've seen you do
that requires you making your presence known on the internet is rooted in what I feel is a very strong sense of taste and cohesive sense of aesthetics.
That's why I wanted to interview you on this show,
because I feel like that permeates everything you do.
I imagine that it does.
And so I'm curious to see when it comes to the choices that you make in programming
or your affinity for certain technologies, how do you see your own sort of
sense of aesthetics guiding those choices? Well, first I have to say thank you.
And then I would say I have multiple backgrounds. So I originally was educated in physics and
shifted to theoretical computer science because I was seduced by certain early mid-century findings in that discipline,
and then spent a long career doing engineering with software and even a bit of hardware.
So when I'm doing my own programming, there is a strong pragmatic perspective for me
where I will use whatever tool I feel is the one that is going to help me achieve the
objective well and so on. On the other hand, like everyone, I am inflicted with my own set of
aesthetics that drive me towards certain things. I really, really like scheme, for example, and the
closer a language is to scheme, generally the more I like it. This is one thing. I believe that it is always better
if you can to tell the computer what to do rather than explain to it the minutiae of how to do it.
I find that more elegant and more pleasant and also faster and more practical in most cases.
Is that faster in a performance sense or faster in a just...
In a human effort sense, yeah.
And Scheme specifically, or Lisps as a family, or Racket, or...?
Well, I've had very enjoyable programming experiences in Scheme and Racket and in Common Lisp and in some Lisp-like dialects of my own construction over the years, so yeah, generally I do enjoy parentheses, I would say. But beyond that, of the group of them, I feel like Scheme hit upon a distilled and kind of crystalline vision that I respect a great deal. And so when I would use Common Lisp
in the past, it was primarily because it had usually made available at some stage some kind
of very nice environment for programming.
So if you have listeners, for example, which I doubt with your community, who aren't familiar with Common Lisp environments: they are typically extremely humane
in the sense that if you have an error, you can pause and fix the thing that broke and continue
from where the error occurred and things like this that are almost unknown in most other programming environments.
So this is very, very attractive.
But Common Lisp, the language, it's a mixed bag.
There's some very nice things about it, and it also has a lot of cruft,
and it doesn't have the kind of crystalline purity that Scheme does.
I think the first place we ran into each other was on the ClojureVerse forum.
That was, I think, in a thread about the Luna language slash environment that had just been announced, where they're creating a very Haskell-like pure functional language and also creating a very nice sort of node and line based visual programming environment, and the two are two representations of the same underlying model. We were looking at that environment and talking about the various merits and projects that had tried this in the past, and some people started saying, hey, we should collect all of these projects into one place, and you suggested that somebody pull on the model of Shaun Lebron's History of Lisp Parens, which is this wonderful GitHub repo he made, sort of reviewing some of those beautiful environments that have
been made for Lisp throughout history, looking at not just the design of them and the choices that they've made, but how they were reflections of the era in which they were created, in the way that everything that is created is a reflection of its time. But there was this period through the 70s and maybe into the early 80s where it feels like the spectrum was wider, like there were wilder ideas being tried, despite the fact that computing power was so much more limited back then. The freedom to do whatever you wanted that we have now wasn't there, and so the scope of human ambition was curtailed by the very, very restrictive ability of the computers at the time.
I feel like we haven't expanded the scope and breadth of our imagination for editing tools
in lockstep with the advancement of computing power since then.
And I'm wondering if you also feel like the range of exploration has, if not narrowed, then at least stayed within a kind of a known bounds.
How do you see our programming culture kind of moving forward, compared to how the technology that we built that culture on top of is moving?
So I think one of the things that happened historically
was that the machines that had the nicer programming environments were extremely
expensive machines that essentially existed in the future relative to what people could afford
at the time. And what I mean by that is, for instance, the Lisp machine, which I had the
pleasure to use a few times back in the day, was remarkable and contained
features that nothing else had. But at the same time, even back then, it was a bit clunky running
on, by clunky, I mean a bit slow in terms of direct response to user input, running on $80,000
worth of hardware. And so what naturally happened was cheaper, much worse workstations showed up with much worse tools and experiences available on them, but which cost such a tiny fraction of what those machines cost that it reset our expectations to what was possible on an 8088 with very little memory.
And that's essentially where we have stood since then, and we have professionalized the culture around the lower
expectations of that lesser hardware. So now when we go back in time and we look at the papers from
the 70s and early 80s in, for instance, the Lisp community, we find over and over again gems that
were really interesting ideas that were just impractical on the hardware that most people
could actually afford at the time. And as you have pointed out just now,
the hardware that you have in your phone today is far better than the hardware that one had
on an extremely expensive machine in the 70s.
So if you're writing an environment for a modern laptop,
you can do anything,
which means that we are ripe for a revisiting
and a rethinking of how we do these things
and even not only coming up with new ideas,
but even performing some anthropology on the history of our own discipline and sort of
cherry picking some of the great ideas that we had to leave behind because our hardware was so bad.
Alan Kay had this wonderful description of, to invent the computing of the future,
you can buy hardware that takes you 10 years into the future, and then you can imagine another 10 years after that. I can't remember his exact quote,
but it was basically, you can put yourself 30 years into the future and work in that space
if you are adequately well-funded and adequately imaginative. That ties that together, what you've
said about it, where these researchers coming up with all this material were almost leaving a breadcrumb trail for people today. Did you ever end up trying Luna?
I think I was unable to download it when I went to try it, and then I didn't go back. But I do
very much like what they're doing. One of the things that I think isn't talked about enough
when people talk about languages with a strong upfront type discipline like Haskell
is that everyone focuses on this
as a way to avoid making certain kinds of mistakes,
but they don't really talk so much about
how effective it is as a user interface paradigm
because the more the machine knows about what's possible,
the better sorts of completion and filtering
it can give you at moment to moment
while you're editing your code.
And this is something where I think Steve,
the normal host of this podcast
who I am swooping in beneath,
he and I really differ in this regard
in that he is very strongly of the ML family,
strongly typed persuasion,
and I am very much of the dynamically typed, Lispy sort of persuasion,
I don't believe that there's anything fundamental about types and category theory
that you require in order to create very rich tooling,
though I feel like it does.
There is some cohesion there.
It's very nice to have that certainty
about the mathematical underpinnings of your language
in order to build tooling around it.
But I feel sort of like there's this impression
that dynamic typing leads to a very difficult foundation
to build tools on top of,
though our history with Lisp would suggest otherwise.
Not only Lisp either. I mean, Smalltalk is tremendously dynamic and to this day has the
best developer tools that any system can really offer. So yeah, I mean, it's a ridiculous argument
that you can't make good tools on top of dynamic languages. But what I would say is that a language
that has built in a way for you to tell the environment to restrict the possibilities at a certain call site, for example,
does give you the ability then to have a user interface
that doesn't provide you completions that don't make any sense, for example.
Sure, though there's no strict requirement that that be done with types.
Something like specifications or contracts would work just as well, yeah.
Oh, absolutely. And to me, those are really fungible
in terms of a way to communicate to the machine
what range of possibilities they are for this call site.
Any way you can do that can work for this purpose.
And of course, for me, as I mentioned before,
my love of scheme,
I very much fall on the
don't make me write all the
types up front family of programming. Because for me, with the kind of very rich REPL attached to
editor kind of editing that I typically do in the dynamic languages I use, it is extremely seldom
that I encounter a bug that occurred because of a bad type signature. Those things are worked out
while I'm writing the code, while I'm testing the code, while I'm evaluating the
forms, and I very quickly have the shape of the code the way I want it to be, and then
everything is fine, and I don't typically have the kinds of problems that Haskellers
tell me that I would not have if I were using Haskell, which I have, in fact, used in anger
in the past, and I respect Haskell a great deal, but it's not my favorite moment-to-moment
programming experience. Yeah, the whole static dynamic debate, it just rages on and on and on and on. And I feel
like it's fundamental to computation in the way that, you know, like we have lambda calculus and
Turing machines and combinator calculi and pi calculus, all these different fundamental models of computation.
Is the static dynamic split something that you feel like
we're going to have forever and ever?
Or do you feel like in the culture,
we will eventually tip one way or the other and stay there forever?
Or is there some kind of harmony between them we might arrive at?
How do you feel about that split? Well, I mean, the area between the two extremes that I find
most interesting is in progressive typing. So obviously the racket community, I think,
is at the forefront of this particular approach. And I do like it, actually, because I find that
early on, and different people approach programming
different ways but for me if I'm encountering a new problem domain in the early phase I tend to want
to go very fast and try many different things to explore the environment and to explore the space
in an environment that helps me so I want to evaluate lots of little forms and capture
what they've produced and their typing will exist, obviously. If you have a vector of vectors,
that's a thing and it has a type. But I don't want to have to tell it that I want a vector of vectors
when I'm taking apart this piece of data that's coming in off the wire in a nonce fashion from
inside of Emacs just to see what kind of data I'm receiving and so on. And so there's definitely a wide base of exploratory programming and live coding that I want to do in an extremely
forgiving, extremely dynamic language. On the other hand, once something is sort of hammered down
and you're pretty sure that your need to change it is small, but the cost of error is high,
or alternatively, and this is something again that I feel is not talked about
enough with regards to types, or you need it to go very fast, then going through and providing
type hints so that your compiler can do more work and produce better code becomes more valuable.
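For a sense of what that looks like in a dynamic Lisp, here is a small sketch using Clojure's optional type hints; it isn't from the episode, just an illustration of hints letting the compiler emit faster code without an up-front static type system. The function names are made up.

    ;; Without hints, interop calls fall back to reflection.
    (set! *warn-on-reflection* true)

    (defn shout [s]
      (.toUpperCase s))            ; compiler warns: reflective call

    ;; A hint lets the compiler emit a direct, faster method call.
    (defn shout-fast [^String s]
      (.toUpperCase s))

    ;; Primitive hints work similarly for arithmetic-heavy code.
    (defn sum-squares ^double [^doubles xs]
      (areduce xs i acc 0.0 (+ acc (* (aget xs i) (aget xs i)))))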
All of that said, for most of the claims for static typing in terms of avoiding errors, I find, at least in my programming life, I have gotten more mileage out of model checkers and theorem provers.
So, for example, if one is writing a complex protocol,
this is a situation where the types can't save you, but a model checker can.
And these are the kinds of situations in which I run into walls where,
oops, I didn't think of that, and now a very expensive mistake has happened,
where that is much less common, at least in my programming life,
as a result of not having inserted a type in somewhere.
Have you ever seen an environment that,
because I'm used to seeing tools like TLA+ or Alloy
as sort of separate from the actual language
where the development work is being done.
Have you ever seen an environment that pulled those things together, other than maybe QuickCheck or something like that?
The closest thing really is when you have your model checker embedded in the same language
you're doing your domain programming in.
So for instance, Coq for an OCaml programmer, for example, feels very native relative to
other ways that you might go about doing it and so on.
Is there a relationship between the sort of model checking that you've done and something that you might do with, like, miniKanren?
So there is obviously a strong relationship between theorem provers and logic programming environments.
And so you can, in a similar way, sort of factor out the possibility of certain kinds of errors and so on
using... I mean, yes, obviously you can construct a type system in a logic programming environment if you so choose, so obviously they have equivalent expressive power.
My curiosity there is that something like miniKanren is a tool that you'll reach for usually when you need to do that sort of logic programming
as part of solving your domain problem.
Whereas I feel like model checking is something you reach for not to reach a solution in the domain,
but to reach a solution in the domain one abstraction away, the programming domain,
where you're trying to figure out if the system that you're constructing to solve the domain problem
has the properties that you want to imbue it with. And so I'm sort of curious to,
I'm sort of wondering aloud if there might be some kind of potential direction there for future tools
to go, where they have the facilities of logic programming, like a Prolog or a Datalog or a miniKanren or some sort of constraint solver, that you could use at the level of your domain problem
if you need to,
or you could also use it at like a macro level
or, is it Black Scheme,
where you have a tower of interpreters that you build?
Is that the one?
Yeah, that's the one, yeah.
Yeah, so if there's something like that, or like a dependent-typed kind of environment, where the same tools that you can use at one level you can use at another level up. I don't know that I've ever seen anything like that, I don't think, made quite so uniform so it feels like one experience.
But there are many situations where you find logic programming
embedded in another programming language.
And that is typically how I personally prefer to use logic programming.
So rather than turning to Prolog, for example,
I would prefer to use miniKanren from Scheme,
or Oleg has some very nice work on embedded logic programming
within ML dialects, especially OCaml and so on.
Because I find that's actually a much nicer way to program.
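As a tiny illustration of that embedded style, here is what a relational query looks like in core.logic, the miniKanren implementation hosted inside Clojure; the facts and the useso relation are made up for the example.

    (ns example.logic
      (:require [clojure.core.logic :as l]))

    ;; "facts" as plain data
    (def uses
      [[:jack :scheme] [:jack :clojure] [:ivan :clojure]])

    ;; a goal that succeeds when [person lang] is one of the facts
    (defn useso [person lang]
      (l/membero [person lang] uses))

    ;; ask: who uses Clojure? The logic engine fills in the blank.
    (l/run* [who]
      (useso who :clojure))
    ;; => (:jack :ivan)

The logic fragment sits inside an ordinary Clojure namespace, so the rest of the program around it can stay plain functional code.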
What you really want when you are solving problems is an ability to move more along
a kind of continuum from very declarative things where you're at the far edge and you're
essentially writing logic programs down to extremely imperative things where you're explaining
to the computer exactly how the bytes must go in the record that you're, you know.
So, for example, if I'm doing graphics programming, it can be very frustrating if I have to, you know,
marshal a new data structure and return it out of every one of my functions, because the buffers
that I'm moving around then are going to cross these different memory boundaries. There's going
to be a lot of allocation and deallocation happening, and I'm going to have terrible
performance. So in those situations, I need to be able,
if I want to write that kind of code,
to drop down to a layer at which I'm just slamming bytes into a byte array.
But that's not how I prefer to write most programs, right?
So optimally, ideally, I want to be able to slide back and forth
along that continuum as needed,
staying as far towards the declarative side as I possibly can,
and only dipping down into the rest as needed.
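A small Clojure-flavored sketch of that continuum, with a hypothetical pixel-brightening example, just to make the two ends concrete:

    ;; Declarative end: say *what* you want (each brightness doubled and clamped)
    ;; and let the runtime allocate as it pleases.
    (defn brighten [pixels]
      (mapv #(min 255 (* 2 %)) pixels))

    ;; Imperative end: when this is the hot loop in a graphics pipeline,
    ;; drop down and write into a preallocated buffer with no fresh allocation.
    (defn brighten! [^ints buffer]
      (dotimes [i (alength buffer)]
        (aset buffer i (int (min 255 (* 2 (aget buffer i))))))
      buffer)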
And that's an idea that, to me, most immediately calls to mind
the other FRP, the functional relational programming idea
introduced at the end of the Out of the Tar Pit paper, where they have a programming language slash environment that's split into three separate
pieces that each fit together where you have one piece that is expressing relationships between
entities in the system and then another piece that is low-level, very imperative code
that does sort of performance-sensitive work with the data that has those relationships.
And I can't, off the top of my head, recall what the third piece of it was.
I seem to remember it was a pretty bland functional layer, wasn't it?
Yeah, something like that, yeah.
And that's exactly what I'm talking about here, is the same thing, of course.
Well, and my question is, I believe there was only ever like a toy implementation of that idea brought into existence.
I'm wondering what environments you've seen that do a very good job of spanning all the way from that highly declarative end of the spectrum, the logic programming end of the spectrum, all the way down to very precise
control over what's in my buffers, what's being swizzled, that sort of thing.
So the way that I prefer to deal with those situations is rather than trying to make
a single monolithic programming language that can do all of these things, I prefer to work in a
programmable programming language that will allow me to build DSLs at these various levels of abstraction
that can all work together.
So for example, a good fast Scheme implementation like Chez Scheme, with something like miniKanren to work more towards the
declarative side and the ability to actually allocate a bunch of bytes and set them in a tight loop can get the work done.
Historically, probably the environment that I found that's most agreeable for this
is some of the better Common Lisp implementations, because you can drill down to finding out
exactly what code is being written for you by the compiler based on the code that you
have written and give type hints and do all the things you need to do at the bottom level.
But you can back off and build a tower of abstractions that is as tall as you wish it
to be.
Yep. As soon as you said that, I had the thought: oh, of course, that's the ultimate kind of smart-alecky answer to give on this podcast, in which I think there's a lot of the listenership who have the idea in mind that what we need is a giant comprehensive solution that spans from one end of the planet to the other.
And no, of course, it's Lisp, it's composition, it's small pieces that you can build your own abstractions out of.
Yep, that's great.
That's certainly my preference.
I would say that there are other people who have been at this even longer than my ancient gray bearded self,
who prefer some of those layers to not look like Lisp.
For example, this is how Alan Kay's team generally approaches any new thing,
is they build a tower of languages where each language has different syntax and features
relative to the language beneath it.
And so if, for example, you go look at Maru, a very low-level Lisp-like language that is used to bootstrap most of these experiments,
you'll see that on top of it, eventually they end up building something, unsurprisingly, resembling Smalltalk.
And often on top of that, maybe some other DSLs that are even more specialized.
And the idea, of course, here is that if you can build a tower of DSLs and write all of the parts of your system in a language that is very agreeable for that task,
that you don't need to write nearly as much code because you don't spend as much time in the awkward
part of your programming language.
So I'm going to throw a curveball at you, and hopefully it's a curveball: where have you seen, or what would you like to see, for building such towers of abstractions in a visual paradigm? And I know you've mentioned just now Alan Kay, and that you end up with something at the very top that looks kind of like Smalltalk.
But if you're interested in having a visual experience
that goes all the way from one end to the other
and you are interested in tying together those principles
from graphic design and animation
and using them to get more information to and from the programmer and the computer than you'd get from just the grid of colored characters.
What do you think that would look like or what would you want that to look like?
Well, I would start by saying that one of the differences I have with Alan Kay on this is that I do prefer very uniform syntax at all layers,
whereas he prefers the syntax to vary more by layer for aesthetic reasons of his own.
So for me, I would want something that I could express well in a visual environment that translates across all of these domains.
So if I were building such a thing myself, I would maybe build it on top of Chez Scheme, but build an interface that looks something like Scheme Bricks or scheme blocks, which I think is actually a kind of nice, aesthetically pleasing variation on, say, the Scratch idea of blocky-things-plugged-together kind of programming.
But again, with a very uniform syntax that makes it visually quite easy to see what's happening
in all the different layers. So in that sense, you would gravitate towards a visual language
that is still rooted in written language rather than rooted in the language of visual arts?
Well, I don't think those are actually different worlds. So if we study graphic design, most of it is about type.
So for example, I don't think that in such an environment I would want everything to
be represented by small blocks of monospace type.
We can still use vertical and horizontal alignment of elements and have them, based on what syntactic
structure they're describing, look different from each other. And when we look at other human scale notations for
complex things, for instance, traditional mathematical notation, which admittedly has
many problems of its own, there are things like writing a for loop as a capital sigma with the
beginning and the end at the bottom and the top and so on.
So I don't see any reason why when we build, say, a map over some domain in our language
that it can't have a visual representation that looks like that's what it's doing.
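For instance, the summation that traditional notation writes with a capital sigma corresponds directly to a reduce; a throwaway sketch, not anything from the episode:

    ;; The traditional notation Σ (i = 1 .. n) f(i) is, structurally, just:
    (defn big-sigma [f n]
      (reduce + (map f (range 1 (inc n)))))

    (big-sigma #(* % %) 4) ;; => 30, i.e. 1 + 4 + 9 + 16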
So the uniformity in the sense of making everything into a little box like Scheme Bricks,
that would be a starting point to
bootstrap the idea. But ultimately, I would like to have really beautiful typography. And I would
like to use all of the different things that our visual cortex can do to make things more obvious to
the end user as they're programming. Yeah, this is something where I feel like there's a lot of
unexplored territory yet. Oh, so much.
Yeah.
Well, especially as we have new projects
like Media Molecule's new game Dreams
that just came out,
and VR in general as a kind of a...
It's not an...
I would say new frontier,
that's kind of a tired term,
but it's a space that,
because it is a new modality of interacting with a computer,
of doing input and output, it's encouraging people to be very creative in how exactly
they translate ideas from the mouse keyboard screen world to the VR goggles and controllers in your hands world. And so I'm seeing a lot of very interesting
ideas. My big hobby horse is making execution visible: whatever you're doing, in any language, whether it's visual or textual, the more you can do to make the programmer participatory in the execution of their program the better, in just providing them visibility into how it executes and their ability to debug things. And we see some of that
with Elm's time travel debugging where you can roll backwards and forward. But I feel like that's
a direction that we could go much, much further. And so what I'm curious about is for the block languages,
what does the future for block languages hold? And I'm not necessarily asking you, I'm just
putting this out there as a rhetorical kind of thing. Like I often wonder to take block languages
and to make them have more of the principles of graphic design and to make them leverage more human modalities and human
ability. Will it just be a matter of moving away from what Scratch has, where it's intended for children and it's kept very simple, but at the same time the regularity that it has is a benefit to all human beings, not just children? Would it be moving away from that for the sake of more expressivity? Or would it be something where, in Scratch or a block language or Scheme Bricks, one could imagine trying to bring animation into that? And your limitations in doing that are pretty severe, I would think, in that you're not going to be necessarily moving elements around.
You might be hiding or showing elements, or you might be recoloring them, or you might be lighting them up to indicate some sort of transient state that they move through as the program executes in slow motion or whatever the debugging experience is like.
And I sort of wonder whether that would scale all the way up to covering the whole range of what you
can do with animation and graphic design. Because my personal inclination is to always go for
something that is in the direction of the nodes and lines, you know, Pure Data, Smalltalk to a certain extent, school of visual languages, just because at that point, you are in free space. You are in, you know, force-directed graph space. You are in
the space of having an open canvas in which to move things. And as soon as you have an open canvas,
you have not just the structured principles of design, but you also have the structureless,
free, artistic end of our culture that you can bring to bear on this. And of course,
going in that direction means you are sacrificing a lot of that uniformity that you said that you prefer, a lot of that consistency, or maybe not necessarily sacrificing it, but if you're bringing in the artistic sort of cultural side of humanity, things get very subjective. So I would say this is one of the cases where I'm very happy that someone else
is excited about a particular direction because it deserves the attention, but I am not the person
to give it that attention. Because thus far I've found, and it's been a fair few of them I've played
with and even a couple I've constructed, node and box kind of programming to be more frustrating, as the complexity of the thing I'm trying to express ramps up, than I'm willing to tolerate.
Yeah, it's terrible.
Yeah, so for simple things it's amazing, and then it crosses a threshold much more quickly. It's like if you drew two graphs of complexity, writing something in Scheme versus doing something with boxes and arrows: box and arrow starts off easier, but it crosses Scheme's increase somewhere midway along the curve
and then goes to Mars. I don't even know I'd agree that it's easier to begin with.
Everybody likes to say Box and Arrow languages, oh, they're easy when you're working in the small,
but when you're working in the large, they're a nightmare. I think they suck when you're working in the small.
Like even having to do,
and everybody's pet example is arithmetic.
Would you rather say plus two, two, or would you rather type N and then type plus
and then type N and then type two
and then drag a line from the plus to the two?
Well, sure, in the arithmetic case,
it would be terrible,
but many of the situations
in which
I see people using this kind of programming is a sufficiently high level of abstraction with the
right kind of boxes that they are actually primitives of sufficient power to make it
pretty convenient for some simple things, especially for my friends who are designers
or musicians who want to use Max/MSP or want to use some visual node environment to do what they're doing,
they find it less intimidating and more clear than a page of text when they start.
But when their patches grow past a certain point,
even they don't want to deal with it anymore because it's just too much.
And I find also on the data visualization side that if you have a big directed graph,
it's fun to make a picture of your big directed graph, but it is not particularly communicative. You know, at some point it turns into kind of data art and it's cool to look at,
but your ability to actually reason about what it's telling you is essentially zero. It approaches
zero as the number of nodes increases, right? So that's my fear about that whole school,
but I definitely want other people who are excited about it to work on it and come up with something amazing.
Yeah.
And the idea that you are tapping into these very powerful primitives at
first.
And so that creates a lot,
or that offers a lot of leverage to people who are artists or musicians and
not programmers.
Like,
I think that's the whole reason why Max/MSP and Pure Data and vvvv and all of them are as successful as they are.
Yeah, they're extremely batteries included in that.
That is very important.
And to me, that seems counter... We currently live in a world of tools that are made for
beginners and pretty much exclusively for beginners.
And they might permit use by experts.
And I think Rich Hickey gave this example.
When you pick up a cello, your first couple of weeks or months of playing the cello, you're
going to get blisters on your fingers and it's going to be awful, you know, a strangling-a-cat kind of a sound. But you move through that initial period of great adversity to get to the point where you are a masterful cellist and you can create this transcendently beautiful music. And the computers that we have and the software that we use are, for the most part, the ukulele: they're very easy to pick up and strum and have a good time with, but you're not going to be playing a solo in front of a symphony orchestra, doing some great Shostakovich or something like that. That design prioritization for the novice, instead of: okay, for all the other computing
stuff you're building, for the operating system and the window manager and the web browser and
all these sorts of things, you have to think about the person who's not a capable user of the
computer. But for programming tools, this is our one chance to really indulge ourselves in creating
something that requires mastery. And yet we keep stumbling over that need to make things immediately adoptable by the beginner,
to put in things like two-way data binding.
You don't even need to worry about how data flows through your system,
just say point A and point B are the same, and if you change one, it changes the other,
and there's that magic glue between them that works great in the small and does not scale up to the large.
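To make that "magic glue" concrete, here is a minimal sketch of naive two-way binding in Clojure, using watches on two atoms; the names and the watch-based approach are purely illustrative, not taken from any particular framework:

    ;; Two pieces of state that are declared to "be the same".
    (def point-a (atom 0))
    (def point-b (atom 0))

    ;; Naive two-way binding: changing either atom copies its value to the other.
    ;; The equality guard that prevents infinite ping-pong is exactly the hidden
    ;; machinery that becomes hard to reason about once many bindings interact.
    (add-watch point-a :sync-b
               (fn [_ _ _ new-val]
                 (when (not= new-val @point-b) (reset! point-b new-val))))
    (add-watch point-b :sync-a
               (fn [_ _ _ new-val]
                 (when (not= new-val @point-a) (reset! point-a new-val))))

    (reset! point-a 42)
    @point-b ; => 42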
And so I wonder a lot if we've ever seen a real visual programming environment that is designed to sustain mastery, rather than just cater to people who need to tap into batteries-included power very quickly. Have you ever seen an environment like that?
In the visual space, I think no,
although there are some that scale more gracefully than others.
For example, Processing is quite approachable, but you can do very sophisticated things with it. It's not a visual programming language in the sense that, say, Pure Data is. But this would be a fine goal for someone building such a system: to make something that gives you that scalability while retaining the ability to understand what you've done, rather than ending up with a massive squiggly box of things where you don't really understand how they're routed anymore if you go away for two weeks.
But winding back slightly, I would characterize the situation a little differently than everything being optimized for beginners. I would say instead that we have this very bimodal distribution, where the mass of computing tools is built for absolute beginners, and then we have the unapproachable, beginner-hostile tools that are almost arbitrarily configurable.
As the ultimate example of this, I have been using Emacs for 34 years.
And my Emacs is an amazing thing that is tuned to my exact preferences.
But when I recommend it to young programmers, they frequently fool with it for a week or two and then go back to something like VS Code. But the whole category of visual programming languages seems to be tools exclusively for beginners, and perhaps that's related to, or caused by, the fact that they have that scalability problem. I get the sense that you don't agree with that. Yeah, I don't think it has to be so. Certainly there is that perception. I absolutely
agree that there is. And part of the problem, I think, is that expert users who have a background
in a different programming paradigm get frustrated when the expressive problem starts to happen
and fall back to their other tools. And as a consequence, they never push forward through that sort of veil of difficulty to find what's on the other side. And so what you need are some very motivated expert programmers who just want to be able to use those kinds of tools and are willing to put in the effort of figuring out a way to make them pleasant past that point. Yeah. And so what would it take for you to leave Emacs behind and to use a visual programming language as your primary means of solving programming problems? That's a very good question. And if I knew the answer,
I would probably build that system. But this is actually an exercise that I'm interested in
working through with you, if you'll indulge me in that. So there are some things already that I've been working on
in this area that we can talk about. Sure. One of them is that three of us here in Berlin together
tried to make an environment that would be good for beginner Clojure programmers, called Maria.cloud. Maria is named after Maria Montessori, a teacher whose philosophies we found quite fetching when we were thinking about how we would approach the overall pedagogy, because it's both a programming environment and also a curriculum to get people started.
And we pirated from the best.
We took the visual programming language that comes by way of both Racket and Scheme in the SICP era and so forth,
and some other things like this so that we can get people started with shapes and colors
and more interesting things than just adding things
and finding the fixed point of some equation,
which is fairly alienating for most students
who don't have a mathematical background.
And in that environment, we have done a great many things to try to make shapes and controls first-class objects, and to be able to wire them into the code. There's a data-flow library built in, so that even very beginning programmers can use a thing we call cells, which is a library patterned on spreadsheet cells, and this allows some very nice things to be done by people with very limited programming background.
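As a rough illustration of what that looks like for a beginner, something along these lines; the helper names here (circle, colorize, defcell) are recalled from Maria's shapes and cells libraries and may not match the current API exactly:

    ;; Draw a shape and give it a colour.
    (colorize "blue" (circle 50))

    ;; Spreadsheet-style cells: change `radius` and `dot` redraws by itself,
    ;; because dereferencing one cell inside another creates a dependency.
    (defcell radius 25)
    (defcell dot (circle @radius))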
And we've used it to teach in a context called Clojure Bridge, which is an international organization that tries to get people started programming in Clojure. Maria, as it exists now, is actually part of a longer-term project that we are still working on,
which is to try to use it ultimately as a springboard to experiment with many different kinds of program editing,
where the underlying mechanism will be some flavor of Clojure,
but the overlaid representation could be almost anything.
And so we're going to try to make some more visual things in the upper layer of Maria, and have multiple possibilities, multiple views onto the same code, in much the same spirit as Luna.
Very cool.
And that was actually one of my questions: why is Maria a literate editor right now?
What made you start with that approach?
So the history of that project is that Matt, who did most of the actual day-to-day programming on
that project. That's Matt Hubert? Matt Hubert, yeah. Available at a Twitter near you. He started
talking with me about this because he was very interested in trying to do something that had the power and approachability of a spreadsheet, but where the actual programming was done with a language he liked to use.
And so his first project in this series of deliverables was something just called Cells, which eventually became the cells library in Maria. And this was a very spreadsheet-like idea where you could make little widgets, actually using ClojureScript code because it all runs in the browser, and wire them together and write various sorts of programs. And over time, experimenting with it, we found that something more like an IPython notebook, or like Mathematica, where one could intersperse text, was even more interesting, because then you could share your thoughts with someone else, but intersperse drawings that are made by code that's embedded in the thing, including things that might fetch something from an API, and so forth and so on. And when we noticed, when we were trying to help people learn Clojure, that we could easily waste four or five hours just getting somebody's old funky laptop set up well enough to run a basic editor and have a JVM and run some Clojure, we thought, well, why don't we try to make this into an environment that beginners could actually use? And that's how we arrived at Maria.cloud. But then once we had that, we thought, well, it could be a lot of other things as well. It could be a springboard for all sorts of research into programming interfaces.
That's super cool.
Yeah, and I've played with Maria, and actually the cells that underlie it seem like a pretty interesting data flow model.
Could you explain a little bit about how that works and what it is?
Sure, yeah. So one of the problems with notebook systems as they are often practiced is that the
data flows from top to bottom when you're writing the cells, but then when you go back and change
something in a cell that's higher in the document, the change doesn't flow back through. So what
tends to happen with these kinds of documents is that over time
the state that is represented in the dependent cells or the dependent
code blocks is
no longer in sync with what has happened above them and the whole document grows less and less aligned with the ground truth of what's
happening in that computer program slash document.
So
with this data flow environment,
you have a certain data type, a cell,
that is like a container for another value, a box in traditional Scheme parlance. And when the value in one of these boxes changes, it propagates to any other box that refers to it.
So of course we have a graph in the background
that knows what's referring to what,
and we propagate the changes down through it.
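Here is a minimal sketch of that propagation idea in plain Clojure, independent of Maria's actual cells implementation, with the dependency graph reduced to a single hard-wired edge:

    ;; The upstream "box" and one dependent box.
    (def source  (atom 1))
    (def doubled (atom (* 2 @source)))

    ;; When the source changes, push the change downstream.
    ;; The real library tracks a whole graph of references and propagates
    ;; through it; here there is only one edge, maintained by a watch.
    (add-watch source :propagate
               (fn [_ _ _ new-val]
                 (reset! doubled (* 2 new-val))))

    (reset! source 10)
    @doubled ; => 20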
And you can see a very similar model in action
in Observable HQ, which came out a little while later
and used a more popular programming language,
and it delighted me to see it.
I would love to see every JavaScript programmer
become more accustomed to using a sort of data flow paradigm
in these situations, because it does simplify many things. Yeah, Observable, I think, is a great touchstone for people who haven't seen Maria. Though I would say, for anybody who is hearing this and thinking, you know, oh, I've seen literate programming before, I've played with Observable, I get the idea: one of the things that I really like about Maria
is it's pleasant to look at.
I find Observable is a little bit fiddly.
They've put some extra user interface around the text
and it doesn't read as cleanly
as I think code in Maria reads.
It feels like you guys have paid extra attention
to making the environment be a nice feeling space to work in.
And I really appreciate that.
That's back to the same design aesthetic that we try to bring to everything.
The three of us have actually made a small consultancy now, because we are in such violent agreement about both programming aesthetics and visual aesthetics, and about how approachable things ought to be and how things should be presented to users. And one of those things, which you've just touched on, is that user interfaces, I feel at least, should provide the least possible chrome. There should be no visible control that you don't need, and yet a way to get to everything that you do need.
Yep.
So one of the topics that keeps coming up in our programming community is that we have a lot to learn from video games, and I think this is one such thing. There was a period in video games where chrome was very minimal, because video games were a new field, and so we weren't ready for the Xbox controller with 20 different face buttons. We were in a much, much simpler period, where the main interface was maybe your mouse and the screen, or before that, arrow keys. Very limited input, very limited output. And then we hit this point where, okay, we have all the keys on the keyboard, every one of them does something, you can hold down modifier keys and hit every key on the keyboard, and that's how you have your very, very sophisticated spaceship simulator. There was a famous Xbox game that came with a control surface that you spread out across your lap, and you had to flip open a little panel to hit the self-destruct button. And in going through that exploration, we as a community of game developers have learned that there are some experiences, some stories, or some gameplay styles that benefit tremendously from the aesthetic feeling of that complexity. If you are flying
a spaceship, you want there to be a thousand different controls because that's part of our
culture of what it is like to fly a spaceship. That's part of what makes it compelling.
And then there are other experiences like puzzle games, like The Witness or like Dead Space,
where you're marooned on a spaceship and it's very spooky and you're all
by yourself, where putting anything on the screen breaks your suspension of disbelief and takes you out of the experience of being in the world. And so I feel like that's yet another lesson that programming can learn from video games: the choice of chrome that we surround our tools with is an aesthetic choice, not just a choice about functionality. And I think I skew to the opposite end from you, where in the visual environment that I'm building, I am putting in as many different buttons and control surfaces as I can, because the aesthetic
that I gravitate towards is that of 3D modeling tools like Maya and 3D Studio Max where they have
hundreds and hundreds of different commands. And there's an experience you can design around that kind of impression: I think it conveys that this is a tool you will need to invest time in learning, and that it will meet that investment of time and reward it by giving you lots and lots of capability. Whereas on the Maria end of the spectrum, where there's very little chrome, it has this almost meditative quality, where you're meant to disappear into the experience of reading.
It's meant to feel much more relaxing and enveloping rather than something that's meant to sort of stimulate you and charge you up and say,
okay, there's a thousand different things to do. This is meant to be soothing. And so I just, I love that about it. You've really hit upon the feeling there as well.
When Matt wrote a paper about cells that he presented, I think in 2016 at a conference in Rome, the opening paragraph was something to the effect of: when I open my editor, I feel calm. And this is very much, for him, the emotional space that he wishes to inhabit when he is doing work, one of serenity, and that helps to direct the user interface of Maria. Definitely. That was the title of his paper, wasn't it? It might even have been the title, yeah. Yeah. And the other person who you are working with is Dave Liepmann? It is, yeah. Yeah, what's his role in this collective?
So Dave came to us from a background working on IBM Big Iron for a long time after he got out of school,
where he studied computer science and cognitive science and philosophy.
And he fell in love with Clojure some years ago, I think maybe seven years ago now.
And yeah, he's a very active part
of the Berlin Clojure scene and an old friend of mine. So we all got together and we work on all of these things. What sort of project are you working on next? Or what can we kind of look forward to coming out of your studio? And the studio is named Applied Science, is that right? The URL is appliedscience.studio. There's the grotesque plug for the...
Awesome.
And just for the listener, the logo is so good.
You should go and look at it.
It's a very, very good logo.
Thank you very much for that.
I did everything I could to make it look like something you would see on a piece of lab equipment in the background of 2001: A Space Odyssey.
Exactly.
Yep.
So what are you guys working on?
Is it more development for Maria? Is it that visual front end or is it also consulting? What are you up to?
Well, Dave has written most of the curriculum we have now and I think he's going to write some more curriculum for Maria. And we are currently overhauling the editing experience and the underlying libraries of Maria partially to enable us to then do the more visual
programming thing. So right now the interface is a little too tied to a parse-on-every-keystroke sort of approach. And we want to move to directly editing the AST, where we serialize
it out to different representations. And we're pretty close to that.
Again, it's Matt doing most of that day-to-day programming on that project.
And it's starting to shape up and look pretty good.
So there'll be a big release in terms of code changes that makes almost no difference to
the day-to-day experience of using Maria except having fewer bugs.
But then we will start rolling out experiments with it.
On the side, we have a bunch
of cute little things that I won't talk about today because they're not out yet, but that are
coming in the next few weeks that are just fun toys for people to play with, with some additional
source code that they can look at and learn from: different things you can do with Clojure,
different ways you can host it in the cloud, things like that. Cool. And are those unspeakable toys,
are they in Maria or is this just for people who are Clojurists?
The next few things will be general purpose tools written in Clojure that are not constructed with
Maria. Later, I think, so one of the things we built when we first rolled out Maria to a class
was a service where there was a publish button. And so you could write a sketch in Maria that was a fully functioning UI of some kind of program,
select the cell that created the user interface, and say, okay, publish this. And we would spin
up a web app somewhere that did a hosted version of whatever that document did with a link back
to the original. And this is something like what Glitch does, if you've seen
Glitch. But the idea was that there is so much incidental complexity between a new user and
writing a computer program and sharing it with their friends. And I feel that, well, many of us are internally and intrinsically motivated to play with computers. We like solving problems. We like
the things they can do for us. But many other people would love to be able to do these things
if they could share them with their friends, right? Because they're very socially embedded
and they want to be able to take the things that they've made and make a friend smile with
something. And so I felt like, and we all felt like it would be really valuable for new
users to have that experience as early as possible, so that it could bootstrap them into more
excitement about doing this kind of work. So that will likely return sometime either late this year
or early next year. And then there is an offline experience, which is a completely different thing.
Not completely different.
It's built on the same bones, but it will be an actual kind of environment that one can run locally and build projects with, called Lark, which we are building on a lark. That will contain many of the projectional editing features and other sorts of things.
And we're all very excited about that,
but it'll be a little time yet until it ships.
Yeah, that's interesting.
So that's for, rather than Maria as it is right now,
which seems sort of like an environment for learning
and maybe exploring simple ideas,
this would be something meant more for doing serious development?
Absolutely, yeah. So the idea is that many of the things that we've done in Maria are not
specifically for beginners. They're just better ways of doing the things we do. And so they
benefit experts as much as beginners. And there was a presentation that we did at a Clojure
conference a couple of years ago in which I did a little five-minute section of the presentation on Maria for experts, just discussing how much raw power is available to someone in that
environment if they already are very aware of how all of these things work and do already know how
to program. So our idea is to take the things that are just better for everyone and then build more
tools for experts, again ones that scale across skill levels, so that you don't have a sharp disconnect
between things for beginners and things for experts. There's an old saying in the Unix
community that you should make the easy things easy and the hard things possible. And that's
really one of our targets here. Another permutation of that idea that I'm very fond of is Apple's
approach to accessibility, where they say, we will do things that make the phone more usable to people who have different levels of ability,
but that all of those things that we do, they're not specific to that person or that need.
They're things that anybody might benefit from. So adding support for people who might need an assistive device because they don't have a lot of dexterity in their hands, for example: that same feature might benefit somebody who just wants a more comfortable way to use their iPhone. Or things that help people who might need reading glasses or something like that, like the ability to adjust the text size: that might help somebody who just wants to be able to hold their phone further away. And so I love that thinking, that it's not just about enriching the beginner experience.
It's that things that make something easier for beginners can be created in a way where they benefit everybody. And the converse is also true: if you improve the experience for everybody in just the right way, it also makes the beginner experience better.
And that's exactly our target
on everything that we're doing.
We do not in any sense want to build
a kind of padded chamber in which to lock the noobs.
It's not that kind of project.
What we're trying to do is simultaneously
make something that's good for new people
and that pushes forward the possibilities for experts as well, just for better ways of doing the things that we all do day in and day out.
I met you through the Clojure community, and you've talked a bit about some of the different communities that you've been a part of through your life and some of the different tools that you've worked with. Are there any language communities or programming communities
or computing communities that you're aware of that are interesting to you that you think
people should pay more attention to? Or things from the past that you would say,
hey, here's this paper I read, and I think everybody should read it too.
So I think that there's wonderful work being done by the Pharo Smalltalk people,
and everyone should be paying attention to what they're doing,
especially around developer tooling.
It's beautiful work.
I love the Racket community.
This isn't an entire community,
but there's a specific person who's doing some wonderful work on structure editing
who I want to mention here,
which is that I recently, and only recently, became aware of Fructure by Andrew Blinn.
He's Disconcision on Twitter.
And that work is really beautiful.
It's a very aesthetically motivated, very beautiful editing environment that attempts to give you the editing ability that you would have with plain text, but in a more beautiful way, in a structured editor.
And I recommend everybody take a look at that as well.
Another person whose work I really love is Andrew Sorensen, who makes Extempore. I'm not sure how he pronounces it, because I haven't heard him say it, but I've read his papers and they're lovely, and the environment itself is lovely. What he's done is to create a kind of two-level Scheme language with extensions that uses LLVM to cross-compile to different hardware platforms. He's got a lower-level Scheme where you have to manage your own memory and so on, but you can do all sorts of signal processing and other low-level stuff in it, and then a higher-level Scheme
that restricts some of those things
but is extremely alive as a programming environment.
And he does a lot of music with it,
and he has a really great paper
on something called temporal recursion,
which we touch on in the Maria interface as well.
If anybody watches the Maria for Experts subtalk,
I give an example of this
where you can create a recursive function that recurses over an interval in time so that it
develops and evolves over time. And then you can apply all of your normal operations to it.
For instance, you can have a random number generator that produces a random number
every second. And you can send that through a temporally recursive function that eventually builds a bounded sequence of the last 10 values and then turns it into a bar graph or something like this. And this approach is really lovely in an environment that has a good visual layer over it.
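A plain-JVM-Clojure approximation of that example, just to show the shape of the idea; this is not Extempore's API or Maria's cells, only a function that re-invokes itself over an interval in time and keeps a bounded window of values that a visual layer could render as a bar graph:

    ;; Keep the last 10 random values, one new value per second.
    (def history (atom []))

    (defn tick!
      "One step: append a random number, keep only the last 10."
      []
      (swap! history (fn [h] (vec (take-last 10 (conj h (rand-int 100)))))))

    (defn run
      "Temporal recursion: do one step, wait, then call yourself again."
      [stop?]
      (when-not @stop?
        (tick!)
        (Thread/sleep 1000)
        (recur stop?)))

    (def stop? (atom false))
    (future (run stop?))  ; evolves in the background; (reset! stop? true) halts it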
Oh, and if I'm plugging things, I always have to plug Sam Aaron, who is doing fantastic work with Sonic Pi. He's a good friend of mine, and he's doing amazing work, and everybody should support him monetarily as soon as possible. Go to his Patreon and give him money, because Sam is doing God's work.
Yep, yep, yep. Do you know, is he still working on Overtone at all, or is that on hold just for the sake of Sonic Pi? He's full-time Sonic Pi these days. I think a community of people has arisen
who keep Overtone bouncing along.
Cool.
There is one thing that I would love to see in the links,
and that is that there was a French designer and hacker
who wrote a really great paper
about using the lessons from the grammar of graphics
in the context of representing computer source code.
And I would love more people to see that paper,
because as far as I can tell, outside of his narrow discipline and computer scientists in his own country, it was not very widely read.
It really, in a
very short paper, covers
the things we're talking about, about spacing,
about contrast, and so on.
You shared two papers with me. One of them is "A Visual Perception Account of Programming Languages: Finding the Natural Science in the Art." And the other is "Unifying Textual and Visual: A Theoretical Account of the Visual Perception of Programming Languages." Excellent. Those are the very ones. I would love to send more people to those papers.
On the topic of papers that we love, one of my favorite papers is by Paul Fishwick.
Yeah, who was a professor of mine. Yeah. I wrote my first ray tracer in his graphics class in
probably 1991 when I was in grad school. Yeah. And it's a paper called Aesthetic Programming.
And in it, he outlines this wild idea that you can pick any kind of visual representation you want. It doesn't have to be based on the principles of design or of animation or anything like that; it doesn't necessarily need to have those sorts of first principles underneath it. You can pick anything. And if there are aspects of that visual representation (and I guess this would also apply to a sonic representation, or whatever kind of representation you want), if there's an aspect that you can uniquely identify, you can create a mapping between that aspect and some aspect of the lambda calculus, or whatever programming model you want to be working with.
I would say about that particular idea that it seems obviously true in the sense that, like what we get from Hofstadter, for example, about analogies as fundamental to human reasoning. If you can
generate an understanding of a correlation between two things, then humans can reason
about it using analogy very, very well. And it seems to be one of the ways that we work.
And also one of the ways that, for example, neural networks operate. When we train them,
we can teach them correlations and even long-range correlations
between different kinds of patterns. I think this is something fundamental that probably
terminates in information theory. Or if you think about information theory as a generalization of
probability or as its relationship to first-order Markov chains and so on,
it becomes pretty clear that that's really part of how we
deal with perceptual information over time. And so whether it's glyphs on a screen or whether
it's sounds we're receiving, or learning how to speak a language, maybe with no resources, ending up in a country where people speak a different language and picking it up on the go,
these are all examples of us doing that, of inferring analogies between things across domains
with very little
information to go on. So I think, yeah,
it's very clear that you could
start from anything.
Whether you should is another matter entirely, but it's just such a neat idea that it's stuck in my head all this time.
And will you be trying to explore things along those
lines in the context of some visual programming?
I'd love to see the experiments.
Yeah, unfortunately, oh, hell no.
It's a big ask, I admit.
Well, it's also that I feel like the approach it advocates is one of: imagine any kind of depiction that you want, any kind of interface, and work from that back to the model. And while I'm not a mathematician, I am an artist, I am very amenable to the idea that you probably want to have a little bit more certainty about what your underlying model is and establish that part first
and then figure out the depiction that you want.
Otherwise, you're going to end up
with all sorts of red herrings
and just baggage from misunderstandings
and it would be a lossier way of working, I feel like.
But just not to throw it completely out the window,
I feel like it would be wonderful for the creation of art pieces, like programming as an art practice
rather than programming just as a way to, you know, raise venture capital money, which is
not a very interesting objective from my perspective. Yeah, no, but it's, it's the one
that seems to get all the attention.
This mention of sort of obscure things that you learn to associate with other things in order to make art has reminded me of something that I probably would have wanted to plug before,
but I definitely think you should look at,
which is an audio environment for live coding music called, I think he pronounces it, Orca, although he does this design thing that I hate, which is that the A in Orca is actually a lambda. And so I want to say it as Oracle, but the GitHub username is hundredrabbits. Yeah, they're fantastic. Yeah, everything that they do is great. They have just such a beautiful, consistent aesthetic through all the tools they make, and I love the way they approach everything. And so, yeah, this tool strikes me as something that really does come from that place.
You just have to learn to analogize the things happening on the screen to their effects over time, and you get wonderful things out of it. Yeah. You know, and Hundredrabbits, which is... I can't remember his last name, but Lu Linvega, something Linvega, I think, but I don't know if this is an actual name or if it's a stage name, and I don't care. Yeah, exactly. Him and his partner, I believe at one point they were living out of a boat, they're living in a sailboat, and from the sailboat, sustained by funds from Patreon and wherever else, they are just producing absolutely wonderful tools for creating art and for, not necessarily writing code, but building interesting systems that do interesting things. And yeah, there are two figures there that I think our community could learn a lot from, not just because of their incredibly potent sense of aesthetics, but also their way of approaching programming as an art in itself and as an art practice, and what that looks like
just as a way of getting out of Silicon Valley's death grip on our field. Because I think there's a lot that we can learn from looking at the past, the 1970s. There's a lot that we can learn from looking at other fields related to programming, like video games. But there's also a lot that we can learn by looking at people who are using programming. Even the way they use GitHub is interesting and different from the way I use GitHub, or the way most people use GitHub. Their commit messages are an asterisk. Every commit message is an asterisk. That's an aesthetic choice. He's not going to be using git bisect to figure out, oh yeah, this was the commit where I made that change. It's something where every choice that they make about how they use the computer is a marked choice that is meant to be interesting for the people who are aware of the work that they're producing.
And it just leads to this, you know, like a very cohesive vision that I think is just fascinating.
Yeah. And I absolutely agree with you that it's not just their very well-developed aesthetic sense.
It's also their willingness to just come at everything from a different angle than anyone else would, and to be willing to do that, because so much of what is built is just a clone of something somebody else built, with no individual agency injected into the decisions from the earliest stage. And to see a team of people just go and make things that really speak to their imagination is wonderful. And just at a practical level, so the people listening to this show
are probably thinking,
well, that's all fine and good,
but I want to make tools that people can use
to solve real problems.
Like we've got the climate crisis or what have you.
So, well, it's all good to be having fun
making very, very cool character grid synthesizers
that are kind of programmable,
but also kind of like an art toy?
What can I take away from their work? And I would say that one thing that they excel at is making
an immediate impression on people and having like a masterful degree of control around how somebody
who's learning about their work for the first
time comes to understand what it is. And something I see a lot of programming tool developers
struggle with is that first moment where an audience arrives at your tool for the very first
time and they have to learn what it is that your tool does. And no matter how much, you know, Material Design you clothe the web page in,
and no matter how nice the rounded edges and the drop shadows behind your code samples are,
there's, I think, a real gap between what programming tool developers think they need
to say in order to sell their tools versus how people
who are encountering something for the first time need to be treated in order to really get what the
tool is about. And I feel like these guys do it masterfully, whether it's announcing tools by posting GIFs of them, so that, since it's a thing for playing with, you see somebody playing with it. Or if it's for making music, not only are you seeing somebody making the music with it, but you're hearing the music. All of their presentation, it's not a narrated video, like, you know, this is the tool for doing X, we included these features. They make the tools really fun and playful.
And then they invite you into those tools
by showing you people playing with those tools
and having fun.
And I think that there's an immediacy to that
that's missing from a lot of the stuff we do.
And I feel like if somebody's looking for a tangible thing to take away from them, that would be something I would look at first.
I would say two things motivated by what you've just said.
And one would be that I find the culture of using Bootstrap and Material Design and so on a kind of grotesque capitulation to sameness.
I don't think it's actually a good thing that people have given up on the idea of improving what they can produce visually or in terms of interaction in exchange for not having to think
about how they're going to do any of those things. And the second thing is to fugue back to something
we talked about before when you were talking about video games.
I think that no discipline in computing is as good as video game introductions at taking a new user who knows nothing about what they're about to experience and guiding them progressively to mastery.
They don't explain.
They just drop you in the world.
You begin working and you learn as you go.
And it gives you the right hint at the right time to become an expert in playing that game. And it's awesome to see when that goes awry: video games that you drop into and they throw up five walls of text explaining, first you're going to click on this, and then explaining what that's going to do, and it's like, that builds your power meter, and yada yada yada, and then you click okay, and then another screen comes up and says, after you've clicked your power meter thing and built up your mana, yada yada, then you have to go to the store. When tutorials in games go wrong, it's such a powerfully negative experience that it's a great way of showing by contrast how great video games normally are at this. So, totally in agreement with you there. One way that I'll play devil's advocate, though, is that not only are video games constrained in the domain that they span, but they also have
the ability to selectively widen and narrow the domain of things you need to know about
at any time, because it's a game, they can do whatever they want. So at the very beginning of the game, they can say, okay, use the left thumbstick to move forward and backward. And
you don't have to
know that later on in the game, you're going to be playing the controller like it's a bassoon with,
you know, 50 different fingerings. At first it can be very, very, very simple. And so I wonder,
do you feel like programming tools are in some way different in that they kind of need to be
unconstrained in
their domain? Or is that something that we might be able to work around as an industry? And if so,
how? One thing we've been experimenting with in the context of Maria.cloud is the idea of
language levels, where a new arrival to the environment might be in a simpler language level that has more informative error
messages and even factors away certain extremely error-prone parts of the language. An example of this would be that Clojure has a great many different arities of map, some of which just return a function that you can pass to a reducer later. And for someone who has just arrived at the language, it's a bit confusing when what's really happened is that you've forgotten to give a sequence to your map function, but what you get back is a function you don't know what to do with, now bound to some variable. So in the lowest language level, the beginner language level, we remove that arity,
and then we add it back in later as the curriculum shows them more and richer things that they might do.
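Concretely, the confusing case being described looks like this in standard Clojure (this snippet is not Maria-specific):

    ;; What the beginner meant to write:
    (map inc [1 2 3])      ; => (2 3 4)

    ;; What they wrote after forgetting the collection:
    (def xs (map inc))     ; single-arity map returns a transducer, a function

    ;; xs now looks nothing like a sequence, which is baffling at that stage:
    (sequential? xs)       ; => false
    (fn? xs)               ; => true
    ;; Maria's beginner language level removes this arity, and reintroduces it
    ;; once transducers show up later in the curriculum.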
So I think it is possible to do this kind of thing in the course of teaching a language
or introducing someone to an environment.
But as is obvious, because there are experts who will be frustrated by that, you have to
be able to fast forward through the indoctrination.
Yeah.
Skip tutorial.
Yeah.
Yeah. Yeah. So just to once again, play devil's advocate,
this sounds a little bit like it's a feature or an approach that is exclusively useful to
beginners. And that kind of creates a tension against the thing we talked about earlier about
wanting to make things that
are both beneficial for a beginner, but also beneficial for an expert. How do you feel about that tension? And is there a way of resolving it? Or is it just, you know,
you make the tutorial and you skip the tutorial and that's kind of as deep as it needs to go?
Yeah, I think that's as far as we're willing to go in terms of compromising the ability of experts in an attempt to make things better for beginners: to make special paths of learning through which they can go, that offer them a different language level, for example, or other different features.
But the overall programming environment should be full power for everyone who uses it. Right. Sort of like certain video games that shall remain nameless, of which I am very,
very fond, and I know certain listeners of this show will also be very fond,
where from the very beginning area of the game, you can get to the end sequence if you know just
what to do in just the right way and looking in just the right spot. And the entire course of the
game is teaching you how to see so that when you arrive back at the beginning, at the end of the
game, you go, oh, there's this thing that I didn't even know how to see that I can now see that lets
me get to the end sequence. And that idea, that because this is an artificial environment and because people are
going to need to be taught how to live and function in this environment, whether it's a video game
or a programming tool or what have you, you can play those sorts of tricks where you can say,
I know that since I've created something in here that people aren't going to be familiar with,
they're not going to know that it exists until I show it to them. And so I can use that as a way of getting through that
skip-tutorial kind of thing without actually needing the chrome of a skip-tutorial button. People who have been here before will know that when they hold Option-Shift, that brings up the controls for manipulating the outer shell of my environment, that sort of thing. So there's definitely a lot we can do here to not feel these tensions between these different pushes that we have, you know, using design, minimizing chrome, that sort of thing. There are ways to find harmonies between them. Video games
have explored those kind of things,
but lots of other places, the arts as well.
There's lots and lots of exploration in the arts
that I think would lead to similar kind of ideas.
Absolutely. Absolutely.
I agree with everything you just said.
And, because I listened to some of the stuff that you've posted on SoundCloud, do you have, like, albums that you've put together?
Well, so, back in the bygone days, I don't know how old of a person you are, but perhaps when you were just a child in the 90s, I toured with many bands and recorded with a bunch of bands and have major-label releases with those, but I don't own the rights to any of that music, because I had the experience that everyone does in the music industry. So yeah, I had all the experiences of playing with, I mean, pretty much any touring artist you've heard of who was extremely famous in the 90s, I played in a band that opened for her. That was what my 90s experience was like, when I wasn't, you know, programming computers. Or competing in sports. Yeah, all that stuff is just gone, gone, gone.
But currently, because I'm in Berlin, and it's actually required as a part of the terms of my residence that I create some electronic music, I've bought a Push 2 and a copy of Ableton Live, and I'm currently learning how to use that. Which, given my musical training, actually I minored in classical composition when I was at school, so it was all string quartets and things, and I played in the jazz orchestra and so on. It's very analog, my music history. And now I'm trying to make friends with these tools, and I'm an absolute beginner again, and I'm having a wonderful time. And probably there will be an album of new songs sometime late this year or early next year.
Awesome. Yeah, any other plugs, just while we're wrapping up the conversation? That's all the stuff at the top of my mind, I'm sad to say. Probably the moment we're done I'll remember three or four things that are desperately deserving, but that's all I've got at the moment. This is future Ivan, just recording a follow-up.
True to form, Jack sent me a couple of extra links
right after we wrapped up recording,
and I would be remiss to not include them.
One of them is Karsten Schmidt, @toxi, T-O-X-I, on Twitter. He is a very, very cool programmer who works on this project called thi.ng, T-H-I dot N-G. He has a TypeScript project called thi.ng umbrella, and the original thi.ng project in Clojure is similar. They're these very, very cool monorepos full of resources for building computational geometry and interactive visualizations, and just all sorts of wild ideas, and reading his code is like an education in itself. Jack says that he has a series of blog posts discussing his take on Clojure's reducers as implemented in TypeScript that he recommends to our community. The one other link that he wanted to sneak in under the wire is Nextjournal, which is nextjournal.com. They are doing work on a multilingual online scientific notebook environment, so do go check that out as well. And now back to your regularly scheduled
programming. The ultimate last question that everybody asks at the end of podcasts,
where can people find you on the internet? Well, I'm Jack Rusher. That is my actual name.
And my username on pretty much every service you might care to find me on is my first name and my
last name concatenated with no hyphen or space between.
So you can find me on Twitter
if you like pictures of beautiful nature
and my digital art pieces.
You can find me on Instagram.
I have a website, jackrusher.com.
Awesome.
Well, Jack, thank you so much
for taking the time to talk to me
and to share your perspective on this field that we're all fighting against and fighting for.
And I really appreciate it.
It's been a real pleasure speaking with you, and I hope that we meet someday.
Yeah, that would be great.
And that brings us to the end of the show.
I hope you enjoyed it.
You can find a whole bunch of other episodes of the podcast at futureofcoding.org.
There are also links to the Slack group where a bunch of us in the programming tools and
programming languages community get together and talk about our projects and talk about
our big dreams for what the future of computing might look like.
If you really like the podcast and you
want to support it, leave a review, go to iTunes and just say what you think. That really helps
people discover the show. Tweet about it. You know the deal. Another way you can support the show and
the community is to back Steve on Patreon. He does a ton of legwork to organize meetups, to promote people in the
community to his audience, and to help everyone learn from everyone else. It's a tremendous
service and anything you can do to help out Steve helps out all of us. If you want to get in touch
with me, I am on Twitter at Spiral Ganglion, which is the nerve that connects the ear to the brain. You can, of course,
find the show at futureofcoding.org. Thanks to Jack Rusher for coming on the show,
and I'll see you in the future.