The Data Stack Show - 09: Building the Operating System for Work with Ivan Kanevski of Slapdash
Episode Date: October 7, 2020
On this week's episode of The Data Stack Show, Kostas Pardalis and Eric Dodds are joined by Slapdash co-founder Ivan Kanevski. Slapdash describes itself as the operating system for work and emphasizes reducing the time people spend controlling their computer relative to the time they spend expressing their intent.
Key topics discussed:
- Starting Slapdash and expanding on tools from working at Facebook (3:31)
- Being client agnostic and working with the tools that people bring to the job (7:35)
- Distinctions between mouse-centric and keyboard-centric users (12:58)
- Slapdash's approach to collecting data (16:08)
- Building Slapdash to scale and using Postgres (19:45)
- Using a graph model and a focus on efficiency (24:50)
- Challenges of reducing latency (29:35)
- Opening up Slapdash to be programmable (38:17)
The Data Stack Show is a weekly podcast powered by RudderStack. Each week we'll talk to data engineers, analysts, and data scientists about their experience around building and maintaining data infrastructure, delivering data and data products, and driving better outcomes across their businesses with data.
RudderStack helps businesses make the most out of their customer data while ensuring data privacy and security. To learn more about RudderStack, visit rudderstack.com.
Transcript
Welcome back to the Data Stack Show, here with Eric Dodds and Kostas Pardalis.
Today we have a fascinating couple of guests for you.
They're from a company called Slapdash.
Slapdash says that they are building the operating system for work. If you've ever used a tool like Alfred or automated workflows on your desktop from the command line, this will be a really interesting product for you.
Kostas and I have actually been using it for the past couple of months, and we had so many interesting questions about how it worked that we reached out to the founders
and asked if they would join us and let us ask them all the questions. So really interesting.
I think one of the things that I'm interested in is that it's kind of like IFTTT. When we talk with them, there are different categories of data in the product: one is the user data that represents user actions, and the other is that there are a bunch of jobs running because there are tons of integrations. So I'm excited to ask them how they manage those two different categories of data. Kostas, what are you interested in from an engineering perspective specifically?
Yeah, I think just the use of the words "operating system" is something that resonates a lot with me, you know, Eric. Operating systems have two important characteristics. One is their complexity: they are extremely complex pieces of software. The other is abstraction: they are masterpieces of abstraction, if you think about how they abstract the hardware and the actual silicon and make it available to people who, at the end, just go and browse a file system and see a picture. And I understand that the reason they chose this term is, of course, because it makes it easier for people to understand the scope of the product. But I think we are going to have a very interesting discussion about how they managed to build a very complex system, because interacting with all these different cloud applications and creating an open and extensible platform is extremely complex. It will also be very interesting to see what kind of abstractions they managed to put in place in order to interact with all these different data and services and create a unified and smooth experience for the customer at the end.
So I'm very excited to see what Ivan has to say about the product and how they approach
the development of Slapdash.
I agree.
And I'm also going to ask them if we have time, how in the world they made searching
Google Drive files faster and better than Google did, because that's a pretty amazing
task.
So let's dive in and talk with Ivan and Lester from Slapdash.
Let's do it.
Lester and Ivan, welcome to the Data Stack Show. We're really excited to have you.
Cool. Thank you.
Thank you.
I'd love to start out with just the quick story. I know we like to dig into the tech,
but just have a quick story on where the idea of Slapdash came from and kind of the journey you've been on and where
you're at today as a company. Cool. I can take a crack at that. So Slapdash, the kind of the
genesis of it came from working at a big company like Facebook, which is where I was before I left
to start Slapdash. And one of the unique parts about working for a company like that is there's a dedicated team of about a hundred engineers that's focused on building productivity tools for the rest of
the company. And so once you kind of get acclimated to using these pretty unique tools, when you leave
Facebook, it feels like you're missing something. So effectively, what happened was as soon as I
left Facebook, I was interested in sort of building tooling. And immediately I ran into effectively this notion that I'm missing something. And what
made Facebook's internal stack so special is it was really focused on integrating the whole
information space. So if I had a question about what does a company know about a certain customer,
it was one search away. If I wanted to answer a question about a colleague, it was also a search
away. So this notion of having an integrated information space for work was really compelling.
And as we started to sort of understand what it means to build something like this outside of
Facebook, it was very clear what we had to do is we had to connect the cloud applications that
people use on a day-to-day basis. And so, you know, after call it two years of prototyping,
building the team and actually
building the product, we find ourselves at a point where we can kind of see clearly what
we're trying to achieve with Slapdash.
And the way we try to frame it, the way we frame it today is what we're looking to achieve
is we're looking to cut down the amount of time that people spend controlling their computer
in relation to the time they spend expressing their intent.
And I can kind of illuminate that with a quick example. So today, for example, if you want to
file an issue on GitHub, you're opening a browser, you're navigating to the repository, you're clicking
a new issue button, and then you're finally starting to type the title of the task. Now,
all that time that you just spent was effectively you just controlling the computer. The actual intent you had was to start writing that task title and task description.
So with Slapdash, we're trying to rethink the physics of what it means to use cloud applications,
restore some of the affordances that people lost when software transitioned from desktop to cloud,
and in the end, make people more productive with the tools they already use.
So that's, call it, the framing. I mean, I think from there we can dig into the component parts. And it's always kind of interesting to lean into metaphors and similes to describe what Slapdash provides, because it's quite a new category.
But I'll pause there for any elucidation that I can offer.
Yeah, that's well, I mean, to me, it's fascinating. And I mean, I've been using the product for the past two months. And I had this moment where I, I guess I just had never thought
about the fact that computer input really hasn't changed in a significant way in a really long time.
I mean, you have gestures and some track pad type things
that have come out that are interesting
and I think helpful to some extent,
but it hasn't really changed that much.
And so to me, it was just great
because I thought, man, this is just a space
that is ripe for change.
Well, I'll kick off getting into the technical stuff
by asking kind of a leading question that will probably inspire Kostas to ask a bunch more
questions, but the app runs on your desktop. So it's this interesting paradigm of you're building
essentially an interface with cloud tools,
but it runs on a computer.
I know that from a technical standpoint,
it's changed a lot, you know, even from five years ago.
And you think about actually building software
that runs on a computer operating system,
but would love to know,
as you thought about building a service,
you know, as opposed to just a cloud web app,
but something that runs on someone's computer,
what were some of the things you had to think about
in terms of architecting your app
that you wouldn't necessarily face
just building sort of a normal, quote unquote,
cloud application that runs in the browser?
Totally.
So I think one of the things that's meaningful
about Slapdash is that we have this internal philosophy
where we like to say that we are client agnostic.
And what that means is we actually don't care where people use Slapdash.
You know, in some sense, Slapdash allows you to bring your own tools to the job, right?
So whatever project management tool you use will support it.
Whatever document editing will support it.
We're not trying to sort of displace those tools.
But as importantly is where you like to work.
So some people are very comfortable in the browser.
Some people are more comfortable in Slack.
And of course, everybody kind of brings their own operating system to bear.
So for us to sort of, because we're sort of very much focused on speed, one of the things
that we wanted to do is we wanted to bring Slapdash close to where people work.
So in other words, the client where it's actually deployed is kind of less material.
However, we think the desktop experience is the best way to experience Slapdash.
And the reason why is that to really kind of take a leap in terms of changing the physics
of how you work with cloud applications, you need to connect three layers.
Number one, you need to connect the data layer, which is the structure and the contents of all the applications you use.
Number two, you need to likely connect the browser layer,
because frankly, most of your work still happens in a browser.
And a third part is you need to bridge the desktop layer as well,
because ultimately, that is sort of the pane through which you interface with everything else,
including the browser.
And as far as sort of, so in terms of our architectural considerations,
one of the things that was important to us was to be able to have this really broad footprint.
To do that, we, of course, leaned into web technologies.
We actually spent a lot of time getting good at Electron, which is what's responsible for delivering the desktop app experience.
And the other aspect of this, too, is thinking about the interaction model as well.
So in some sense, even though metaphorically we're building this operating system for work,
most operating systems can be reduced down to kind of a command line like interface.
And the neat part with how we're designing the interaction model is that, you know, we
can effectively invoke Slapdash through something as simple as Slack chat.
We can invoke it, you know, within sort of the Chrome location bar.
So that's kind of how we thought about it.
So in short, desktop technologies
with an emphasis on broad deployment
and heterogeneous deployment.
That's very interesting, Ivan.
I have a question, and I would like to start by going, let's say, top down through the different layers that you mentioned earlier. I would like to ask you about the experience of introducing this command line experience on the desktop, which for me, and probably for every engineer out there who needs to use a command line in their everyday work, feels pretty natural. Personally, I really enjoy it, and it reminds me, because I'm also quite old, of starting with vi and just a terminal, then going to user interfaces, then going back, and then trying to merge these things together. I find this evolution extremely interesting. But from your experience, also with your users,
how do you see people adopting this new paradigm of command line
as part of the graphical user interface?
And what are the challenges?
And what do you find really exciting about it?
Especially for non-technical people, right?
Definitely.
Yeah, so I think what we're finding,
and once again, our thesis is about expressiveness,
right?
So in other words, choose the apps and engage with Slapdash where you work.
And so I think the other part of what we do is we have this sort of focus on ergonomics.
So this idea that different people have, call it different capacities, and they use the
computer in different ways. So in general,
on our end, we categorize people into two categories. There's the keyboard-first individuals
and the people that are more comfortable with a mouse. And we're sensitive to provide affordances
for both those people. Now, in terms of the kind of the major problem or the major sort of delta
where Slapdash really augments your workflow, the fastest way is, of course, to learn our command line interface. But generally what happens is that, oftentimes, the early adopters that are very keyboard centric will pick up Slapdash and fall in love with the command bar, which is what we call it. But then they'll find themselves discovering and using the rest of the
product, even kind of in more conventional ways.
So in other words, the command bar oftentimes acts as a sort of interesting entry point,
but plenty of people find kind of value in just having kind of this unified surface
for your cloud applications.
So I'm not sure if sort of that kind of touches upon specifically what you asked,
but I'm happy to clarify or dig in further.
Yeah. So, I mean, okay, I understand this distinction and it makes a lot of sense. Maybe it's also a little bit early for the product, because it's still a new product, but how do you educate the people from one group to also adopt the other? Or do you not care about that, like someone who's not keyboard centric? Because, okay, working with a keyboard is by definition more efficient, right? But is this something that you do as part of the product, something that you care about? And if so, how do you do that, and what are the reactions of the customers to that?
Cool. Yeah, so generally speaking, what we do is, when we have the opportunity, we try to have kind of closer conversations
with our early customers. We try to have effectively directed onboardings. We call our
onboardings ergonomic fittings. And so rather than try to push a certain style of working on someone,
we try to discover how people work and try to match them up with a set of features in Slapdash that would best benefit them. So we don't try to effectively make mouse people into keyboard people. I think that's going to be an investment that we make a little bit down the line. We have some really interesting but unshipped ideas around this, but at the moment we're less interested in bridging the education gap and more so in fitting people to the features that they would benefit from. What about you, Eric? Are you a keyboard
or a mouse person? A keyboard all the way. I was actually, I did an ergonomic fitting
with Ivan and Lester and I am a heavy or actually was an extremely heavy Alfred user. So I don't know if any of our listeners use Alfred
for sort of their desktop
sort of keyboard workflow experience.
But yeah, I try and do,
I mean, let me put it this way.
I use Emacs keyboard shortcuts
and have remapped the caps lock key to be control
so that I can navigate around text documents
without having to touch the mouse. It's pretty bad, but it also just gets the machine out of the way. I actually hadn't thought about it the way you put it so eloquently, Ivan, but someone showed me that and I started doing it when writing, and it was just such a better way to remove the barrier of my thoughts getting into a text document, which is really the same sort of experience that you're trying to create with Slapdash, which is interesting. Getting into some more of
the data components. One thing I'm really interested in is you, and I would love to know if this is even the way that you think about it, but as a user of the app and sort of looking at it from the outside, there seem to be two broad categories of sort of app type behavior related to users. So one would be the actual user behaviors, right? Which would drive product
analytics and, you know, other use cases like that, right? So invoking the commands, running
the commands, like the things that represent user actions when using Slapdash. And I have another
follow-up question there, but the other major category would be the data that's produced by
the commands themselves running,
because I would think that there's sort of diagnostic information around how Slapdash
interacts with other applications that is really critical, right? So failures or errors or other
things like that. Is that even the way that you think about data and would love to know,
how do you approach sort of collecting and using such sort of different
types of data? Totally. Yeah. So I think there's probably another pillar that's more, I guess,
at the top of mind for us as well, which is how we store the actual data as well, right? So at its
core, you know, Slapdash, what do we have? What do we do?
We solve a graph replication problem.
In other words, when we connect to an application like Asana or Monday, we'll kind of take the
structure of those applications and build one giant graph on Slapdash.
And so a lot of the interesting things we do are within that layer.
And of course, in terms of how do we then sort of focus on actually
building a better product, and that, of course, you know, the kind of the third party data stack
is where it really fits in. And I think your categories are pretty correct in some sense,
in terms of how we kind of segment our data syncs, if you will. We certainly have the traditional
kind of analytics. So we lean into Google Analytics and Amplitude to understand user behavior.
And as far as, call it, giving our infrastructure observability and understanding how things are working, that we actually delegate to a product called Honeycomb, which we're quite fond of. But everything else we try to do as much as possible in our own systems. We try to keep effectively a very portable infrastructure to have some optionality for deployment strategies. What can I dig into from there? I'm not sure if that answers your question.
Yeah, no, that's super helpful.
And because I work with RudderStack, I have to ask: are you using native SDKs from Amplitude to capture user behaviors, or how is that instrumented, and how are you getting that information to Google Analytics?
Totally. I mean, you might not like the answer, but we do use Segment internally. It's a great tool, actually. And we built our own endpoints just because we found a lot of issues in terms of certain analytics events being dropped. So we had to build effectively a lightweight proxy for a lot of the events. But otherwise it's been pretty sustainable.
And once again, our focus has been more on the internal data stack rather than
call it piping it out just based on sort of where we are as a company.
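To make the proxy idea concrete, here is a minimal sketch of what a lightweight first-party event proxy can look like, assuming an Express server and a hypothetical vendor ingest URL; the route and payload handling are illustrative, not Slapdash's actual implementation.

```typescript
import express from "express";

const app = express();
app.use(express.json());

// Accept analytics events on a first-party path so fewer of them get dropped,
// then forward them to the vendor asynchronously.
app.post("/t/track", async (req, res) => {
  res.status(204).end(); // acknowledge immediately; never block the client on the vendor
  try {
    await fetch("https://analytics.example.com/v1/track", {
      method: "POST",
      headers: { "content-type": "application/json" },
      body: JSON.stringify(req.body),
    });
  } catch (err) {
    console.error("analytics forward failed", err); // a retry queue could go here
  }
});

app.listen(3000);
```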
Very cool.
Got it.
Yeah, that's, yeah, that's super interesting.
Kostas, any, I know there are tons of questions brewing in your mind and I've kind of been dominating the conversation.
Ah, no. To be honest, the question that I've wanted to ask from the first time that I met with the team is about the experience of building an infrastructure that has to interact with so many different services out there. I mean, I have my own experience here at RudderStack of what that probably means, and also before that with Blendo, where we had to integrate with many different services on the cloud. But I'm really impressed with Slapdash, because Slapdash has to, let's say, integrate with all these services in a much tighter way, right?
Like you have to pull the data, you have to allow to share this data, and then you have to also interact back and create actions and all that stuff.
So to me, at least, knowing the complexity of interacting with something, even with one service like Zendesk, for example,
or something like that,
I find it's extremely challenging
to build a service like this
and ensure the quality of the service that you're providing.
So yeah, how's the experience of doing that, Ivan?
And what kind of abstractions help you do that
at that scale at Slapdash?
Totally. And this is a really deep topic that I can talk about forever. Most of the credit here belongs to our CTO, Dima, who's an
amazing technologist. And really, the origin of us being able to manage the complexity of, call it, synchronizing billions of things comes from our experience. Dima and I both previously worked at Facebook.
And we had to build a kind of an analogous infrastructure there
and where the responsibility of the infrastructure was to,
effectively, the goal was to have a copy of every product in the world,
so things that are bought and sold.
So if Amazon has a product, we want to know the price,
all the photos, all the descriptions.
And so we built something similar there
where it was effectively an ingestion system. It had aspects of sort of this graph replication problem,
certainly had a scale problem in terms of has to be real time, has to be high throughput,
both on the read and write end. And so we had a kind of effectively an opportunity to build a V1
of this architecture there. And so when we started to build what we have today on Slapdash, one of the first things
that we solved was the data store itself. And so in terms of, I think it's always helpful
when the abstractions that you're working with really map to the domain nicely. And for us,
to reason about these applications as being effectively just graphs that were replicating to our end
was kind of a helpful abstraction. And so to support that, we built this graph database on
top of Postgres. What we have to solve there, of course, were issues around scale and data
isolation. And so of course, right now we can scale into the billions, kind of no problem.
And so that was kind of the first problem that we had to solve to manage the complexity: that it can all be stored.
And then there was a synchronization part.
There, I think the real kind of killer tool is infrastructure kind of observability.
So we have so many, we have millions of events running through the system, but you really
want to have a really high signal feedback when things are going wrong or things are
breaking.
And for that, you know, internally at Facebook,
there was a product called Scuba.
And thankfully, when we left Facebook and we looked around,
there was an analog in a product called Honeycomb.
So that is actually one of the things
that allows us to sort of manage this,
call it these billions of events on a daily basis
and understand when things are healthy, when they are not.
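As an illustration of the "high-signal feedback" idea, here is a minimal sketch of emitting one wide, structured event per unit of sync work so an observability tool like Honeycomb can slice by any field; the field names and ingest endpoint are hypothetical, and this is not Honeycomb's actual client API.

```typescript
// One wide event per sync operation: integration, timing, volume, and outcome
// all on the same record, so unhealthy integrations stand out immediately.
interface SyncEvent {
  integration: string;   // e.g. "asana", "github"
  operation: string;     // e.g. "replicate_node"
  durationMs: number;
  nodeCount: number;
  error?: string;
}

async function emit(event: SyncEvent): Promise<void> {
  await fetch("https://observability.example.com/ingest", {
    method: "POST",
    headers: { "content-type": "application/json" },
    body: JSON.stringify({ timestamp: new Date().toISOString(), ...event }),
  });
}

async function replicate(integration: string, work: () => Promise<number>): Promise<void> {
  const start = Date.now();
  try {
    const nodeCount = await work();
    await emit({ integration, operation: "replicate_node", durationMs: Date.now() - start, nodeCount });
  } catch (err) {
    await emit({ integration, operation: "replicate_node", durationMs: Date.now() - start, nodeCount: 0, error: String(err) });
    throw err; // surface the failure to the caller after recording it
  }
}
```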
And I think part of it too,
well, you know, one of our strategies was to launch early, launch early and have the product
be open. So we have more people trying to connect applications so we can kind of cover those edge
cases. And as a result, we've been kind of on effectively version three of our current ingestion
architecture. So I'll pause there, because I can dig into any layer, but yeah, it's been a process. Let's put it that way.
Yeah. So you said that you have actually built a kind of graph database on top
of Postgres. Is this correct? That's right. Yeah. And why did you choose Postgres to do that and
not just use a graph database like
Dgraph, for example, out of the box? I'm very interested because we did something similar, let's say, at RudderStack, because we needed to build some kind of queuing system and stream the data, and we also used Postgres for that. And I find it very interesting that Postgres is becoming the kind of database system that many interesting projects are built on top of, using it as the underlying storage layer to build something completely different from what Postgres was intended to do.
So yeah, I would be really interested to hear the story behind this.
Yeah.
So I think we have this philosophy that we like to keep things super
vanilla. I personally had experience in the past where I adopted a tool that was a little bit more
nascent when trying to build something brand new. And what I learned from that experience is that
you end up fighting the tooling or the lack of tooling more than you do in terms of making
progress against the problem. And so when possible, we try to use kind of the most tried
and tested pieces of software when possible.
And to be fair, we borrowed that sort of approach
from Facebook, which also has a graph database,
which in practice is built on top of MySQL.
And we chose Postgres just because at this point
it has better qualities and better features than MySQL when we evaluated it.
But it was more so about the maturity and the tooling and also seeing an architecture that was really successful that was built in a similar way.
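To make the "graph database on top of vanilla Postgres" idea concrete, here is a minimal sketch of one common layout: a nodes table and an edges table queried through the standard pg client. The table names, columns, and queries are illustrative assumptions, not Slapdash's actual schema.

```typescript
import { Client } from "pg";

const client = new Client({ connectionString: process.env.DATABASE_URL });

async function setup(): Promise<void> {
  await client.connect();
  // Nodes carry a type plus a JSONB bag of properties; edges are typed links.
  await client.query(`
    CREATE TABLE IF NOT EXISTS nodes (
      id         BIGSERIAL PRIMARY KEY,
      type       TEXT  NOT NULL,           -- e.g. 'project', 'task', 'document'
      properties JSONB NOT NULL DEFAULT '{}'
    );
    CREATE TABLE IF NOT EXISTS edges (
      src_id BIGINT NOT NULL REFERENCES nodes(id),
      dst_id BIGINT NOT NULL REFERENCES nodes(id),
      label  TEXT   NOT NULL,              -- e.g. 'contains', 'authored_by'
      PRIMARY KEY (src_id, label, dst_id)
    );
    CREATE INDEX IF NOT EXISTS edges_by_src ON edges (src_id, label);
  `);
}

// Traversal is an indexed edge scan plus a join: "what lives inside this folder?"
async function children(folderId: number) {
  const { rows } = await client.query(
    `SELECT n.*
       FROM edges e
       JOIN nodes n ON n.id = e.dst_id
      WHERE e.src_id = $1 AND e.label = 'contains'`,
    [folderId]
  );
  return rows;
}
```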
Yeah, that's great. I mean, that's pretty much aligned with the reasons we decided to do something similar. Another question about the graph database: why did you choose a graph model to model your data? What are the advantages given the problem domain that you are interacting with, and why is this, at least for Slapdash, the best way to represent data, relationships, and entities? What kind of added value does it give from a product perspective and a technology perspective?
Yeah.
So one thing that is distinct about Slapdash, one of our big focuses is speed.
And effectively, one of the ways that we get to the, call it the fast experience that we have today
is that we have effectively abstractions that fit each other from call it the storage layer
all the way to the client side. So we have the graph database. We have a database access layer
with graph-like semantics, which maps almost directly to our GraphQL API.
And of course we have,
as most modern applications are today,
the client side is expressed in some form of JavaScript
with kind of a GraphQL as the main sort of
transport mechanism.
So it turns out that there are certain optimization techniques available to us that are really natural with a GraphQL retrieval model. One of the things we were able to build, and it's very helpful in terms of the graph database, is that we can do things like batching and coalescing in much more natural ways.
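Here is a minimal sketch of the kind of request coalescing a graph-shaped resolver layer makes natural: every node lookup issued in the same tick is folded into one batched query. The NodeLoader class and fetchNodesByIds helper are hypothetical, in the spirit of a DataLoader, not Slapdash's actual code.

```typescript
type Node = { id: number; type: string; properties: Record<string, unknown> };

// Stand-in for one round trip to the graph store, e.g. SELECT ... WHERE id = ANY($1).
async function fetchNodesByIds(ids: number[]): Promise<Node[]> {
  console.log(`fetching ${ids.length} nodes in a single query`);
  return ids.map((id) => ({ id, type: "task", properties: {} }));
}

class NodeLoader {
  private pending: Array<{ id: number; resolve: (n: Node) => void }> = [];
  private scheduled = false;

  load(id: number): Promise<Node> {
    return new Promise((resolve) => {
      this.pending.push({ id, resolve });
      if (!this.scheduled) {
        this.scheduled = true;
        // Coalesce every load() call made in the same tick into one flush.
        queueMicrotask(() => this.flush());
      }
    });
  }

  private async flush(): Promise<void> {
    const batch = this.pending;
    this.pending = [];
    this.scheduled = false;
    const ids = [...new Set(batch.map((p) => p.id))];
    const nodes = await fetchNodesByIds(ids);
    const byId = new Map(nodes.map((n) => [n.id, n]));
    for (const { id, resolve } of batch) resolve(byId.get(id)!);
  }
}
```

With this shape, a resolver that touches fifty tasks in one request still issues a single storage query per tick rather than fifty.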
But in practice, you know, the graph data structure is just really versatile.
I mean, maybe the alternative is that we could have started with something more tree-like, but the reality is that when you look at the world and you try to model it, it turns out that the graph is almost irreducible. And a lot of the habit of looking at the world as a graph once again comes from working at a company like Facebook, which literally looks at every problem as a graph problem. So it was already quite natural, and when we tried to apply it to, call it, the data model of these applications, it just continued to work. A lot of it was discovered in some sense, in terms of what we can do and what we can achieve, but at the moment, after having built a lot of the system, it continues to offer us a pit of success in terms of our infrastructure and engineering problems.
Right.
So this is the way that you structure and represent the data that you're pulling from
these services, right?
Like you create some graphs there.
Are the interactions also part of this graph model, or is there a separate system that just interacts with it to figure out what to do? How have you architected these two different facets of the product? One is the interactions, which is more, let's say, the extroverted side of the product, where you have to interact with the outside world. And the data model is the way that you pull the data and model it to make people able to interact with the data from inside Slapdash. So what's the relationship between
these two? Yeah, so there are, so when we actually started to build this, it was important for us to
sort of not just think of this as a read-only platform. So when we do think of, let's say,
nodes in a graph, so what is a node in our graph? So a project is a node in our graph,
a task is a node in our graph, a document is a node in our graph. We think about not only sort
of effectively what are the relationships between this node and other things, like what's the
relationship between this folder and the items inside it, but we also think about the properties
of the node. So in other words, we will understand, for example, certain types of nodes have the
ability to, for example, to close a task.
They're effectively modeled as a task and you can close it.
Or they have this property where you can rename it.
So part of it is discovering the common denominator and really operating with these nodes, which have, in some sense, a variable set of properties associated with them, the different capabilities. So that kind of fits into that worldview.
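A minimal sketch of that worldview in types: nodes of different kinds share a common envelope, and their per-node capabilities are data that the UI can query before offering an action. The kind and capability names are illustrative, not Slapdash's actual model.

```typescript
interface GraphNode {
  id: string;
  kind: "project" | "task" | "document" | "folder";
  title: string;
  // Capabilities vary per node and per integration, so they live in data
  // rather than in the type system.
  capabilities: Set<"close" | "rename" | "comment">;
}

function availableCommands(node: GraphNode): string[] {
  const commands = ["Open", "Copy Link"];
  if (node.capabilities.has("close")) commands.push("Close Task");
  if (node.capabilities.has("rename")) commands.push("Rename");
  return commands;
}

const task: GraphNode = {
  id: "asana:123",
  kind: "task",
  title: "Ship onboarding flow",
  capabilities: new Set(["close", "rename"]),
};
console.log(availableCommands(task)); // ["Open", "Copy Link", "Close Task", "Rename"]
```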
When we're talking about commands, and whether there is a formal way to reason about those things, at the moment they live on a higher plane of abstraction, and maybe they're less related to the graph itself. They're mostly, call it, adding new objects to the graph, but I think the graph semantics are less important there right now. We're always thinking about formalism and reduction and really reasoning about our system with the rigor of, let's say, a file system API. But again, I think a lot of what we have today is certainly emergent and based on having tried to build it.
It's very interesting. Very, very interesting.
So as a customer, as a user of Slapdash, what kind of consistency should I expect between interacting with the data from within Slapdash and interacting directly with the service? What kind of latencies are there between syncing the services, are there issues that you have seen there, and is there anything interesting from this problem that you have encountered and found an interesting solution to?
Yeah, that's actually probably, you know,
we have a lot of problems that are evergreen
and I think that is one of them.
I think depending on the integration,
it varies in terms of fidelity.
So certain integrations like GitHub and Google Drive, they're going to be very close to real
time without much effort.
But the way we approach the problem, we take different tacks at it.
In other words, we try to be excellent from strictly a server-side integration.
However, if you have a browser extension installed, we use that to kind of augment the graph as well.
So the idea being that if, let's say you connect Asana, which has like a five-minute sort of sync window, but in that timeframe, you, let's say, create a new task, it still should be possible for that task to appear directly and immediately in Slapdash because you have our Chrome extension installed. In other words, we kind of take a
multi-pronged approach, but we always opt to effectively lead with a server-side integration.
I don't think we're done on this front. And I think we have a couple of kind of creative
augmentations that we're going to be releasing to kind of keep it closer to real time. But that's
just the nature of what we do is trying to build this kind of real-time sync layer, which is not available by default, of course.
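A minimal sketch of that browser-side augmentation, assuming a hypothetical content script and upsert endpoint: when the extension notices a task being created in the page, it pushes a provisional node so the item is visible before the next server-side sync window. The selector, payload, and URL are illustrative only.

```typescript
async function upsertProvisionalNode(node: {
  externalId: string;
  kind: string;
  title: string;
  url: string;
}): Promise<void> {
  await fetch("https://api.example.com/graph/upsert", {
    method: "POST",
    headers: { "content-type": "application/json" },
    body: JSON.stringify({ ...node, provisional: true }),
  });
}

// Very simplified detection: watch the page for a newly added task row.
const observer = new MutationObserver((mutations) => {
  for (const mutation of mutations) {
    for (const added of mutation.addedNodes) {
      if (added instanceof HTMLElement && added.matches("[data-task-id]")) {
        upsertProvisionalNode({
          externalId: added.getAttribute("data-task-id")!,
          kind: "task",
          title: added.textContent?.trim() ?? "",
          url: location.href,
        });
      }
    }
  }
});
observer.observe(document.body, { childList: true, subtree: true });
```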
I'm interested to know, because it's interesting to think about how this has changed with access to the internet, but do you face any issues around internet speed? Because when you're interacting with the cloud, internet speed affects your ability to use cloud services in general.
I'm interested to know how you think about that problem because in some ways, especially if you
download the desktop version of Slapdash, you sort of expect, you know, almost immediate
response, even though if you step back, you realize that you're interfacing with cloud
tools.
So I would love to know about any sort of challenges or interesting things on that front.
Yeah, I mean, I think this is one of the things that is really important to us, this notion
of delivering the experience with minimal latency.
There's areas where we're excellent and there's areas that we're still improving.
But the idea for us is to always bring it down to as close as zero latency as possible, right?
I mean, look, like when you're browsing the files in your computer,
you're not waiting for the folders to load, right?
And so we want to be able to deliver that type of experience, that fidelity of experience.
And what are we thinking in terms of how we can bridge the gap? So number one, of course, our infrastructure and architecture
is a big part of it in terms of minimizing latency. One of my favorite tricks that we have
within the product is that anytime you hover over, let's say, a link in the product or even an option
in the command bar, we always try to anticipate that you will be kind of effectively hitting that link,
so we preload it.
So in practice, the neat part about this is that what that allows us to do
is it cuts about 50 milliseconds of perceived latency.
And if our server response is within that timeframe,
we achieve effectively on balance a zero latency experience.
So that's one of the things that we do.
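Here is a minimal sketch of that hover-prefetch trick on the client side; the data-preview attribute and prefetch helper are hypothetical, but the shape of the idea is simply to start the request on mouseover and reuse it on click.

```typescript
const cache = new Map<string, Promise<unknown>>();

function prefetch(url: string): void {
  // Start the request once; later callers reuse the same in-flight promise.
  if (!cache.has(url)) {
    cache.set(url, fetch(url).then((res) => res.json()));
  }
}

document.addEventListener("mouseover", (event) => {
  const link = (event.target as Element | null)?.closest<HTMLAnchorElement>("a[data-preview]");
  if (link) prefetch(link.href); // typically fires tens of milliseconds before the click lands
});

async function open(url: string): Promise<unknown> {
  // If the hover already warmed the cache, the click resolves with no extra wait.
  return cache.get(url) ?? fetch(url).then((res) => res.json());
}
```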
There are two other things we're exploring. Number one is different deployment
models. So certain customers have stricter requirements around sort of call it data
isolation. And we want to be able to deploy Slapdash outside of our sort of call it like
our data center. And so when you actually do outfit, let's say, Slapdash for your company, we would be offering effectively geographic selection of where you want Slapdash to run.
So once again, cut down on that latency.
And I think the other sort of thing
that we'll be reaching for, which is meaningful,
is some degree of an offline architecture as well.
So you shouldn't have to be connected to the internet
to start getting kind of immediate responses. There might be sort of an intermediate call it offline cache that will
kind of replicate over at some point. So it's deeply important to us. I still think we're getting good at it. And I think that's really what we want to cut out of the experience of working with cloud applications: just literally waiting for things to load.
And I'll mention one last, my favorite thing too,
that we do as well,
that's actually going to be featured more prominently
in the next release,
is anytime that we detect that, let's say,
you have something open in a tab already,
we'll try to recycle that tab.
So in the case of Google Docs,
that's like five to ten seconds of, call it, loading cut out. But yeah, that's a broad spread.
It's a problem that we always return to.
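A minimal sketch of tab recycling from a browser extension background script, assuming a Manifest V3 extension with the tabs permission: if the document is already open somewhere, focus that tab instead of loading the page again. Error handling and URL normalization are omitted, and this is not Slapdash's actual implementation.

```typescript
async function openOrRecycle(url: string): Promise<void> {
  // Look for an existing tab whose URL matches before opening a new one.
  const [existing] = await chrome.tabs.query({ url });
  if (existing?.id !== undefined) {
    await chrome.tabs.update(existing.id, { active: true });
    await chrome.windows.update(existing.windowId, { focused: true });
  } else {
    await chrome.tabs.create({ url });
  }
}
```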
Sure, sure.
And I know we're getting close to time here.
A couple more questions.
So in terms of search, I would love to know how you have approached the search problem. I'm coming from the perspective of using Slapdash really just for search in general for cloud stuff, but one thing in particular is searching Google Drive, and it became immediately noticeable to me when I started
using Slapdash. I'm not trying to make this a commercial, but it's just from an engineering
perspective. I'm very interested in this, but searching Google Drive files via Slapdash is a better experience. I'm not going to say the search is better necessarily,
just because I don't have enough technical knowledge to understand the mechanics of that.
And Google knows a thing or two about search, but the experience is amazing. And then you create that experience both individually, which I noticed very acutely with Drive, and across cloud tools. So how did you go about thinking
about how to build that? And from a technical perspective, is there tooling or other stuff
that goes into that? Yeah. So I think in general, when we think about search, we actually don't
think about search in particular. We think about information retrieval and thinking about how
people retrieve information in a work setting. So in other words, I think with any sort of
kind of problem solving, I think, you know,
you kind of have to discover the constraints around it.
And, you know, so for us, and I think number one,
search is kind of like one part of kind of the toolkit.
And I should also mention that like, you know,
what Google is good at is different from what,
let's say, search is in Google Drive. So what Google is good at is this
needle in a haystack search across a vast unknown information space. Whereas, you know, what Slapdash
is focusing on, and it's kind of, it's actually finding information in a roughly known information
space, which is your work environment, right? Sure. So, for example, you already have strategies there. How do people actually find information? Often there's some hot path, like a keyword or the file name, and if you get in the habit, it's a really fast way to do it. But the other thing that we try to unlock is effectively mapping to people's natural information retrieval strategies, which are
based on things like landmarks. Like for example, like I know that this lives in this folder in Google Drive, or I know that Lester was working on something. And so
being able to sort of express these very quickly across the kind of cloud applications is really
where we kind of call it get the delta. And that's sort of the root problem of information retrieval.
And as far as why we're faster on search, it's once again, it comes back to the architecture.
I think when you start from sort of like your main goal being, hey, let's make this thing
be super fast, then I think it was kind of a revelation for us too, frankly, that we're
able to be like 10 times faster than Google.
And maybe there's that little bit of emotional resonance too, right?
When you experience a fast piece of software, it generally feels better than when you're waiting.
Sure. Yeah, that's fascinating. Well, we're close on time here. So last question,
unless Kostas has anything else, but you know, it's amazing to hear how visionary and
is a very aspirational tagline.
I would love to know what kind of features
do you see sort of far down the roadmap
or maybe not even features,
but what sorts of functionality do you envision people
and teams being able to use Slapdash for that maybe are things that you've
thought about or conceiving of that, you know, sort of don't necessarily register with the
average user who's still just accessing everything through the browser? Yeah, you know, I think that's
a kind of a good question. You know, we usually keep things pretty close to heart in terms of our kind of broader plans.
But I think we do have some things on the immediate roadmap.
I think the one thing that you will find we'll be focusing on is quite important.
And we recently kind of launched the product on Hacker News.
And so we mentioned it there.
But we're interested in people extending Slapdash, right?
We are interested in Slapdash
being kind of a programmable surface. So we'll do our best job of kind of augmenting kind of
your existing workflows, building the integrations, building the commands, but we're also really
excited to open up some of the same tools that we have for people to be able to kind of express
new things on top as well. So I think that's kind of the thing
that I'm kind of most excited about
that's coming very soon.
Very cool.
I'm personally very excited about that.
That's awesome.
Kostas, any questions before we close out the conversation?
There is one question.
So Ivan, you talked about opening up the platform and actually turning the product into a platform. Is this something that you're going to do by exposing an API, or is there also some kind of open source initiative that will happen?
I think both.
I think, look, we're a team of engineers.
We rely on open source software to build Slapdash, so we want to be able to give back. And so I think we're trying to figure out the right parts of our stack to open source. I mean, I think it's just more interesting.
It's more fun to hack on open source things, frankly.
And of course, there will be an API and a platform,
kind of more traditional API that you would expect.
That's interesting.
I'm really looking forward to playing with it. So yeah, that's all from me, Eric.
You know, as we see over and over on the show, products like this just tend to be pretty complicated underneath the hood. So thanks for sharing all the inside information with us, and best of luck as you continue to build the company.
Thank you so much. Thanks for being an early customer.
Wow, that was quite interesting. What do you think, Eric? I think the discussion we had with
Ivan, it was extremely detailed.
And I think it's very fascinating to hear what these guys managed to build and how they interact with all this different data
and how obsessed they are with anything that has to do with performance
and latency in order to deliver the best possible experience.
What were the most interesting parts of the conversation for you?
Yeah, the thing that really stuck out to me, and this is both specific to Slapdash but also something that I think you see as somewhat of a pattern: I got to ask my question about Google Drive, which I was very excited about. And it really struck me after listening to the answer that if you can step back and evaluate a problem and build a solution from the ground up, as opposed to applying part of another solution or sort of
taking an existing solution and trying to retrofit it, that you can do some pretty powerful things.
And I think that that's the way that they've seemed to approach most of the things that
they've done in the product, which is just fascinating. And you can see how much thought
that they put into the things
they build just from the way that they talk about it.
Yeah, and something else that I found very interesting, and it's the second time on this show that we hear about it, is the importance of going vanilla when it comes to technology. If you listen to Ivan, this is, I think, a sign of the engineering maturity people have when they have to build very complex and important systems. So yeah, that's another thing that I found very fascinating: that even with something as state of the art as what these guys at Slapdash are building right now, and the sophistication of the product itself, it is built on some very fundamental technologies like Postgres, which has been out there for more than 20 years now.
So that's another very interesting point
of this conversation that we had.
Yeah, I'll be interested to see
if we hear about more companies
that sort of take the vanilla or the boring approach with certain parts of their stack to manage complexity, which seems to be a pattern that's emerging on the show. Well, that was a great conversation. We'll touch base with them maybe later in the year and see where they're at. And join us next time for another episode of The Data Stack Show.
Yeah, I'm really looking forward to it.