Screaming in the Cloud - Episode 13: Serverlessly Storing my Dad Jokes in a Dadabase
Episode Date: June 6, 2018

Aurora, from Amazon Web Services (AWS), is a MySQL-compatible service for complex database structures. It offers capabilities and opportunities. But with Aurora, you're putting a lot of trust in AWS to "just work" in ways not traditional to the Relational Database Service (RDS). David Torgerson, Principal DevOps Engineer at Lucidchart, is a mystery wrapped in an enigma and virtually impossible to Google. He shares Lucidchart's experience with migrating away from a traditional RDS to Aurora to free up developer time.

Some of the highlights of the show include:
- The trade-off of making someone else partially responsible for keeping your site up
- Lucidchart's overall database costs decreased 25% after switching to Aurora
- Aurora unknowns: What is an I/O in Aurora? When you write one piece of data, does it count as six I/Os?
- Multi-master Aurora is coming, for failover time and disaster recovery purposes
- Aurora drawbacks: no dedicated DevOps, increased failover time, and misleading performance claims
- Providers offer ways to simplify your business processes, but not ways to get out of using their products, due to vendor and platform lock-in
- Lucidchart is skeptical about Aurora Serverless; whether they use it will depend on performance

Links:
- Corey's architecture diagram on AWS
- Lucidchart
- Lucidchart's Data Migration to Amazon Aurora
- Preview of Amazon Aurora Multi-Master Sign Up
- This is My Architecture
- re:Invent
- DigitalOcean
Transcript
Hello, and welcome to Screaming in the Cloud, with your host, cloud economist Corey Quinn.
This weekly show features conversations with people doing interesting work in the world
of cloud, thoughtful commentary on the state of the technical world, and ridiculous titles
for which Corey refuses to apologize.
This is Screaming in the Cloud.
This week's episode of Screaming in the Cloud is generously sponsored
by DigitalOcean. I would argue that every cloud platform out there biases for different things.
Some bias for having every feature you could possibly want offered as a managed service at
varying degrees of maturity. Others bias for, hey, we heard there's some money to be made in the cloud space. Can you give us some of it?
DigitalOcean biases for neither. To me, they optimize for simplicity. I polled some friends of mine who are avid DigitalOcean supporters about why they're using it for various things,
and they all said more or less the same thing. Other offerings have a bunch of shenanigans,
root access and IP addresses.
DigitalOcean makes it all simple.
In 60 seconds, you have root access to a Linux box with an IP.
That's a direct quote, albeit with profanity about other providers taken out.
DigitalOcean also offers fixed price offerings. You always know what you're going to wind up paying this month,
so you don't wind up having a minor heart issue when the bill comes in.
Their services are also understandable without spending three months going to cloud school.
You don't have to worry about going very deep to understand what you're doing.
It's click button or make an API call and you receive a cloud resource.
They also include very understandable monitoring and alerting.
And lastly, they're not
exactly what I would call small time. Over 150,000 businesses are using them today. So go ahead and
give them a try. Visit do.co slash screaming, and they'll give you a free $100 credit to try it out.
That's do.co slash screaming. Thanks again to DigitalOcean for their support of Screaming in the Cloud.
Welcome to Screaming in the Cloud.
My name's Corey Quinn.
I'm joined this week by David Torgerson, who works for Lucidchart.
David came to my attention when he was featured on the This Is My Architecture video that AWS published relatively recently, talking entirely about Lucidchart's
migration away from traditional RDS into Aurora. Welcome to the show, David.
Thank you for having me. I'm excited to be here.
Glad you could make it. So let's start at the very beginning.
What is Aurora for those who haven't encountered this in the wild before?
Yeah. So Aurora is a MySQL-compatible, built-from-the-ground-up AWS service.
What the Aurora team did is they looked at some of the challenges that faced a traditional RDS
or a traditional database and tried to identify those challenges and make them
a push-button solution for managing a really complex database structure. Gotcha. So what was it that inspired you folks to more or less wake up one day and say,
well, our database is awesome, but you know what we'd really like to do?
Throw the whole thing away and replace it with something else. I mean, that's not something that
people tend to test on a lark of, oh, I'll try wearing dress shoes this week. No, it tends to
be something that requires a bit more thought and, I guess, concern going into it. What was it that motivated you folks to
go down that path? That's a great question. And I appreciate the way that you put that,
throw away our existing awesome database solution for something else. Really, what it comes down to
is we're a development shop, and our engineers make us a lot of happy customers just by making
awesome products. Anytime that we take away from them working on our products is opportunity lost.
At Lucidchart, our ops and DevOps teams are actually staffed from our development organization. So it becomes a secondary responsibility to their primary responsibility, which is to write code and implement awesome features and products.
Our database solution was nothing unique. We were running with a master-master MySQL
implementation, and it worked really well, but it came with the challenges of having to have
a skill set that typically is not something that most people just know.
It's not something that they're typically exposed to.
So there was a lot of overhead in maintaining our database solution.
And then beyond that, besides the skill set,
there's just the complexities of always monitoring the capacity
of the reads and of the writes
and making sure that our underlying disk architecture
was performing enough,
designing and implementing solutions that would handle zone failures or disk failures, etc.
It just took a lot of our time. So we had wanted to move away from managing databases ourselves
for quite a while. And we actually looked at quite a few managed services and none of them really met
the core requirement that we had, which was free up our developer time. So there were services out
there, even paid for services and different database architectures entirely that would have
met the needs, but it wouldn't have decreased the amount of management time that our operational
group, which was, again, comprised of our developers, it just wouldn't decrease the amount of time that they'd have to spend on it.
And that's one of the primary things that we're looking for is to keep the existing
awesome implementation that we had, but also freeing up our engineers to work on features.
Gotcha. Believe me, I am a firm believer in the idea of taking the undifferentiated heavy lifting
and throwing that to a platform provider.
But the double-edged sword of that is, to some extent,
it's an incredible amount of trust that you're putting in AWS
to manage a database for you.
In the sense where, historically,
if something breaks in a more traditional setup,
you may not be able to fix it as quickly,
but there's a certain "everyone is frantically typing and trying to get things back up" energy, so at least it looks like we're busy. Versus when there's an Aurora outage, more or less all you can do is open a support ticket and frantically refresh the status page. Spoiler: it's still going to say everything's green. So there is that trade-off in the sense of making someone else partially
responsible for keeping your site up. Was that a concern? Absolutely. So in our implementation,
we considered and we put a lot of time and effort into designing our databases so that they were
resilient to any point of failure. So we could lose an instance or EBS volumes, even lose a zone or two and still have our master-master database implementation
working. So the thought of not only giving that up, but also handing that over to another
organization or another solution to manage for us was a bit intimidating. And it was something that went heavily into the evaluation
to make sure that we were not going to be trading the management time for a degraded database
solution or something that was going to be far less stable than what we currently had. And those
were the requirements that we went in with, eyes open: we wanted to maintain that. I remember at reInvent several years ago,
the comment was made during one of the keynotes that 80% of the Fortune 500 companies had started
evaluating or were using Aurora. And all statistics are made up, but that was a pretty
impressive statistic. And it really started making us think that Aurora may be possible to handle our
production load and keep the uptime that we had grown accustomed to. While we're on the topic of,
shall we say, fear about the unknown, one of the quotes from your blog, I'm not going to read the
entire post by a landslide, but the sentence that stuck out to me as someone who focuses on the economics of cloud
is, without calculating in the engineering and opportunity costs, our overall database costs
decreased roughly 25% after switching to Aurora. Can you talk me through that? Because one of the
challenges I've seen with uptake and adoption of Aurora is that unlike other database options, it charges,
I believe, 20 cents per million IO operations. So most people don't have statistics around what
that looks like. So it's a big question mark. Most people I talk to expect their cost to increase.
You're showing the exact opposite. Yeah. And it may partly be because of the redundancy that we had implemented and wanted to keep. So in the blog post, and I'll just briefly go through this, we had two master-master nodes handling the writes of our database instances. The read capacity came from having additional replicas that were pointing at those masters.
And we would have to scale the reads based off of BI requirements, business intelligence requirements, for backup purposes, etc.
Now, each of those instances had to have the same disk throughput capabilities as the other instances, simply because our application is very write heavy.
And in fact, that's the majority of
the traffic that we're sending. So the read capacity had to also be at a minimum as big as
the write capacity just to be able to handle it. So all of our instances shared the same read/write IOPS, which meant that we were using Provisioned IOPS, and that already charged a
certain portion per IOPS. Bring money is the short version there.
Yes.
Yeah.
So the 20 to 25% savings really did not come from the underlying disk usage.
So that represented a small part of it.
The IOPS were slightly more expensive when moving to Aurora than a traditional EBS volume.
However, we were able to decrease the number of instances that we had,
which means that we were able to decrease the copies of the data significantly.
So in a traditional cluster that we were managing,
we would have five instances as part of that cluster.
And that would mean that every write would end up with multiple IOPS across five instances.
So even though Amazon charges slightly less than double,
we'll call it double per IOPS for Aurora,
our overall I/O cost decreased by roughly three times,
simply because we didn't have to pay for five copies of each write per cluster.
We only had to pay for one, but that one I/O cost twice as much.
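To put rough numbers on that comparison, here is a back-of-the-envelope sketch. The prices and volumes below are illustrative assumptions, not Lucidchart's actual figures:

```python
# Back-of-the-envelope comparison of per-I/O billing, classic cluster vs. Aurora.
# All prices and volumes below are illustrative assumptions, not real figures.
monthly_write_ops = 1_000_000_000        # logical writes the application issues per month
classic_price_per_million = 0.10         # assumed $ per million provisioned I/Os, old setup
aurora_price_per_million = 0.20          # roughly double per I/O on Aurora
copies_in_classic_cluster = 5            # five instances each persisting every write
copies_billed_by_aurora = 1              # Aurora bills the logical write once

classic_cost = (monthly_write_ops / 1e6) * classic_price_per_million * copies_in_classic_cluster
aurora_cost = (monthly_write_ops / 1e6) * aurora_price_per_million * copies_billed_by_aurora

print(f"classic cluster I/O cost: ${classic_cost:,.2f}")   # -> $500.00
print(f"aurora cluster I/O cost:  ${aurora_cost:,.2f}")    # -> $200.00, roughly 2.5x less
```

The point of the arithmetic is simply that paying twice as much per I/O can still be a net savings when you stop paying for every copy of every write.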
So our disk costs did
decrease slightly. But again, it was because we weren't comparing a single instance to Aurora.
We were comparing an entire cluster and what that cluster represented from both a read-write
capacity to an Aurora cluster of equal capacity. So again, the disk volume, though, or the disk
sizes in IOPS isn't where we saw the big savings.
The big savings came from our snapshots.
And the reason for that is when we managed MySQL ourselves, we would encrypt everything on disk. And because of how our users interact with their data, often they would write the same document over and over,
and they would only be changing a small portion of it.
So consider opening up a Lucidchart diagram and having a process block that has the word the in it.
If you change that word the to and and then save it,
that row is going to be updated at the database,
but there's only going to be a
small part of that row that's going to change, just the word "the" to "and." Now, from LUKS's perspective,
and from a security perspective, it's going to rewrite that data, but it's going to look
completely different on disk. So when we would take our snapshots, even though EBS snapshots
are incremental, it would end up snapshotting a lot of the data that was duplicated.
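To make that concrete, here is a small, self-contained sketch of the effect being described. It is not Lucidchart's tooling, and it uses a keyed hash as a stand-in for a real block cipher, but it shows why a one-word change to a document can leave almost every encrypted block different, which defeats block-level incremental snapshots:

```python
# Toy illustration: why full-disk encryption defeats block-level snapshot deduplication.
# A "disk" is modeled as fixed-size blocks; the cipher is a keyed-hash stand-in.
import hashlib
import os

BLOCK = 4096  # bytes per snapshot block

def blocks(data: bytes):
    return [data[i:i + BLOCK] for i in range(0, len(data), BLOCK)]

def encrypt_block(key: bytes, iv: bytes, block: bytes) -> bytes:
    # Stand-in for a real cipher: any change to the block or the IV changes every output byte.
    out, counter = b"", 0
    while len(out) < len(block):
        out += hashlib.sha256(key + iv + counter.to_bytes(4, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(block, out))

key = os.urandom(32)
doc_v1 = (b"process block: the " * 500).ljust(8 * BLOCK, b"\0")
doc_v2 = doc_v1.replace(b"the ", b"and ", 1)  # change one word in the document

# Plaintext: only the block containing the changed word differs.
plain_changed = sum(a != b for a, b in zip(blocks(doc_v1), blocks(doc_v2)))

# Encrypted with a fresh IV on each full rewrite: every block differs.
iv1, iv2 = os.urandom(16), os.urandom(16)
enc_v1 = [encrypt_block(key, iv1, b) for b in blocks(doc_v1)]
enc_v2 = [encrypt_block(key, iv2, b) for b in blocks(doc_v2)]
enc_changed = sum(a != b for a, b in zip(enc_v1, enc_v2))

print(f"plaintext blocks changed:  {plain_changed} of {len(blocks(doc_v1))}")
print(f"ciphertext blocks changed: {enc_changed} of {len(enc_v1)}")
```

An incremental snapshot only skips blocks that are byte-identical to the previous snapshot, so in the encrypted case it ends up copying nearly everything again.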
Now, we were, well, we are very aggressive on our snapshots, at least hourly. And we keep those for
an extended period of time. At one point, our snapshot costs represented over 50% of our bill.
Now, we've grown past that. And that's no longer the case. We have other workloads that
are much larger. But the snapshot cost is where we saw the majority of the savings.
So again, not to discount the Aurora cluster savings, because that also went into it, but the majority of the savings came from our snapshots.
Gotcha. Did you know going into it what the savings were going to look like?
Or was it one of those, well, we'll do it as a trial and
then see what bears it out? Because what you're saying makes perfect sense, but it sounds like
the sort of thing that would be very hazy and nebulous until you'd seen it work.
Yeah, and that's absolutely correct. One of the challenges that we had with moving to Aurora
is some of the unknowns that were difficult to quantify or to test. And specifically,
the one
that you're asking about, which is how much is it actually going to cost us, wasn't something that
we were able to fully identify until we had made the switch. Now, here's a couple reasons why. So
we had, by the time we decided that cost was coming into play, had already done the due
diligence to make sure that the solution would meet our needs from a legal, from a contractual, from just our own desires of maintaining a highly
available, highly manageable solution. But by the time that cost came around, we were trying to
make guesses as to how much EBS would cost or the disk storage would cost.
We were trying to make guesses about how much IOPS would cost and snapshots would cost.
And Aurora does publish that. However, some of the things that are a bit unclear is what is an
IOPS in Aurora? So we know that Aurora is comprised of at least six disks across three availability
zones. So when we save one piece of data, is that one piece of data counted six times? Or is that
one piece of data counted one time, even though it's across six disks and duplicated six times?
That was not clear to us. Going along that same logic, considering IOPS, when I write one piece
of data, is that one piece of data one IOPS? Or is it because across six disks, does that one write
count as six IOPs? So those were some of the unknowns that we had moving to Aurora. And while
we haven't officially found any documentation, what we have seen according to our AWS bill
is that we are only charged one time for the data, not six times, even though it's duplicated six
times. We're only charged one time for that IOP, not six times for each of the six disks.
That's good to know and was going to be one of my next questions. It's sometimes interesting
to understand what you're going to see before it shows up on the bill and then it more or less
presents as complete fiction. It's difficult to tie back to what you're seeing in reality.
Absolutely.
So something else that was teased at reInvent last year
is the idea of multi-master Aurora.
There's apparently a preview that you can sign up for today
to get multi-master within a region.
With multi-master multi-region,
they were committed to having it out by the end of 2018,
if you're going to believe the announcements at reInvent. Unfortunately, I think Dr. Vogels is
still giving his reInvent keynote, and it hasn't ended yet. So we don't know how that's going to
work from a time perspective. But with what it has available today for multi-master within a region and then later multi-master and multi-region for writes, is that of interest to your use case or is that something that's nice to have?
Absolutely.
So both.
It's interesting to us and it's nice to have.
Obviously, you can design a really well-architected solution using a master replica solution. The reason that we are very excited and
interested in the master-master solution that Aurora is touting is that currently the way that
a failover works in Aurora is that it has to take the cluster and put it in read-only mode so that
it can ensure that replication has caught up
and that there's no data collisions or data loss, et cetera.
So when that cluster is in read-only mode,
any write that attempts to go against that cluster is either blocked or hung
until the cluster comes out of read-only mode.
In theory, once they implement master-master across region or across availability zones,
that failover time can decrease significantly because you're not going to have to put the
cluster in a read-only mode in order to initiate a failover.
You should just simply be able to move the connections from one master node to another
master node.
Now, the reason that we're really excited about multi-region master master
is for disaster recovery purposes. So today we have replicas that are real-time replicas
pointing at our production cluster, but those replicas are in a different region.
Now, in the event of having to do an actual region failover, we would stand up an entire application stack in the second region.
The data would be up to date and we'd be able to point at it.
However, as soon as we issued that first write, so as soon as the first document save came into the new cluster in that second region,
the master cluster in our primary region would be out of sync,
meaning it would no longer be a true read replica
because there would not be an easy way to get those changes back into our primary region.
With master-master cross-region,
it opens up the possibility of being able to do a site failover,
but more importantly, to be able to fail back to the previous region without losing data.
Which makes perfect sense.
I mean, you tell a fantastic story about the capability
and how this could wind up informing application architecture.
I mean, I'm halfway to planning out an Aurora migration myself
before I stopped to realize, oh, wait, I don't have a database.
So that does become something of a challenge from my perspective.
So let's look at the other side for a second. I understand that blog posts and videos on behalf of AWS are inherently marketing exercises to a point, but your review is glowing. Were there any drawbacks to Aurora that you uncovered?
There were a few, and I didn't put these in the blog post, but I'm completely happy to talk about them.
Oh, good. It's time to dish.
And this one's a little bit silly, but it is something that we're taking seriously.
When we managed our database solutions, we were really, I don't want to say pushing the envelope, but we were able to have master-master with scalable slaves, single-digit-second failovers and detection of failures, etc.
After moving to Aurora, we don't have to maintain that.
We don't have to worry about EBS volumes or pointing a replica back to a master's replication point.
We just don't have to deal with that.
And some of that knowledge and expertise
is becoming a little lost among our developers
who traditionally were very sharp
when it came to database management.
So again, Aurora makes things so easy
that we're getting a little stagnant in our skill set.
That's one of the drawbacks.
But again, that's probably also a pro for Aurora
is you don't have to know anything to use it. Well, on that path then, is there a viable
exodus strategy if one day you decide, you know, Aurora is great, but we're going to migrate to,
I don't know, fill in the blank here. Is there a path to get there without significantly
re-architecting your entire application? Yeah. And in fact, this is something that we could do today. The only drawback is that people are
rusty on their skill set of how to do it. But Aurora truly is MySQL compatible, both being a
replica and a master. Now, the reason that that's really cool is I can stand up a MySQL instance or a Percona instance or any other fork of MySQL,
and I can take Aurora and point it as a replica to that new MySQL master, and Aurora will act
just like any other replica would. We've actually done this several times in testing, in moving data around. We've also done it with two Aurora instances
with a manually stood up MySQL instance
part of that cluster as well.
So because Aurora truly is MySQL compatible,
the replication portion of it is no different
than what you would do if you just installed MySQL
from MySQL.org.
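As a rough sketch of what that looks like in practice, the snippet below points an Aurora MySQL cluster at an external MySQL or Percona master as an ordinary replica. The endpoints, credentials, and binlog coordinates are placeholders, and while rds_set_external_master and rds_start_replication are the documented RDS/Aurora MySQL helper procedures, treat the exact invocation as an assumption and check the docs for your engine version:

```python
# Hypothetical sketch: make an Aurora MySQL cluster replicate from an external master.
# Hostnames, credentials, and binlog coordinates below are placeholders.
import pymysql

conn = pymysql.connect(
    host="my-aurora-cluster.cluster-xxxxxxxx.us-east-1.rds.amazonaws.com",
    user="admin",
    password="example-password",
    autocommit=True,
)

with conn.cursor() as cur:
    # Tell Aurora where the external master is and where in its binlog to start.
    cur.execute(
        "CALL mysql.rds_set_external_master(%s, %s, %s, %s, %s, %s, %s)",
        (
            "external-mysql.example.com",  # external MySQL/Percona master
            3306,
            "repl_user",
            "repl_password",
            "mysql-bin.000042",            # from SHOW MASTER STATUS on the external master
            120,                           # binlog position
            0,                             # 0 = no SSL, for this sketch only
        ),
    )
    # Start pulling changes; Aurora now behaves like any other MySQL replica.
    cur.execute("CALL mysql.rds_start_replication;")

conn.close()
```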
It makes migration, it makes pulling data out,
adding data to Aurora incredibly easy. Okay. That's certainly something that I think a lot of companies want to have in their back pocket. When a cloud provider comes out with an exciting
new technology that you can use to make your application tremendously simpler.
That's compelling, and people want to embrace it intrinsically. The counter-argument against that,
in my experience, has always been one where, okay, what if we don't like it anymore? Or what if we
deem it doesn't quite meet our needs? How do we get out of it? And when there isn't a viable answer
to that for any, and this is not specific to one provider, this is not me calling anyone out
without naming them. All providers do this to some extent. Vendor lock-in is a concern, but even more
so than that is platform lock-in, where there's a particular service that's being provided that
there is no viable alternative for.
And when you start seeing some of what they're teasing with being able to go multi-master across multiple regions, there isn't a terrific answer to that in many scenarios other than Aurora itself.
I mean, there are other cloud providers that offer similar global databases,
but the semantics are often very
different. The tolerances, the failure modes, and the latencies all tend to manifest in a very
different application profile. So it seems to me, at least on some level, like there's still going
to be an area of concern for companies looking at things that are sufficiently differentiated.
Is that a fair assessment?
Absolutely.
And it's something that we're taking into consideration as well.
And you bring up an excellent point that as soon as you go to master-master cross-region, that is certainly something that you can do with a traditional MySQL cluster.
But the latencies on ensuring that those writes happen across both masters across regions
is something that would be very difficult to replicate just due to the latency.
You're dealing with things that can't be improved on,
which is distance over network.
So that is something that's concerning.
However, Aurora does have the capability of exporting data with a MySQL dump.
So if you're willing to give up, well, first off, we're speaking about hypotheticals.
Amazon hasn't released anything yet, right?
So assuming that it works the way that they've announced,
meaning that you can actually have an application that guarantees
that you're not going to have replication issues,
you're not going to have to skip replication errors, then that could be a real awesome solution for maintaining disaster
recoverability. So if somebody switches to that, and then they decide they want to go back to
a traditional MySQL self-managed solution, that would be a big feature set that they would be giving up to go
back to something that is self-hosted. So I fully believe that Aurora is going to continue to be
MySQL compatible, just for the ease of being able to move data into it and being able to move out of it.
But with master-master, and with master-master cross-region especially, they're starting to enter into scenarios where that is something that not only is managed by Amazon, but it's something that you can't do on your own. Right. There's always this dream of being able to pick up your entire application and overnight deploy it to a completely different provider. And I get that that's a compelling story and it feels good,
but it just isn't realistic as soon as you deviate from building everything yourself,
more or less out of popsicle sticks.
So to that end, was there anything else you saw about Aurora that wasn't,
I guess, all it could have been or drawbacks to it that they may not advertise in giant signs at large conferences?
You know, that's an interesting question. So one of the claims that they have is that there's up
to a 5x performance increase. While we did see a performance increase, it wasn't in the sense
that all of our queries were all of a sudden faster. It was more that the capacity that each
cluster had increased significantly.
So if you're looking at a single serial set of operations, the performance probably is not going to change very much.
Some queries are going to be faster. Others are going to be slower.
The big benefit that we saw was that the capacity increased significantly.
We were able to get much higher throughput than what we had prior. So that was one thing that was, I don't want to say misleading,
but obviously it was a quote that had some qualifications around it
that weren't printed.
Okay. And have you seen anything as well about other areas?
For example, failover time, scalability, anything in that sense
that has been a bit of a regression from where you were before?
Yeah. So when we were managing our master-master clusters,
we had optimized for the idea that failures would happen.
We went into that with eyes wide open.
We fully expected to lose a database instance a week
is what we designed the application for.
Now, that didn't always happen,
but because we had that design up front,
it meant that we were able to detect and recover
from an EBS failure, a zone failure, a master instance failure, etc. within 15 seconds as a max,
but often it was less than three seconds. With Aurora, because of how failover currently works,
once an error is detected or once you initiate a failover to a second region, Aurora will put the cluster in read-only mode.
And that read-only mode will stay that way until replication is caught up and until the Aurora cluster can confirm that it's safe to actually move that master functionality to one of the other replicas in the cluster. Now, once that's moved,
Aurora updates a DNS record
to point to that new master node.
That DNS record is replicated via Route 53.
Now, making an API call to Route 53
is near instant on getting an updated record.
However, making a DNS call to Route 53,
you're bound by the TTL for that record.
And by default, Aurora TTLs are 60 seconds.
So when moving to Aurora, we went from 3 to 15 seconds of downtime during a failover or a migration up to possibly a minute and a half if you include the read-only time as well as the database failover. Now,
for one node, if you're doing database maintenance, say, monthly, that's a minute and a half monthly.
Most of us, even at a 99.99% plus, can handle a minute and a half of scheduled maintenance.
However, if you have many clusters, that minute and a half starts to add up pretty significantly. So one of the things that we
did to get around that was we actually make a caching DNS call by pointing at Route 53 APIs
directly to pull down the updated IP addresses much more frequently than what we would get from
TTLs. After having done that, our failover times with Aurora are typically in the 30 second range.
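A minimal sketch of that kind of lookup, assuming boto3 and a hypothetical hosted zone and record name (this is the general shape of the approach described, not Lucidchart's actual code):

```python
# Poll the Route 53 API for the current value of the cluster endpoint record,
# instead of trusting a cached DNS answer that is bound by the record's TTL.
# The hosted zone ID, record name, and polling interval are hypothetical.
import time
import boto3

route53 = boto3.client("route53")

HOSTED_ZONE_ID = "Z0EXAMPLE"                      # hypothetical hosted zone
RECORD_NAME = "db.internal.example.com."          # CNAME the applications connect to

def current_cluster_target() -> str:
    resp = route53.list_resource_record_sets(
        HostedZoneId=HOSTED_ZONE_ID,
        StartRecordName=RECORD_NAME,
        StartRecordType="CNAME",
        MaxItems="1",
    )
    record = resp["ResourceRecordSets"][0]
    return record["ResourceRecords"][0]["Value"]   # e.g. the current Aurora writer endpoint

if __name__ == "__main__":
    last = None
    while True:
        target = current_cluster_target()
        if target != last:
            print(f"cluster endpoint now points at {target}")
            last = target                          # refresh local connection targets here
        time.sleep(5)                              # poll well below the 60-second DNS TTL
```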
Now, 30 seconds is still significantly more than the three to 15 seconds that we had before. And also those
30 seconds add up pretty significantly when we start talking about dozens of clusters.
But it's something that the trade-offs from the stability that we get the other times is significantly better than what we
were able to get when managing ourselves, simply because there are far less failure scenarios.
We don't have to worry about an EBS failure or an EBS volume that is all of a sudden slow for some
reason, or a zone failure, etc. All of that is built into the cluster and it automatically
recovers and we don't
even know that it's happened. That sounds pretty awesome. One other big Aurora announcement that
came out of reInvent last year was Aurora serverless, which to me sounds like the punchline
of a joke, as in I'm going to store my dad jokes in a dadabase, serverlessly. But it feels like a toy. It doesn't
strike me as the sort of thing that companies are seriously evaluating. Because when they have
large scale data, it usually tends to be needed in less time than it would take to spin up or
spin down an interface to that data, as you see with serverless technologies. Is that something
that's at all on your radar? Or are you also viewing that as something of a toy?
You know, it really comes down to performance.
So I have trouble imagining a scenario in which serverless architecture would actually scale both from a data and from a compute perspective without seriously degrading the performance of a cluster. So one of the reasons that MySQL and other databases are so performant
is because of the index that's stored in memory
so that it doesn't have to keep going to disk,
which is significantly slower to look up where record sets are
or to retrieve record sets,
depending on what it is that you're querying for.
In a serverless architecture, in theory,
you just have a bunch of worker nodes that have capacity that are receiving a query and then going and retrieving that data set.
But I don't see how that would actually be performant without having an index that's pre-warmed up in memory across that cluster.
Now, let's say that Amazon delivers on that and that there's not a performance degradation for using a serverless architecture, that becomes really interesting simply because now I don't have to have static steps in my read and write capacity.
That's something that can simply scale up and scale down dynamically based off of time of day
or based off of some business intelligence query, a really terrible query that wants to look up a lot of data.
So it is interesting,
and I'm skeptical that they're going to be able to keep the performance. But if they do keep the
performance, and they truly come out with a MySQL compatible serverless implementation, that's going
to be a huge game changer. One other thing on that is while I'm skeptical of the performance,
there are certain things that we currently persist in S3, not
because of the data size, just because of what they are. Those types of things become really viable in a serverless architecture. If you're not having to handle joins or other things that
require a warmed up index, Aurora, especially with master-master cross-region, opens up a lot of possibilities that would be difficult for a traditional file share like S3 or EBS or NFS, etc., simply because you have the replication.
So again, in disaster recovery, you not only want to be able to fail over to the secondary region, you also want to be able to fail back to the primary region, which is likely where you have your reserved instances and where you have other things already built out that are really, really nice.
So the serverless architecture, very interesting.
I'm skeptical that it's going to be performant.
Yeah, that's sort of where I land on that, though.
Obviously, I should qualify that I am not a database or data store person.
Usually I run away from things that I can't just represent as code and make come back.
This is the sort of thing that leaves a mark in the general sense.
Right.
So thank you for taking the time to speak with me.
Is there anything else you'd like to talk about
or that people should take a look at
that's associated with you in any sense?
Absolutely.
So I would love for everybody to check out Lucidchart.
Obviously, what we've been talking about
is the solution that's built on Aurora.
But the reason that we have selected Aurora
is because we have an awesome
product. We're really happy about it, really excited about it for building diagrams in the
cloud. Yeah, I can say that I'm a very happy customer of Lucidchart myself, which is not
incidentally how I got connected with you. But I build horrifying architecture diagrams of ridiculous
things that I shove into AWS. In fact, I'll include one in the show notes just to terrify people at home.
But by and large,
I have no ability
to represent things graphically.
And Lucidchart just makes it fantastically easy.
I'm starting to use that
for most of the architecture diagrams
that I need to represent
in something other than finger paint and crayon.
Well, thank you very much.
Of course. Thank you very much for joining me.
This is Screaming in the Cloud, and I'm Corey Quinn.