Embedded - 242: The Cilantro of Robots
Episode Date: April 20, 2018
Christine Sunu (@christinesunu) spoke with us about the feelings we get from robots. For more information about emotive design, check out Christine's website: christinesunu.com. From there you can find hackpretty.com, some of her talks (including the TED talk with the Fur Worm), and links to her projects (such as Starfish Cat and a Cartoon Guide to the Internet of Things). You can find more of her writing and videos on BuzzFeed and The Verge. You can also hire her product development company Flash Bang.
Embedded 142: New and Improved Appendages is where Sarah Petkus offers to let her robot lick us.
Keepon Robot (or on Wikipedia)
Books we talked about:
Accelerando by Charles Stross
Alone Together: Why We Expect More from Technology and Less from Each Other by Sherry Turkle (MIT site)
Reclaiming Conversation: The Power of Talk in a Digital Age by Sherry Turkle (MIT site)
Other Minds: The Octopus, the Sea, and the Deep Origins of Consciousness by Peter Godfrey-Smith (Note: Elecia also wrote a whole octopus annotated bibliography in a recent post)
Transcript
Welcome to Embedded. I am Elecia White alongside Christopher White. Our guest this week is
Christine Sunu. We are going to talk about robots and their feelings, I think.
Hi, Christine. Welcome to the show.
Hey, lovely to be here.
Could you tell us a bit about yourself?
Absolutely. So I currently do rapid prototyping and product for hardware and emotive technology
at Flashbang Product Development. I also post tutorials on electronics and design at
hackpretty.com. I previously ran the developer community at Particle, and I worked on these
weird open source robots at BuzzFeed's Open Lab for Journalism, Technology, and the Arts. So most of my work focuses on emotive
interactivity in physical and digital objects. That means I work on getting people to feel
living emotions about non-alive things, and that might be an app or a hardware product or a robot
pet. So in other words, I work on emotive tech, technology that accounts for our humanity and makes us feel things. Okay, that really does lead to so many questions.
Before we get into how and why and what and again, how and why, we want to do lightning round where
we ask you short questions and we want short answers. And if we are behaving ourselves,
we won't ask you how and why there. We'll see how that goes.
Cool. So, favorite skeletal structure?
Oh, I like the bones in the wrist. It's a lot more bones than you would expect, and they all
have very unique names.
Hacking, making, tinkering, engineering, teaching, or programming? That got longer than before.
Oh my god, you need all of those. Do you mean, like, what I like to do? It depends on my mood.
I feel like, you know, you need to tinker to hack, you need to make to do enclosures, you need to
engineer to make it really good. There's so many things. And then of course, teaching is just something that is fun and everybody should try.
Did you have a Tamagotchi?
Oh, no, I didn't. I wasn't allowed to have a Tamagotchi. I did later as an adult,
I thought it would be, I had trouble finding my keys and I thought if they beeped every once in
a while, it would be easier. It was actually really sad. I then dropped the Tamagotchi
on the sidewalk and killed it permanently, and felt emotions. It was really weird.
It was like it actually permanently died.
Well, the follow-up question is no longer operative then.
What's your favorite fictional ro— So how long did it live?
Okay, so that Tamagotchi lived continuously for,
I think I probably had it on that keychain for like at least four months.
I'm like a little clumsy.
I drop my keys quite frequently.
But yeah, I think it lived a pretty good amount of time.
I mean, it's not really a good excuse because I'm an adult.
So, you know, technically, if I was putting my mind to it,
I should be able to sustain the robotic creature, you know, continuously. But, you know.
What's your favorite fictional robot, then?
Oh, have you read Accelerando? Yeah, by Charles Stross. Yep. Yeah, so I like Aineko. I didn't read it. You remember, it's like
this, so he has a robot cat, and he updates the robot cat continuously, and then
there's sort of this whole complicated subplot around the robot cat. But yeah, I
also just really like that book. Maybe I'll reread it.
What about a non-fictional favorite robot?
Let me think.
God, there's so many.
I do really like the Keepon.
The Keepon just makes me laugh all the time.
It's hard for me to not laugh every time I see it.
It's, have you seen this?
It's just like,
it looks like two, um, kind of yellowy balls that sit one on top of the other and it has a little face and it was designed to, uh, interact with kids and to like help kids, I think, uh, interact
with music more emotionally. And so you have the robot that just basically dances. Its only purpose
is it dances. Um, it's extremely joyful. It's really cute. I think it's well designed. I think it's a good simple design. And yeah, sorry, that was a longer answer than I think I was supposed to get.
That's fine. Do you want to go next, Christopher?
Yeah, I was looking at the Keepon.
Right? You should watch the video where they take it like through the city.
It's great. What's a tip you think everyone should know? Oh, understand the pop culture
context for the product you're building. When this comes to robots, it often means understanding how
whoever you're designing for, how that audience frequently perceives robots.
You know, I'm going to stop with lightning round questions because I want to dig into that.
So often engineers are superior about their rising above pop culture.
And you're saying that that is important.
Well, I think that, to me, at the end of the day,
it comes down to usability, right? Like, you want somebody to use your
product. So if you want the most people possible to use and love and accept the thing that you
built, then you have to understand what's driving them and what they are likely to enjoy. And if
you, um, and you have to understand it so that you don't
accidentally put barriers that make them not want to interact with your thing.
So for a lot of the sociable robot stuff, a lot of that is understanding the context
in which we perceive robots largely, like in which the media portrays robots, in which
what does a friendly robot look like and what does an unfriendly robot look like? And I think that also differs
very greatly by what culture you're in and what part of the world you're in. So it can get kind
of complicated. Yeah, because, just for myself, if somebody made a robot look like R2-D2, that
would automatically be friendly. Check all the buttons.
If somebody made a robot that was like, you know, looked like Hal with the red light,
I would, you know, immediately be intimidated.
But there's a reason why the Amazon Echo has a, you know, blue circle instead of a red circle.
You don't want it to just look like a giant red eyeball staring at you, right? Like,
and one of the, one of the cool things is that frequently when people have created robots
in movies, in television, in comics, they have created them with a lot of things in mind to code
them to be friendly or unfriendly. And as designers, we can work with that. Not only has it
already been imprinted on the audience that we're frequently trying to get to use our products, but
it also has already accounted for some of the problems that we're trying
to solve. How do you make a robot look more friendly? Well, in the case of R2-D2,
you make him round, you know, you make him babble like a baby. You don't give him any like large
claws or like you might put one eye, but it's not clearly like a scary eye, you know? So there's
been a lot of work that people do on individual robots and movies to make them appealing or unappealing. Okay. So before we dig in further, I want to talk to you about some of
your projects. So we have some concrete examples to talk about. Do you have any favorite projects?
Oh, I have a soft spot for the Starfish Cat.
Okay. So this was on BuzzFeed, and it was fuzzy on top and looked sort of starfishy,
and a little frightening from there. I'm sorry, I know you want it to be emotive, but it was a
little frightening.
Oh, that's on purpose. So what I called the Starfish Cat was the Starfish Cat emotional discomfort experiment. So when I started at
BuzzFeed, I had a fellowship, um, at BuzzFeed through GE. Um, they, you know, paid me to be
there for a year to hang out and build these internet connected, um, devices. And so I don't
know if they thought I was going to build like a connected microwave or something, but I didn't. I built these different interfaces for connectivity
and different interfaces for tech to kind of experiment with how people saw and reacted to
exteriors when they integrated motion, things that felt automatic. So the
starfish cat was an experiment in when you take different emotive cues, um, and mix them in a way
that might make people very uncomfortable. I frequently see products that do this by accident.
So you will see, for example, I think my favorite was they made a Furby in, I think, 2011 or something.
It was one of the newer ones that they tried to make.
And it had these terrifying glowing eyes because they used these, I think, OLED screens for the eyes.
And it makes sense from a parts perspective.
They wanted the eyes to animate.
But, I mean, it had the effect of when you turn the lights out, you just have this thing staring at you in the dark with these terrifying
glowing eyes, you know, which mixes cues with like this thing that is otherwise quite cute and fuzzy.
So I wanted to kind of, uh, examine what happens when you push that further and also kind of have
this example of like, look, this is what happens when you mix cues. This is the way you feel. Like that's also the way you feel when you run into these other things. Um, so, uh, I ended up making
the starfish cat. Uh, the idea was to have the top be this adorable, fluffy, sweet looking kitty
shape. So it has like this head and these like cute little ears that are through, I think, soft
PLA. So they're sort of flexy and you can touch them. And he has like closed eyes, looks very serene. When you walk
by it, it starts to meow pitifully and knead these little claws. It's strange because then when you
pick it up, you realize that it doesn't just have two claws in the front. It has five claws. They
go all the way around five points on this terrifying weird starfish bottom. That's
rubbery. Um, there are five IR thermometers on the bottom of it that seek heat. So when, uh,
when it's closer to heat, it kneads the two little claws that are the closest to the warmest spot.
If you actually pick it up and hold it to yourself, which people do, um, it will like
knead in the direction of your
body heat, which means that it's trying to basically get to like the warmest spot, which
frequently is bare skin. When it senses that most of the sensors are on bare skin, it starts to
suckle you with a weird pneumatic motor that I got from China. So it's, it's like maxing out on a lot
of like the weirdness and discomfort and ambiguous signals.
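The heat-seeking behavior she describes, five IR thermometers in a ring, knead the two claws nearest the warmest reading, and start the suckle motor once most sensors see skin-level heat, might be sketched like this. This is a hypothetical reconstruction based only on the description; the sensor layout, threshold, and function names are invented for illustration.

```python
# Hypothetical sketch of the Starfish Cat's heat-seeking logic.
# The skin-temperature threshold and claw arrangement are assumptions.

SKIN_TEMP_C = 30.0  # assumed reading that counts as "bare skin"

def choose_claws(readings):
    """Given five IR thermometer readings (one per claw, in a ring),
    return the indices of the two claws nearest the warmest spot."""
    warmest = max(range(5), key=lambda i: readings[i])
    # pick whichever ring neighbor of the warmest claw is itself warmer
    neighbor = max((warmest - 1) % 5, (warmest + 1) % 5,
                   key=lambda i: readings[i])
    return sorted([warmest, neighbor])

def should_suckle(readings):
    """Trigger the pneumatic motor once most sensors see skin-level heat."""
    return sum(r >= SKIN_TEMP_C for r in readings) >= 3

# Example: warmth concentrated near claw 2, not yet enough for suckling
readings = [22.0, 26.0, 33.0, 31.0, 23.0]
```

With these example readings, `choose_claws(readings)` picks claws `[2, 3]`, while `should_suckle(readings)` stays `False` because only two sensors exceed the assumed skin threshold.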
So the ambiguous signal of a thing that has its mouth on you, you know, people think it's really cute when a dog licks them.
But they think it's really terrifying if, I don't know, a lion that looks hungry and is salivating licks them.
So it's, you know, very much mixing cues.
How is this the second show where we talk about robots licking you? Do you know Sarah Petkus? Because you guys would be good friends.
Oh, I don't. I would love to talk to that person. She's making robots that can taste people. But you know, there's a lot of things that dogs have, like sensors that dogs have that are just so difficult to build into a dog robot, you know, like the olfactory abilities
that they have are just so far beyond what we would normally be able to do. And then on top of
that, you know, there's the opposite thing that also happens, or we build things into robots that
the dogs don't actually have. So apparently dogs are really bad with spatial memory. Um,
if you like put them in, you know, an eight arm radial maze,
they can't figure it out.
It's like weird stuff like that where I'm like, okay, so it sounds like,
you know, some of the things that we build into robot dogs are overkill,
which is something that I already, it's an opinion I already hold, but, um, you know, it's, it's overkill even if you compared
it to like a real dog. Okay. Moving on from incredibly creepy starfish cat. I think it's
cute. Yeah. That's the point. The comments on that Buzzfeed article were my favorite thing
that have ever happened. It was a mix of people being like, oh, it's cute. Can I adopt it? And then people being like, this is the creepiest
thing I've ever seen. And a couple people saying, is this real? It's like the cilantro of robots.
Yes. Okay. Tell me about fur worm. Oh, the fur worm. People keep asking me if I named the fur worm and I don't think I ever did. So, I was going to do this talk that was about the minimums that you need for the perception of life. So, how do you make somebody think that something's alive while doing as little work as possible?
You know, it doesn't.
That's the minimum, minimum alive.
Yeah. So it's like a heartbeat. I remember working on toys and we would, we would talk
about putting heartbeats into the infant toys because babies really like a constant.
Sure.
Absolutely. And I think for really young kids, that probably works super well.
One of the things, the thesis of this talk, was really that if you make it appear that it needs stuff, people will automatically fill in the blanks and say that's probably alive, because a robot doesn't need things, but a living thing does. So if you give it enough consistency in its reactions to certain stimuli,
but also vary it with enough randomness that some of its actions feel emotionally random,
then you actually end up with an interface through very little work, very little code,
where people just go, oh my god, that must be alive. So I built the fur worm as an example of
that. It was just this, it's three servos, really simple.
And when you squish it, it squirms. That's it. If you squish it for longer, it squirms harder.
The degree of its squirming gets more, but it's not consistently more. It's using like
random Perlin noise to vary the reaction. So it ended up being realistic enough that as I was
building it, it made me kind of
cringy. Like I would squeeze it and then I would start to feel bad. And so I built this for the
talk. I held it on stage the whole time. At intermission before I did the talk, I walked
around and I showed people the worm. And I started to feel really bad because they were saying things
like, oh my God, it reminds me of my dog. Like it's really cute. Um, and I didn't think it
would elicit that strong of a reaction. Uh, cause at the end of the talk, the plan, which I did do
was to break the worm in half, to show that you really actually were bonded to this thing.
I just explained to you exactly how it works, but it doesn't matter, because your instinct as a human for empathy
is so much stronger than your logical mind. So, um, yeah, I broke it in half on stage and it's
funny because in the talk, there's this applause at the end that I don't remember. I remember
walking off the stage to like stunned silence. Um, there was a woman who came up to me afterward
who, um, introduced herself to me and said,
like, you know, I'm a cognitive scientist and I knew everything that you were doing.
And I still cried when you broke the worm in half.
So, um, this technology is really powerful.
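The fur worm's behavior, squirm intensity that grows with squish duration but is modulated by noise so it never feels mechanically consistent, might look roughly like this. This is a hypothetical sketch, not her actual firmware; a smoothed random walk stands in for the Perlin noise she mentions, and the servo mapping is invented.

```python
# Hypothetical sketch of the fur worm's squirm logic: three servos,
# squish duration drives intensity, noise keeps the motion organic.
# A cheap 1-D value-noise class stands in for real Perlin noise.

import math
import random

class SmoothNoise:
    """Interpolate between random values at integer lattice points."""
    def __init__(self, seed=0):
        rng = random.Random(seed)
        self.lattice = [rng.uniform(-1.0, 1.0) for _ in range(256)]

    def at(self, t):
        i = int(t) % 255
        frac = t - int(t)
        s = frac * frac * (3 - 2 * frac)  # smoothstep interpolation
        return self.lattice[i] * (1 - s) + self.lattice[i + 1] * s

def squirm_angle(squish_seconds, t, noise, center=90, max_swing=60):
    """Servo angle: longer squishes squirm harder, noise varies it."""
    intensity = min(1.0, squish_seconds / 3.0)   # saturate after 3 s
    wobble = 0.7 + 0.3 * noise.at(t * 2.0)       # never fully predictable
    return center + max_swing * intensity * wobble * math.sin(t * 8.0)
```

The key design point from the talk survives even in this toy version: the response is consistent enough to read as a reaction, but noisy enough to read as emotion.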
I mean, you're talking about, um, you know, a couple of lines of code that are quite simple
essentially, but which gives people such a deep reaction and such a deep instinct
that the thing you're holding might be alive and that it doesn't, it has to do with, you know,
being human. I totally understand this because you put googly eyes on something and suddenly
it is friendly. And even just talking about this robot, which I knew was a robot,
you said you broke it in half and I'm ready to cry.
It's, it's rough. I was, I almost didn't do it. It was like I had planned the talk for weeks and
I didn't think that, I didn't know that people were going to react that strongly. So I, I almost
didn't do it when I was standing up there.
Afterward, I kind of backpedaled because I finished the talk, but then I was still up on the stage.
And I was like, it's okay.
It's not dead.
I mean, it was never alive, but it's also not dead.
I'm not sure you're making it better there.
Doesn't this suggest that, I don't know where I'm headed with this, but it just scares me a little bit that we're doing something, you're doing what that object wants you to do,
what that thing you perceive as a living creature wants you to do, what it needs to survive. But what you're actually doing is whatever the person who created it wants you to do,
whatever they think you should do, and whatever they predicted that you will probably do once you interact with it.
So it's kind of ushering in this interesting dark new era of emotive technology where hypothetically somebody could use it for great evil or, you know, simply carelessly, like even if you weren't doing it, even if you were using
emotive tech and you weren't trying to harm people, you could still very easily do so by
not understanding the full implications of the thing you were building or trying to build
something very well for purpose A and then it turns out to be disastrous for purpose B. I had talked about in that talk, I think, the hypothetical situation in which somebody
builds a friendly factory robot, a robot that is friendly to its co-workers.
So they're more likely to accept it.
It increases team bonding and camaraderie.
But what do you do when it malfunctions on the line
and you need to pull the plug? In the, you know, 10 seconds, two seconds, 500 milliseconds that it
takes you to realign your perspective and say, no, it's a robot. It's okay for me to pull the plug.
is somebody who is actually alive going to be hurt? So there's a lot of interesting things that come
up here because we are really affected by the perception that something might be alive.
And we're strongly affected by it at very small minimums. One of my favorite Reddit threads of
all time is about people zoomorphizing objects and people start talking about their Roombas.
And people bring up the fact that the Roomba is, you know, it doesn't really seem alive in any way,
but people still react to it in a way that's like, oh, I need to help it when it gets stuck.
People will pay more money to repair a Roomba that they own, even though it costs less to buy a new one.
My favorite story was a guy was saying that he used to jokingly to his friends refer to the
Roomba as his son. And you know, like, so when it would mess up, the person would say like, you know,
oh, I, I'm so disappointed in him. But then the Roomba broke and the person felt so bad that, you know,
he or she like paid all of this money to have the robot repaired. Because it was just, you know,
even as a joke, you make a joke about your relationship with the robot. You say, I'm
playing this game. But not all of you is playing. Some part of you really starts to act towards the object as though
it's alive as though you have an existing relationship to it as though you are its
caretaker and it relies on you so something you said uh i just want to talk about for a second
you said the minimal set of things to kind of relate to. Do you think,
it seems like it's easier to make an artificial construct
that's relatable and lovable by a large group of people
by choosing the right things, the right attributes.
It's easier to make that something to bond with than perhaps other people.
This is one of the huge dangers, you know, that you might be able to create interfaces where
people would prefer the interface to another human. I mean, this is something that we already
see with cell phones. I guess cell phone is kind
of like the, you know, older millennial term for it. This is what we already see with phones.
You know, it's something where if you get enough stimulus from, you know, people have started
talking about this more that this amount of stimulus you get from your phone and that the
predictability of conversation is not only more interesting to a lot of people at some kind of weird basic level than another person, but it also changes
the way that you do interactions in person with real humans. So yeah, I mean, the, you know,
we build our tools and our tools shape us. This is a real strange thing that's happening now.
Now that we have the ability to really create interfaces
that are more powerful, more emotionally powerful, and wherein we might be able to automate one side
of the conversation. And yet, we're going to have to remember that you can't, if you have to choose between the cute fuzzy robot and the guy you don't like, one of them is alive and the other isn't.
Absolutely. It's a very strange field. I am a large advocate for, you know, for emotive tech, I think that it would be better for
people were we to build more interfaces that, rather than connecting us to an automated space,
connected us to ourselves or each other. So, you know, the reaction of the object,
it's not, you're not grounding the object in "it is a robot in and of itself." You know, you're grounding the object
as "it's reacting to my actions." It's something that, you know, I work with and that I work on.
It's, um, you know, this is something that has to do with, um, that's very clearly having to do with
me, um, versus, you know, it's reacting fully on its own and it's an automated secondary being.
Uh, so one example, a very simplified example of this,
because I realized that that starts to sound very confusing,
is the robotic fridge cat that I built
when I was doing everyday bots on the verge.
So it's this fridge magnet with eyebrows.
It's for people who frequently are home or work from home. And
at the times when you want to eat, it makes a hungry, sad face, you know, its eyebrows go up
into sad position. Um, and then at the times that you don't really want to be eating, because
frequently people say I'm snacking too much or like, I don't want to, you know, it makes this
more angry face. So the more times that you open the fridge, the angrier the face gets. Now, it's very, and this is a really simplified
interface, right? But it just gets you more. It's this more glanceable and emotionally
understandable thing. And what you've done basically is you've doubled your conscience,
right? You've created an external version of you that reacts consistently to a stimulus that you
were already like, I don't want to do that as much. Because this robot is a reflection of you, and you know that.
I would argue that it's easier for you to distance yourself from it having its own personhood. Um,
it's, you know, like it, some of these ideas are still going to be quite difficult in that,
you know, every time you play the game of it's, it's an external being,
like it more and more becomes that. But yeah, so that that sort of thing, you know, we can build
these interfaces that help us that connect us to ourselves and our goals. And that can be
potentially extremely useful, you know? Yeah, but this is I mean, the the fridge cat was pretty simple it was a couple motors
to control the eyebrows and a fridge door open sensor and then a little controller. I mean, it
wasn't super complicated tech, right? It was not an intentionally
emotionally psychoactive interface. It was, you know,
it was the simplest thing possible so that you could build it if you wanted to. And you could,
you know, start using some of the emotive tech principles to help you in your goals.
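The fridge cat's logic, hungry and sad at meal times, progressively angrier with off-schedule door openings, could be sketched in a few lines. This is a hypothetical illustration; the meal-hour windows and expression names are invented, not taken from the actual build.

```python
# Hypothetical sketch of the fridge cat's eyebrow logic. At meal times
# the magnet looks hungry and sad; outside them, each extra door
# opening makes the face angrier. Hour windows are assumptions.

MEAL_HOURS = {7, 8, 12, 13, 18, 19}  # assumed breakfast/lunch/dinner

def eyebrow_expression(hour, opens_this_hour):
    """Map the current hour and door-open count to an eyebrow pose."""
    if hour in MEAL_HOURS:
        return "sad_hungry"          # eyebrows up: please eat something
    if opens_this_hour == 0:
        return "neutral"
    # off-schedule snacking: the more opens, the angrier the face
    return "angry" if opens_this_hour < 3 else "furious"
```

So, under these assumed windows, `eyebrow_expression(12, 0)` gives `"sad_hungry"` at lunchtime, while `eyebrow_expression(15, 4)` gives `"furious"` for a fourth mid-afternoon snack.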
Um, yeah, I think there's a really good chance that you will increasingly see people building,
sort of differently contexted and more aggressive emotional technology.
And that's going to be a really interesting world.
And it's different than gamification.
Gamification is more when you create an environment that makes people do things to win. Like my Fitbit tells me I only have 250 more steps
before I meet my daily goal.
Or even Pokemon was gamification.
It got you to walk by letting you find creatures.
But emotive tech is more taking your Tamagotchi for a walk or taking your Fitbit for a walk.
It doesn't have to be one or the other, but they are different, right?
Yes. I mean, I think that it's a similar category and in some ways you can track the way in which
we increasingly perceive gamification as a crazy powerful tool that can be good for people or bad
for people. I think that's probably what you'll see
with emotive tech as well. Because, you know, sometimes gamification helps people accomplish
their goals. It helps them, you know, do self-improvement. And sometimes it causes you
to spend way too much money in, you know, a game with in-app purchases. So you're likely to see a very similar line, I think, for emotive technology.
But instead of our desire to win, it will be manipulating our other emotions.
Yes. And one of the things that's a big danger, one of the things that's, uh, potentially dangerous about this too,
is that, um, you know, with gamification, we can say, we can look at, um, an interface
and say, oh, I know what trick you're using.
You know, you're using this, um, gamification trick, like you're using a points trick, right?
Like, so I, you're trying to get me riled up through like feeling like this is a game.
So I'm not going to play your game. You know, we can, we can do that. You can opt out in some ways.
And some people will have more trouble opting out than others. But you know,
I think most people would say like, okay, I can, I can like reasonably opt out of that versus it's
much harder to opt out of an actual emotion that you're feeling. It's harder to opt out of
the natural instinct for human empathy. And like, what does it mean if we were able to opt out of that? Like,
what does that even mean? You know, like I built a robot where it struggles and then dies and you
feel something about it. Like if you, if somebody were using this as a tool to manipulate you,
you know, how easy is it to opt out of that? You'd basically be saying,
like, well, when I see something struggle and die, I'm just not going to feel anything about
it. And I don't know that we want to really go to that place. Maybe you've made a sociopath
detector in that case. It's like, you know, and it's hard. I mean, you know, again, we, you know,
we shape our tools and our tools shape us, you know, if we start using this really frequently,
and if we use it carelessly,
then you force people to build an immunity. And there's a lot of questions about how that
guides society. And this is some of the things that I think about. But, you know, I'm not,
it's not technically new either. Like basically you've been dealing with this sort of thing ever
since we were able to build these automated interfaces.
There's a lot of really beautiful research that's been done by Sherry Turkle about this.
She has some stories that can just be really devastating.
She talks a lot about the effect of technology on children and the effect of questionably alive technology too. So, you know, you give a kid a robot and up to a
certain age, you know, they're deliberating about whether or not it's alive, and the
justifications for whether or not it's alive change. So, you know, really young kid might say
like it's alive because it has eyes. And an older kid might say like, oh, it's alive because it tries
to cheat. You know, like there's interesting
like developmental markers that you can see. But it's for all the kids, it's, you know, it causes
a remarkable emotional response. So, you know, the robot as an interface does not behave,
doesn't play by the same rules as a human. And as adults, we can look at that and say, okay, it doesn't play by the same rules as a human. So that's okay because it's a robot.
But for a kid where the ambiguity about whether it's living or not living is high,
it doesn't really work that way. So there's one story where they had a kid who came in to, um, to the MIT lab to interact with Kismet and the, uh, which was,
you know, this robot with a very expressive face and the robot was malfunctioning and the kid just
was horribly depressed by the robot malfunctioning. And it was this kind of, you know,
the reaction of the child was just so extreme, you know, the kid felt ignored. The kid
felt that this thing that she had been prepared for, that she was really excited about, was
not going to happen because the robot hated her. You know, it generated all of these feelings about
her identity and herself in relation to an interface that didn't go off by accident and, you know, was highly simplistic. So,
I don't know. I mean, it's something that I think is worth being aware of. I don't
think that it means that we shouldn't build robots. I don't think it means we shouldn't
build emotive robots. I just think that it's important to be aware of the potential effects on people and try to build them responsibly.
Yes, yes, I agree so much.
And since I often fall into anthropomorphizing everything, please be careful with your emotive robots.
And don't break them in front of me. My God. Moving into the technology pieces, how did you learn the things you needed to learn to make these?
And I know your background is not traditional EECS.
So maybe that's part of that question.
So I like to think of my background as being in people, like everything that I've done,
it's been how do people perceive different things and how do they work with those and
how do you communicate with them? How do people communicate with each other? How can I effectively
communicate with you? And I feel like the coding and the robotic stuff got layered on top of that, which is interesting.
I tried to learn C when I was like 13 or 14, and I quickly lost interest.
I think because there wasn't enough example-based things.
I didn't have any ideas of things that I could build.
My dad said, why don't you build an algorithm, this is really funny. He said, oh, I want to be able to put in a number and then, you know, I want it to
ask me a couple of questions and then be able to mathematically figure out and guess what number
it was. So, you know, I said okay, and an hour later or something I came back to him. He put in the number, and it asked three questions, and then said, like, your number was five.
And he's like, wow, that's incredible. And of course I was just passing the input back to the output and asking three random questions. So I rapidly lost interest, because I'm like, well, it seems like there's a lot of shortcuts you can do and I'm not sure what I'm supposed to do with this.
Later, when I was in college, I started using programming as a way to do better research, because I was doing these various kinds of scientific research.
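The version of the game her dad actually asked for, guessing a number from a few yes/no questions, is a binary search: each question halves the remaining range, so three questions can distinguish eight numbers. This is a hypothetical sketch, not anything from the episode; the function names and the 1-to-8 range are mine.

```python
def guess_number(lo, hi, is_at_most):
    """Guess a secret number in [lo, hi] by binary search.

    is_at_most(x) answers the question "is your number <= x?".
    Each question halves the range, so about log2(hi - lo + 1)
    questions are enough.
    """
    questions = 0
    while lo < hi:
        mid = (lo + hi) // 2
        questions += 1
        if is_at_most(mid):
            hi = mid
        else:
            lo = mid + 1
    return lo, questions

# Simulate the game with a secret number of 5 in the range 1..8:
# three yes/no questions suffice, since 2**3 = 8.
secret = 5
number, asked = guess_number(1, 8, lambda x: secret <= x)
```

Her shortcut version, of course, skips all of this and just echoes the input back, which is exactly why it worked.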
During college, I was working in a biophysics lab where I had to do a bunch of image processing of these paramecium moving in dishes through MATLAB.
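The kind of image processing she describes, following paramecia moving in dishes, boils down to segmenting each frame and tracking blob centroids over time. A toy pure-Python illustration of that core step, not her actual MATLAB code:

```python
def centroid_of_bright_blob(frame, threshold=128):
    """Return the (row, col) centroid of pixels brighter than threshold.

    frame is a 2D list of grayscale values. Running this on every
    frame of a video gives a track of the organism's position.
    """
    hits = [(r, c) for r, row in enumerate(frame)
                   for c, v in enumerate(row) if v > threshold]
    if not hits:
        return None  # nothing bright enough in this frame
    n = len(hits)
    return (sum(r for r, _ in hits) / n, sum(c for _, c in hits) / n)

# A toy 5x5 "frame" with one bright paramecium-ish pixel at (2, 3).
frame = [[0] * 5 for _ in range(5)]
frame[2][3] = 255
pos = centroid_of_bright_blob(frame)
```

In practice a library like OpenCV (mentioned later in the conversation) does the thresholding and contour-finding far more robustly.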
Then after college, I was working in a neurogenetics lab when next generation sequencing was still, like, next gen. So there's, like, nothing written for it, so I had to write a lot of code for that. And I had a lot of questions about how do I make things more interesting? How do I bring weird physical objects into the world? I got really into thinking about that, but I didn't actually start building a lot of things until years later, when I moved to San Francisco and I was working for Particle, doing things with the developer community, using the tools a lot, and really learning to rapidly build weird hacks. So yeah, that's kind of how I ended up doing the things I'm doing, by just building a lot of stuff and thinking about people.
And Particle is the, they make the Photon, and they, like, make the Electron. They make a number of small systems that are internet enabled.
Yeah, so they make internet connected dev boards
and the software infrastructure you need to run them.
So it's a really cool platform.
It's easy to use.
That's one of the main reasons to use it.
It's very fast to get started
and then you have the ability to, if you accidentally put the dev board into an enclosure that you can't open, you can, uh, flash code over the air,
which is really useful. So I frequently use those boards, partially because I know the system really well from working there and partially because I have, like, a thousand of them.
I worked there and my business partner worked there.
And so we just have like so many photons and electrons in the house, which is great.
When you're making a project, what takes the longest: making the project, making the video, or writing the instructions?
Depends on the project.
I think that the Fridge Cat video took longer than actually conceiving of
and making it because it was so simple and straightforward.
But the coffee maker, when we did the automatic coffee maker,
tweaking that 3D print just took forever and it's still not perfect.
So, you know, in that case, the build took a lot longer than the video. I think it just really depends.
And why do you write the instructions for them? I mean, it's fun to make these things. Why do you make them open source? Why do you let other people build them?
Right. That's a good question. You know, I,
when I was trying to learn all this stuff, and I mean, I'm still learning so much of it. My,
so much of what I do has been assisted and helped by people putting clear instructions online.
I would not be able to make the things that I make if people hadn't documented
their work. Because a lot of what I do, you know, I don't have formal training, a lot of what I do
is I ask one of my friends who does have formal training what I should Google, then I find a bunch
of examples and I cobble together what I need. And I actually learned a lot while I do that. So I
want to be able to do that for other people. I mean, not to mention that documenting your work is just a really good way to go over what you did and have a record of it for yourself and, you know, be able to build it again.
What do you say to people who want to build something and are daunted by the amount of work that it might take?
You know, somebody who wants to build a RFID tag for their pet door or something like that.
What advice do you have to keep them motivated and going?
Well, one is that you never know until you start. It might be a lot easier than you're thinking, or you might find a shortcut that makes
it a thousand times easier. And the other is, there are people everywhere building things. So, you know, don't be afraid to ask for help. Another person might be able to offer you a shortcut. Don't be daunted by the
amount of stuff. And you know, if you hate it, I guess you can always stop. But chances are,
you're just going to really get into it. And it's really rewarding in the end.
How do you decide what to work on next?
Oh, gosh, I have to make these big lists. I'm definitely in the camp of having too many ideas.
So I end up having to make these lists, and then I have these huge arguments, um, with Richard Whitney, where
we're just like playing design chicken about different, different projects that we have.
And, uh, you know, we, we just have these huge discussions about what is actually good to work
on and what is interesting to work on, both from the perspective of, you know, why is it interesting for people, why is it a good or bad product, why does it potentially help people or not help people. A lot of times I just end up gravitating towards the more emotive side of things. I think I just really like building interfaces that create a feeling, you know, and frequently joy is the feeling. It's not all the starfish cat discomfort experiment
and the fur worm. I feel bad that those are the ones we talked about, but you know, I like to,
I like to make things that could make people happy or, you know, create, help them understand
something better. So I have a preference towards those as well.
Do you view what you do more in line as engineering and consumer products or more in line with performance art?
I think it's a little bit of both. You know, in my professional life, I definitely am doing, um, more on the side
of engineering and consumer products. Uh, I do more of the design and visual elements and, uh,
and I'll actually build, like, full rapid prototypes when that's called for, too. But, um, in some
ways, I feel like a lot of, um, a lot of content and a lot of product is weird performance art.
You know, you're trying to create a system that reaches as many people as possible.
And then, of course, you know, I do have like this kind of weird art side where I'm doing things that feel more performative.
And they're mostly separate, but I feel like in different parts of my life, I do both.
The emotive technology part seems very performance art style to me because you, I mean, that's one of the things about art is it generates an emotion.
And so.
Absolutely.
And it is when I make the exaggerated versions. The truth is that there's a ton of emotive tech that we encounter every day already, you know, different interfaces that evoke emotions. And there's a lot of really
subtle ways that you can insert those into products that people do frequently. And, you know,
again, also by accident. Every time an interface moves, we already start to anthropomorphize it. So it actually does overlap with some of my,
some of my professional work, but I don't build exaggerated interfaces in my professional life.
I just build, you know, correctly calibrated ones.
Where does Hack Pretty fit in?
So Hack Pretty was something that I started doing because I missed making content that was more in the education space, more, you know, meant to teach people about what they could do in terms of design and how to do it in terms of code and hacking. So that's just sort of a thing to do on the side. I like it a lot.
Recently, I feel like I haven't been as good about putting videos up.
So it's just kind of become a repository for different thoughts I have.
But yeah.
Let me ask you a question about when you're making things for yourself,
not for your job.
You mentioned you have a pile of photons lying around,
you do some 3D printing and servos.
Do you feel like being constrained to what you've got available for tools is helpful?
Or if somebody threw you a million dollars
and a completely stocked shop with assistance,
would you have things that, oh, now I can build this?
Yes. Um, so I do actually also think that design with constraints is helpful.
I know you said, like, not in your professional life, but I will frequently ask people for all their constraints, because I think it's more fun to design with constraints. And, you know, it's a cool challenge even in my regular
hacking life. But there are certainly things that, if I had an unlimited budget, I would be trying to build. I'd be trying to, like, you know, get in contact with the best and
most amazing people in various fields to try to like ask a million questions and then also get
them to help build things.
One of the problems I've come up against with my robot is I need something that is like applied robotics, kind of like the computer vision.
Now you can go into the depth and the math or you can just use OpenCV and follow along
on the examples. But robots still seem to need a lot of
math with localization and kinematics and whatnot. How do you find what you need to know?
I mean, examples online are one thing, but sometimes you have to go deeper. How do you
find these things? Definitely. So I ask a lot of questions to people in my local in-person
community. I know some folks who, I mean, I consider myself very much an amateur in these
fields. And I know some people who are far less amateur and some people who are highly
professional. So I'll like frequently ask them questions and I'll also ask them, you know,
how can I do it easier?
Is this the wrong way to think about it?
And I always ask, what should I Google?
I think, you know, I try to shortcut things a lot
because I try to make things that,
in my, not in my professional life,
in my, you know, hacking things together life,
I try to shortcut
frequently because I, I want to, um, the effect that I'm looking for is often more powerful if
it's done more simply. Um, so that's not something that generally helps when people are trying to
build like a capital R robot that does, you know, particular tasks, but that's what I do.
I say, what is the easiest way that I can get to
that? Or is there any way that I can just encourage the human machine to do that instead? One of the
early iterations of the starfish cat, I think we had talked about, because this was one of my design chicken things where we're like, oh, but what if it did this, oh, what if it did this. And I think one of the early iterations had locomotion in it.
And then I was like, oh, I don't need to do locomotion.
I just need to give it an emotive enough movement that people feel like they have to move it
to different spots.
Well, yeah.
And I mean, if it looks like a cat, people want to pick it up.
And if it's doing the little kitty paws, then they think it's safe to pick up.
Exactly. And if it rolls over on its back, and then the knives come out.
Yeah. Oh, sorry, my actual cat just, like, jumped up and did a thing.
Do you have any, uh, projects you're working on that you can tell us about?
Oh gosh. So let me
think. I'm doing some various things with biomimicry. This was a little bit of an older
project that I had right after I did the talk with the furworm. I wanted to make an updated
furworm that wasn't intended to be broken, but still exhibited, you know, these different minimums. And so I did a design for a robot and
I printed it out and put it together where it has like a very concrete skull. And it's actually
based off of like the, uh, the skulls of like ferrets and those sort of shaped creatures,
you know, long fuzzy tube-shaped rats. And so you end up with this critter that has what feels more correctly aligned, when we look at it, as a face and as a living thing, because the skeleton underneath it is actually
based off of, you know, biological structures. So that was something I was working on.
Working on another simple emotive robot that doesn't locomote,
but does a similar, you know,
emotive movement thing with just its two little front paws.
So stuff like that.
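Getting that "emotive enough movement" from a non-locomoting robot often comes down to driving a servo with an eased, breathing-style waveform instead of a linear sweep. This is a hypothetical sketch of that idea in Python, not Christine's actual firmware; the function name and parameters are mine, and the same math ports directly to a microcontroller loop.

```python
import math

def breathing_angle(t, center=90.0, sweep=15.0, period=4.0):
    """Servo angle in degrees at time t for a calm 'breathing' motion.

    A raw sine already feels more alive than a linear sweep; cubing it
    (sin**3 keeps the sign but lingers near zero) adds a
    slow-in/slow-out quality similar to animation easing.
    """
    phase = math.sin(2.0 * math.pi * t / period)
    return center + sweep * phase ** 3

# Sample one 4-second cycle at 10 Hz, as if feeding a servo driver.
angles = [breathing_angle(i / 10.0) for i in range(40)]
```

On real hardware you would write each sampled angle to the servo at the same 10 Hz rate; the easing is what makes the paws read as alive rather than mechanical.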
Yes. And, and if you had Christopher's million dollar shop,
what would you be working on?
I don't have that.
Well, that's imaginary.
I should, imaginary, yes.
I would be working on augmentation interfaces as a large category.
I think that a lot of what we do now and a lot of what people are excited about with technology and business is automation.
You know, you say, how can we make it easier to do? How can
we have something else do it for us? The question that we're not asking that I wish we would ask
more is how can we do augmentation? How can we do things? How can we, how can we create and use
interfaces that make us smarter, that make us faster, that make us stronger, that make it easier
for us to do the things that we want to do that leave us in the driver's seat and give us control and allow us to further human progress that way. Yes, there are
many things on that list that I would want to play with. Christopher, why don't you have that
laboratory ready for me? Give me a million dollars.
Okay, I have one more thing to ask you about that's, uh, completely in a different direction, I think. Um, you wrote the Cartoon Guide to the Internet of Things.
Yes.
What's up with that?
Um, so when I was at BuzzFeed, you know, I think my title was Internet of Things Fellow, so I, sorry, yeah, I know. But, um, this was, you know, pretty much in the heyday of IoT hype, and I found myself frequently explaining to people what IoT actually was, and why there was hype, and why it was potentially important, and, you know, what was potentially good or bad about it. So I decided that I should make a very simplified form, because you would
frequently do a search about, you know,
what is the internet of things, and get a lot of company documents that were obligated to talk about it in a way of, it's going to be the hugest thing. We're going to make so much money.
It's like going to be so amazing.
These are the number of devices that are going to come online. And it would contain no actual information for, like, your mom about what that was.
So I wanted to create the document that you could hand to a parent or somebody who is highly not
technological because some people's parents are highly technological and say like, okay,
this is what it is, you know.
And did you draw everything? Is it computer drawn, hand drawn?
Oh, I drew it, yeah. I like to draw things, so I drew that. I really enjoyed making content at BuzzFeed. There's a couple of the things that I did where I drew all the graphics for it, just because, you know,
I didn't want to bother the artist there to draw the graphics for me.
Yeah. I'll, I'll frequently like doodle for fun.
And that's one of the ways I do design is to do these like kind of visual
mock-ups.
I'm interested because I also doodle for fun and I have the Narwhal's Guide to Bayes' Rule
which I don't know if anybody's ever read it
but I find it hilarious
that's amazing
and I never quite know what to do with these comics
and I mean you did it for BuzzFeed
so that counts.
I put it on my website and then wait for people to comment, and they never do.
Yeah, it's really hard to get content out.
I mean, content's kind of a bad word.
It's kind of, I mean, I remember somebody, I was like, oh, yeah, I have this blog, I have this podcast, and they're like, oh, you produce a lot of content.
And I'm like, that's not what I think of it as.
I produce things that I don't, oh my God, it is content.
All right, well, do you have any questions for us?
Anything you're working on that you want embedded software engineers advice or anything?
Oh, gosh. I mean, I'm sure that like what I do is way too simple to actually get advice on.
I think one of the things I ask everybody is, is there anything you think I should read?
Yes, so many things.
Christopher, do you want to answer first while I collect my thoughts?
What makes you think I'm going to go any faster?
Things you should read. Wow.
I mean, there's The Way Things Work, and that's been updated.
How Things Work.
Or is it The Way Things Work?
I think it's The Way Things Work.
Because that's just so nice on describing little things.
And it is amazing how quickly the little things build into the big things.
There's the book on motion, Making Things Move.
Yeah, I just put that in the don't read pile.
Okay, don't listen to us.
So not that one.
Let's see.
For things like this, science fiction is all.
What did you read when you went through your robot arm stuff?
I read a lot about the robot operating system,
which is very complicated
and probably not that relevant.
But the idea that there are thousands of people working on small modules for robots to help them do all the things a robot needs to do is pretty powerful.
But it requires pretty big processing because it's all distributed and not efficient for small things.
I mean, science fiction is always where I go because it's got all the good ideas and it
shows the pathway for some of these emotive technologies leading to bad. I mean, we've
read the story, right?
Yeah.
Seen the movie? Got the t-shirt?
Yeah, it's all there.
Okay, so the book that I
would suggest most,
aside from the technology and science fiction
and all of the data user manuals,
blah, blah, blah, Other Minds,
The Octopus, the Sea, and
the Deep Origins of Consciousness
by Peter Godfrey Smith.
So the idea, I mean, I read a lot about octopus.
I don't know why.
I just really like the idea of them.
But in this book, he talks about the history,
the natural history parts, the science,
but also the idea that their brains are huge,
and yet they're completely alien from us.
And this idea of the alien-ness was striking to me because if we ever do meet aliens,
we might need to take on this idea that they're not like us.
And that's fine and that's good, but you can't assume that something's stupid just because it's different. And the way octopus intelligence is working is just miraculous.
But I think that's a whole show or at least a blog post.
Yeah, that sounds awesome. I'm definitely going to read that.
Yeah. I mean, I love books, so we could be here all day with the books. Um, but maybe we should actually go about our weekend, which would be nice, because if the weather is as good there as it is here, it's going to be an outside weekend.
Yeah, it's looking good today.
Chris, do you have any other questions?
Well, you said we want to go on our weekend.
christine do you have any thoughts you'd like to leave us with?
Oh, gosh.
I guess one of the last ones that we talked about, which is, you know, when you're building something, try to focus on augmentation over automation.
It will be better for you and also the rest of the humans.
Excellent.
It's always appreciated by the rest of the humans.
Our guest has been Christine Sunu, creative director at Flashbang Product Development and
creator of Hack Pretty. Thank you for being with us, Christine. Thank you. We'll have show links
to many of the things we talked about, including Christine's blog and her company.
I would like to thank Christopher for producing and co-hosting. And of course, thank you for
listening. You can always contact us at show at embedded.fm or hit the contact link on embedded.fm.
A thought to leave you with as long as I'm on the idea of octopus and cephalopods. I've been
reading a lot about cephalopods and their big brains.
Take cuttlefish.
They have skin that changes color,
but they don't seem to be able to see color
with their pretty advanced eyes.
Instead, it's like their chromatophores,
the color-changing mechanisms on their skin,
are wired into their brains
without any conscious control.
So when you look at a cuttlefish's skin, you may be seeing their brainwaves. You could see their thoughts.
Embedded is an independently produced radio show that focuses on the many aspects of engineering.
It is a production of Logical Elegance, an embedded software consulting company in California.
If there are advertisements in the show, we did not put them there and do not receive money from them.
At this time, our sponsors are Logical Elegance and listeners like you.