Embedded - 375: Hiding in Your Roomba
Episode Date: June 3, 2021
Brittany Postnikoff (@Straithe) spoke with us about scary robots, neat stickers, and contributing to open source projects. Brittany's website is straithe.com and her sticker channel is twitch.tv/straithe. Her GitHub repo has curated reading lists on technical topics. She's working at Great Scott Gadgets, maker of a variety of hardware tools including LUNA, a toolkit for working with USB. (This was mentioned on a previous Embedded show, 337: Not Completely Explode with Kate Temkin.) And if you want Embedded merchandise like mugs, mousepads, and wall art, we have a store for you.
Transcript
Welcome to Embedded. I am Elicia White, here with Christopher White. Our guest is Brittany
Postnikoff, also known as Straithe. We'll be talking about robot social engineering and
other things.
Hi, Straithe.
Hi.
Could you tell us about yourself as though we saw you on a panel
about robots? Sure. So right now I am a researcher and community manager for Great Scott Gadgets.
And otherwise I am a troublemaker. I primarily just look at how things can be changed or broken and adjusted using
robots, specifically the emotional side of robots. Do robots have emotions?
That is a very big philosophical question and it depends on who you are and what your perceptions of emotions are.
But I'm inclined to be somewhere in the middle.
All right.
Well, we're going to ask more about that and other things.
But first, we want to do lightning round,
where we'll ask you short questions and we want short answers.
And we'll try not to ask for more details.
But who knows?
Sure.
If you were to make a Gibsonian cyber deck,
what would you put in it?
I keep thinking food.
Should we bring back
the dinosaurs?
Yes.
Preferred listening when programming?
Electroswing.
It's a genre I've never heard of. I'm going to go look it up later.
Are fake tattoos a type of sticker?
Yes.
Googly eyes or puffy animal stickers?
Googly eyes.
If you could be a sticker, what kind of sticker would you be?
Holographic.
Oh, right.
Holographic sticker.
Do you like to complete one project or start a dozen?
Start a dozen.
If you could teach a college course, just one, what would you want to teach?
Ethics.
What is your favorite fictional robot?
Bender.
Do you have a tip everyone should know?
Turn your clothes inside out before washing them.
See, but I hate that because then I have to turn them outside in after drying them.
It's so much work for me.
So much work.
It pills less and it looks better longer.
All right.
Okay, so we've touched on stickers and robots and emotions.
Which one do you want to talk about first?
Let's start with stickers.
Okay. So you're into stickers. What does that mean? You have a Twitch stream where you talk about stickers. I don't understand.
So stickers are a big part of hacker culture.
And I love seeing the stickers on the backs of people's laptops or on the other devices they carry.
And I had this question about what does the sticker mean?
What does that sticker mean?
And after talking to people,
you realize that stickers have so much history and story behind them
that kind of gets lost if you don't archive it somewhere.
So I've been working on this idea of like a sticker archive for the stories about stickers so we don't lose that information as people leave the community or move on or other things like that.
Okay, that's genius.
So this is like there are many different Hackaday stickers and some are older than others and some you could only get at certain conferences. Is it that sort of
information or is it the emotional attachment people get to their stickers? You know, both.
I think all stories are worth writing down or collecting or having on a stream and sharing.
And for me, it's just this idea that, you know, it's a big part of our culture as people in tech and it's nice to
collect that culture somewhere. So some of the stories are just like, you know, I saw the sticker,
I wanted to be friends with someone because of the sticker and now we are. Or sometimes you get,
you know, the stories behind the, this is not a camera sticker and you want to like,
why did somebody make that? And you get to hear that story that led up to somebody doing that creative design. And I think both are really exciting just to,
you know, learn the thoughts that people are having around these cultural symbols.
Okay. So you're probably somebody who knows the answer to this question I've had.
When I started in tech, it was when Friends was on television, or the early seasons.
And we had laptops and stuff.
They were, you know, garbage compared to now, but I don't remember having stickers. I would
have put stickers all over everything, but I don't remember doing that. And I don't really
have a good sense of when that kind of started. I remember when I started putting stickers all
over my laptop, but it was only like 10 or 15 years ago, maybe. Do you have a notion of where this got
started? No, but now I'm going to go and research it. Okay, good. Report back. I will. This is great.
Thanks. So we recently re-aired a show with Sarah Petkus, who then did our logo and stickers. And one of the things that I like best about our
stickers is that some people see it as a radio head, or some people see it as a radio, an old
timey radio, and other people see it as a robot head. And I love that ambiguity.
Are there things you look for that make for good stickers?
There's a few things like color, design, the quality of stickers.
Like I have some stickers I've put through the dishwasher and they're fine. There are some
stickers you leave on a laptop for a week and they're gone. So the longevity is important to me.
And I also try and make sure messages are positive. Like I have a whole stack of stickers
that I will not interview people on and that I will never put on my laptop, just because they contain messages I'm not
comfortable with. So, you know, just being positive, having that good design, and also using
ethical creators as well. So there are some sticker companies that, again, I won't use. And
there are other ones that, you know, put a lot back into the community.
So there are a number of things I look for.
What are some that you will use?
Because I think I'm ready to switch sticker manufacturers.
I've heard great things about Sticker Ninja.
And I've heard great things about, I think it's Sticker Giant is another one.
But right now there's so many people looking for a better place to go and I have to try out more myself still.
I understand.
I'm looking at Chris's laptop.
I know, I keep wanting to turn it around, but it's being used to record this podcast and that would be dangerous. And I know that Ben Krasnow's wrench in a beaker for his YouTube, what is the name of that?
Channel?
Yeah, his YouTube channel.
What is the name of it?
Oh, Applied.
Applied Science?
Applied Science.
And Matt Godbolt's Compiler Explorer are both logos that have no words on them, and I wouldn't be able to tell
what they were without knowing. What's your opinion on that kind of sticker? The
you-have-to-be-in-the-know-to-recognize-it kind?
Those are personally my favorite stickers
because I don't put stickers on my laptops that have words because I think it's more about just the picture for me.
And also it's a great conversation starter.
Like if you don't know what a sticker is, going up to somebody and asking, hey, what's that sticker is a great way to make new friends.
And that's one of my favorite things about sticker culture.
How did you get into it?
I mean, was it one of these conversations where you went up to someone and said, what's the sticker?
I mean, I do that all the time because I'm just very curious.
But I think a big part of it for me was just, uh, you know, my first DEF CON, I saw
these stickers.
I'm like, wow, there's so much color and vibrancy to this community.
And I mean, of course the LEDs helped, but, uh, the stickers for a good introduction and
people are just like, here, take my sticker.
I'm like, you're just going to give me a sticker.
And at this time I was a student, like anything free made me happy.
So of course I'm going to throw stickers on my journals and stuff. And people are like, no, you have to put it on your laptop. And then all of a sudden it was like a big choice. And I was like, well,
now I just need to collect all of the stickers and I need multiples of all the stickers
so I can stick them on things and be happy. And so now I have a huge sticker collection.
And of course I use some of them,
but some of them I have just as an archive
and it just has kept going from there.
I get other people's stickers to send out
when I send out our stickers
because I don't want our stickers to be lonely.
That's normal, right?
Absolutely normal.
I have so many friends you would love to spend time with
if that's your mentality around
stickers. I don't put them on my laptop, but my toolbox is covered in stickers.
There's a sticker on your laptop?
There's one. There's one embedded sticker on my laptop. I don't think that counts. That's more of
a, here's our company property. Do you think there's a crossover between the sticker kind of ethos and the badge thing?
Very much.
There are a lot of people in both communities that share artwork back and forth.
And for me, stickers is kind of step one before I start getting into badges. Because so many of the cool conference badges have great PCBs that have fun designs.
And some of that uses the same skills as creating stickers.
So it's kind of like a natural progression to me.
So going back to robots, tell me more about robot social engineering.
I mean, it's not really about tricking the robots to give you information like human social engineering, is it?
So one aspect of it is, but for my master's thesis, it was more about using robots to social engineer people.
Okay.
I mean, if a robot asks me for my password, I might give it to him because it's a robot.
It doesn't care.
What?
I don't know. I just can imagine being in a situation where I unwisely trusted a robot because it wasn't a person.
Fine robot. Like, is R2-D2 coming up to you and beeping, give me your password?
Pretty much, yes. That was what was in my head. No, no, it was definitely R2-D2.
Or is it just like, you know, something on your computer where a fake robot comes up?
No, those are clearly people.
Oh, okay.
I love this. This is exactly why I did this research. You know, there's so many definitions for robot.
And when I talk about robot social engineering, I specifically mean robots that are in a body and are able to, like, interact
physically with their environment, move around in that environment, and still have some form of
artificial intelligence. So the things that pop up on a TV screen would not be a robot to me.
So it's kind of interesting to think about how everyone has these different definitions.
I mean, we gave our Roomba the password to our internet.
What? Well, yes.
But it didn't ask.
The app asked. The robot itself did not ask.
It was a proxy. It's not the same kind of thing. What kinds of social engineering can the robots do other than get our Wi-Fi password?
So there are some things where, how do I describe this?
So robot social engineering has a number of parts.
There's, you know, depending on the level of artificial intelligence of the robot. So some things like a Roomba would probably just be
a proxy for a human social engineer that has gone into your robot and uses the emotional
connection you have with your robot to make you do things. So for example,
some Roombas, people actually remodel their entire house so it's more accessible for the Roomba.
And people get these emotional attachments where they name them, where they pet them.
What? People name their Roombas? That's ridiculous. Who would do such a thing?
We have googly eyes on our robots and we've named it.
Right? So you're right in the perfect market. But there are things where, you know, those robots, you start getting used to them. When they first start moving around, you're like, what's that noise when they're moving
around? But after a while, you get used to the noises. Now say that somebody is able to RDP
into your Roomba and all of a sudden start looking around your house because there are cameras on
Roombas. There is LiDAR on some of the Roombas. There's so many different features that can
collect so many different types of data.
And say these robots go throughout your house and you have one of the ones with a camera.
All of a sudden, a person can use the social comfort you have with your Roomba to go around
your home and case your entire house, see everything that's in it, where it is, and
also see or hear whether you're home.
And so all of a sudden, say you go on a vacation for a week, somebody has been hiding out in your
Roomba for a month, RDPing, watching, and they notice you're gone for three days in a row,
which is really weird for you. Well, they know that's the perfect time to come in and rob you.
And where everything is, where the alarm systems are, they might've seen you
arm them or disarm them through the Roomba's camera.
There are all sorts of privacy and security considerations with the technology you let in your home.
There definitely are. Our Amazon Echoes have been irritating me with not only their insistence on offering me things I don't want, but also...
Amazon's new programs to make mesh networks and share your network and do weird things I didn't ask it to do.
Do you think of those as robots?
I mean, because that does have a social engineering aspect as well.
They don't move, though.
I don't because
they don't move. To me, they're just a machine or they're an artificial agent. What if I tape
it to the Roomba? Sorry. Sorry. Why does its ability to move make it more interesting to you?
Because all of a sudden you have a walking, talking vulnerability.
It's not just a thing on your table that you talk to. And that physicality is a big component of this area of research, compared to seeing how an artificial intelligence online affects people.
That is a different area that doesn't consider physicality.
It's highly explored, but it doesn't apply to when you have a robot in front of you in a body.
It's a different scenario.
You interact with it differently.
And there is research on how different those two things are. And so I was like, well,
I want to look at this specifically. And so that's why I define robots so narrowly
is because the physicality is really cool.
Does it plug into something deep in our brains that says this is alive,
which is a difference from, say, the Echo tube that is just a monolith?
Yeah, so humans use things like anthropomorphism to connect with different things in their environment as if they were humans or other humanoids.
And then we have zoomorphism, which is when people treat things in their environment like animals.
And robots can benefit from one, the other, or both at the same time,
depending on how you interact with them.
And I think that's really special and cool.
Do we trust them more because we...
I mean, I don't trust people that much,
but I think I might trust a robot more.
I don't know.
Yeah.
We don't give our Wi-Fi password to very many people.
Well, yeah.
Finish your question.
I'm sorry.
It was more like, is it the fact that we do this zoomorphism and anthropomorphism that causes us to be more susceptible to social engineering attacks? Or
is it just that we are so stupidly susceptible to social engineering attacks? This is just one more
path. Both. And some of it has to do with context as well. Like if you have a robot coming up to you
in a hospital, because there are some hospitals that have these robots that will come and deliver your medication to you.
Well, it's a machine in an authoritative role in an environment where most people don't feel they have much authority.
So if a robot comes in with a cup of pills and says, take this, like, you might be more inclined to trust it. Even though we again have the issue of: you don't know
who programmed those pills, if they're the correct pills, if your pills got switched with
someone else's. Like, there are trust things to think about. But because of the context and
authority the robot holds, some people might be more inclined to trust them.
I remember talking to Professor Ayanna Howard about this,
that even if the robot led you in a psych experiment to the incorrect room, and so you
knew that it was fallible, when there was a fake fire alarm, you still followed the robot,
even if you kind of knew how to get out of the building. What's wrong with us as a species?
That is one of my favorite papers. And I did cite that one in my thesis because it was just like
blew my mind that people could see an exit sign clearly pointing, just go left, but the robot was
pointing right. So they went right. And like,
there's so much to think about there. And again, it's context where, you know, people freeze when there is a fire. There's panic. And just like, you know, when you were in public and we see
someone getting hurt and that, you know, maybe somebody should call the cops or should intervene or help out. No one does it.
And there's a bunch of papers on this that no one wants to be the first to step up.
So a robot coming in in this case and being like, hey, follow me to safety. You're like, okay,
I don't have to think about it. Somebody else will think about it. Great. Tell me what to do.
And so I think that's, again, part of the robot slipping into an authoritative position, and taking the pressure off of you kind of gives you more of an inclination to trust it.
But there's long been a theme of don't trust the robots, right? In the Alien series and whatever, there's plenty of
examples where the robot turns out to be an enemy for some reason and you shouldn't trust it.
And I feel like that should have been subsumed into our culture over the last 50 years,
but it doesn't seem like it has. That seems like more of a reflection of our desire to trust them rather than a reflection of our distrust.
Well, yeah, and we have for almost every bad robot there is, there's a good robot like C-3PO or R2-D2.
Or in my case, like Bender isn't exactly moral, but I would love him as a best friend.
So yeah, that's the thing is we always match
kind of the good with the bad. And it comes down to, you know, robots are as varied as humans.
They come in so many shapes, so many different types, different thought processes,
different skills. And, you know, when we make a decision on a particular robot, it comes down to, again, context, environment, what the robot does, why we think it does what it does, and all sorts of these complex variables that go into one value of trust.
With respect to your example about people being hesitant to intervene in emergencies,
there is also research to show that if you have any training at all,
if people are trained to be responders, not even formally trained, but even the community emergency response teams, where it's a low level of training,
they do tend to step up. Do we need to train people to be like,
no, don't follow the robot? Or how do we get out of this? Because I don't really want to
lose the personability of my Roomba. I like its googly eyes.
Yeah. I'm happy to hear that. That gives me a lot of joy. And I think a lot of what we need to do is just be aware. Be aware that a robot could be collecting information on you, and that robots don't actually give you all of the details about what they're doing or
what it's collecting or where its information goes or how its information is stored. So it's
kind of exhausting, but having the awareness and vigilance to think about robots in depth when you
interact with them is what we need to do. And yeah, training could help with that. I'm hoping that as I write more
papers and give more talks, it kind of gives people that low level of training. But yeah,
it's a really hard problem to defend against robot social engineering, just as it is to defend
against regular social engineering. You said write more papers. You have a master's degree, but you are starting a PhD this
fall? Yeah, I'm actually switching from CS to electrical and computer engineering, where I will
be doing a PhD with a bunch of other great robotics people. And one of them actually started
working on robot social engineering around the same time
I did, but they were living in Italy. So we will actually be in the same place at the same time,
researching the same thing. So I am real excited about the research that's going to come out of
our lab. And is it going to be on robotic social engineering or robot as a whole?
Or do you have an area of concentration?
Yeah, I've been really inspired by Whitney Merrill to focus a lot more on privacy.
And so I've always had this question in my head of like, what are people's public perceptions of robots in public
spaces? And I have a story on that actually. So my partner and I were in an airport and we saw a
robot around and I was like, take my bags, check us in. I must go and look at this robot. And so I
grabbed my phone and I start recording because I'd never seen this robot before. And it had the airport symbol on the side of it. And you could scan your boarding
pass and it would tell you where to go and where to check in or where stores were, or you could
search for restaurants or it would turn its head around and take a selfie for you and email you
the selfie. And I'm like, wait, that robot is collecting a ton of
information. And I'm like, and the only reason people are trusting it is because it has a sticker
on the side of it. And so I'm like, I could just like, I want to see if I could just drop a robot
somewhere, throw a sticker on it and see if people give me their information, you know? So this is a lot of what I want to look at in my PhD is how little context do I
need to give on the robot for people to give it high levels of trust?
Are you going to put stickers that people recognize or things that just look
like something that is trustworthy
All you need are googly eyes. I'm telling you, that is the lowest bar.
Um, well, I'm thinking things like, you know, I'll be at the University of Waterloo. So can I throw
University of Waterloo stickers on a robot, have it wander around, and ask people to, you know, enter a sweepstakes and get their personal information? Though, you know, robots will remain expensive for a while.
So this level of interaction usually is related to a company, iRobot, the airport, places that are big enough, even Boston Dynamics, they have funding. And so if their robot is wandering around doing nefarious things with people's data, that's something to make a fuss about, and I think people don't know that they should be making a fuss about some of these robots.
Like there are a few I will absolutely walk the other direction if I ever come into contact with them.
Like what?
Like the Knightscope robot, personally.
They're a security robot.
They look kind of like Daleks.
They do, oh my gosh, do they ever.
I mean, is it just because it has a bunch of cameras? Does it have a laser? Why are these bad? Does it have googly eyes?
No. Um, I'm kind of happy these ones don't have googly eyes, because I think it's bad to personify these ones, because I don't think they're good for the public.
But if you go through their documentation, you can see that they have thermal imaging, that they have license plate cameras, that they collect the MAC addresses of any devices near them, that they have wireless
access points on them that collect whatever access points your devices are trying to connect to.
So it knows what networks you usually try to connect to. They have cameras, they have audio,
and they're collecting all of this data. And even if a company is the
one that has set this out in their parking lot, you aren't given notice that it's collecting all
of this data and you don't know where it's going. And based on the documentation I've read, it looks
like even if you think it's just going to the company, all of the data is going to Knightscope
as well. So all of a sudden you have two companies using your data.
And Knightscope has done things like partnered with various law enforcement agencies
that people don't really respect and that make a lot of people's lives worse.
And it makes me uncomfortable to think that these robots are breaking apart families and things like that.
And wow, I mean, there's so much here.
Elevated body temperature, that is not something I want people to be able to know because that indicates that I'm stressed.
Although, on the other hand, part of me is like, oh yeah, you put that in an airport. If anybody has a fever, maybe we don't spread disease quite as fast.
But the balance there is just impossible.
And these robots, I can... if there isn't already a cell tower in them in order to collect the base cell data, there will be in about a month. It's just, these are the robots. I didn't realize
that I spend so much time thinking about the good robots that I don't really think about
these other kinds because I don't see them very often.
And they're mostly used in larger cities and airports.
Do you think we'll see more as time goes on?
Or do you think that there has already been some public backlash and there will continue to be?
Right now, I'm of the opinion that yes, we're getting more and more robots in public, especially over the pandemic. There was kind of an explosion of robot use. Like, you know,
people putting robots in public spaces to take your temperature was a thing we saw a lot of over
the last few months, and also robots that would be in public spaces asking people to step apart if
you're too close. Like, there are some public parks, I think, in Korea, that used these robots to say,
you are standing too close together, please separate. But the thing is, the robot couldn't
tell whether you were living together or not and whether it was okay for you to be walking
together. So it's making these crude judgments on what should be allowed in public without actually
knowing the context or the situation or finding out more
But because it was, you know, a big robot, the Spot robot, and, you know, they are clunky, they are big,
they seem like they have authority. Again, they were given vests and stickers that
made people know that they were part of the park. And so all of a sudden these robots have authority to tell people to separate. And companies that have money continue to push the robots
without thinking about the privacy and security, and without the average person, you know,
raising their voice against the context in which robots are used.
It sounds like a lot of things are being delegated to them that should not be delegated to them
because they're idiots.
I mean, they're not AIs.
But they say artificial intelligence on the side. Do some of them?
Yeah, actually, they really do.
That's ridiculous. But okay.
But yeah, I mean, they're, you know, heuristic things that just
go out there and say, okay, here's a bunch of people, tell them to separate. That's all it knows.
So it's kind of, yeah, I mean, delegating human decisions to a robot seems very fraught.
And yet...
How do you mean?
Security is primarily one of the big areas where you want human judgment.
But if we could make ethical AI more of a thing, then maybe the robots would be more fair than humans who have biases.
Okay.
Go ahead.
AIs will always have biases, though, because they're made by people.
But if we start working on that piece, which is a totally separate piece,
that we need our AIs to go through some path of certification towards equity. And I think we will get there
eventually. But
there are benefits to having someone who is less likely to be cranky because they're hungry do some of the enforcement, like traffic, like parking enforcement.
Yeah. But I don't know, I like having a human who can be responsible for misdeeds.
And there are also sometimes, like, humans let things slide. Like, I could give this person a parking ticket, but eh. And it's those counter-positive scenarios,
like opposite from those cranky scenarios that are also human.
But if you have a robot, it's never going to let someone slide.
Like if you're a minute late getting back to your car
and it's normally not when the parking meter people come by,
like you usually slide by a minute, but robots and AIs uphold the rules
because that's all they can do. And they don't know how to let things slide for positive reasons.
And I think that's a show of humanity as well and a show of compassion and something we don't want to lose by delegating things.
I'm still on the fence about that, but I'm willing to go either way.
So as Chris said, one of these looks like a Dalek.
Another one of these looks like the kind of cool robot, I think, from Interstellar.
The square ones?
Yeah.
But as you said, Straith, they are intimidating.
They look authoritative and they are intimidating. It seems like if you really wanted people to be more interactive, you would put a fluffy bunny sticker on it instead of making it look scary.
And that's a big part of robot social engineering.
Do they look scary because they wanted them to look scary, or do they look scary because nobody realized that they would be friendlier if you made them look friendlier?
100% both.
I think the Knightscope robots are definitely intended to look scary.
And it does look exactly like a Dalek.
And I've seen pictures of people taping a plunger and a whisk to them.
So they are definitely meant to look scary.
But a number of other robots, it's just people being like, oh, I think this looks cool.
And then they make it and people are scared of it or don't understand why people are unhappy with it.
Like, why is my robot failing?
I'm like, because it looks like it's going to cut my knees off.
So it's a little difficult. And that's one thing that a lot of robot designers could do better
is hiring human-robot interaction specialists who've done research into the shape, outfits,
heights, and all sorts of other interaction variables
that could help them design their robots better.
But so far, that's not a common thing.
I mean, one of the first bullet points on your robot is force-multiplying physical deterrence.
I don't think that your goal is to make something that looks fun.
Mm-hmm.
I mean, that's going to be a pretty interesting research project. Do you think you'll be able
to get one of these scary-ish robots and see just how friendly you have to make it before people
will interact?
So there are things like, I'd probably throw an apron on it, like a frilly pink apron with
some, like, flowers on it, and all of a sudden it would look more, maybe, like
Rosie from The Jetsons.
Absolutely.
And so there's these things like outfits that can make things a
lot easier, or, um, you know, just colors. Like, stop choosing scary colors and maybe use more yellows
or purples or things like that, too. Um, but yeah, that's not necessarily my specific area of research.
And I would love to talk more with people
who do really focus on that in depth.
Yeah, we don't put enough fur on robots.
I was thinking that if you put a pink leather collar
on one of the Boston Dynamics terrors,
it would be pretty cool.
People would be like,
oh yeah.
See, that's another, okay. Yeah. It's another set of companies. It's like,
are they doing this on purpose? Because these look like hellhounds. I mean.
So you want to see more about people interacting with robots, and try to figure out what sensors and abilities robots have, how those
sensors and abilities can be used to collect data, and where their privacy and security might come at
risk.
And I want to demonstrate that through using robots to social engineer people.
Do you think that robots that interact with us take more data than we give up just walking through with a cell phone that day?
If a robot is there and it's also got video and audio of you, you're a little bit more in trouble. Or if it has your body temperature or other things like that, there's just so many other pieces of data that maybe your phone isn't collecting that robots are definitely collecting.
are definitely collecting. Well, and people, I mean, at least theoretically are in some control of their cell phone, right? You could leave it at home. You could turn it off. You can adjust the privacy settings and the location tracking settings.
A lead box?
But a robot is just there in whatever space you happen to wander into, and it's going to collect stuff passively. It's the same issue with facial tracking, right, and things like that, where you don't get to opt out. Even if it's hard to opt out with a cell phone, sometimes there's no option to opt out with something that's just ambiently in the environment.
Yeah, especially when it's walking and moving. It could follow you. Like, if you go around a corner to not be in its cameras and it follows you because it thinks that's suspicious, that adds a lot of stress to your interactions and walking around and just existing. Like, if you try to get away from a robot and it won't leave you alone, how uncomfortable is that?
Okay, so it's really about the robots being able to gather data about us that we don't want,
as opposed to the robots that we find attractive.
That's not the right word, but I'm going to go with it.
Attractive enough to engage in a social manner in which
we give up our data purposefully.
Or get tricked into.
Yeah.
Like privacy is just, you know, being able to control who you give your data to, when
and why.
And so, like was said, like with a cell phone, you control that.
But with these robots, especially in public spaces, they are someone else's property.
And, you know, there are questions about laws. Like, if you put a sticker on a robot to take away some of its abilities, what laws are you breaking?
So even if you try and do small fixes to increase your own privacy and increase the privacy of others,
what laws are you breaking?
Well, I have removable vinyl, and I could imagine using that to disable cameras. But even if it's not permanent, is that some sort of misdemeanor?
Because I'm disabling their ability to track me?
But I never agreed to be tracked.
So why?
There are some pretty gnarly legal aspects here, aren't there?
Mm-hmm.
So you're not starting your PhD program until fall, but you are working now at Great Scott Gadgets.
Yep, that's correct.
I believe we talked to Kate Temkin from there about USB things earlier in the year, or possibly last year, possibly a decade ago. I don't remember. What do you do there?
I am the community manager. So I have the fun job of dealing with all of the GitHub tickets,
interacting, you know, being first point of contact for anyone who wants to
talk to Great Scott Gadgets or get customer support help. I also will eventually help run events when we're out of the pandemic
and give talks and focus on giveaways. And I'll also be making swag, including stickers.
What are you going to do with your stickers? What are the things that you find most important, that you're like, okay, any sticker I do is going to have these things?
Well, so it's a little bit different for things I would create on my own time versus things I would make for a company. But I really love a lot of the Great Scott Gadgets ethos, which is making
everything as transparent and open as possible. Even the tagline for the company is making open source tools for
innovative people. And I just, I love how freely the company shares knowledge. And one of my
favorite things is looking at the different layouts for all the different pieces of hardware.
And I really want to make some stickers and t-shirts and stuff that really show that hardware since, you know, it's open, it's available. But I think throwing that on a black t-shirt looks really cool.
As part of being community manager, you deal with GitHub issues. Do you also work with software engineers who are contributing to the open source parts of Great Scott Gadgets?
Yeah, absolutely.
So I help review pull requests.
I try and ask questions to make sure we're talking about the same thing.
And a lot of the GitHub issues end up turning into, you know, maybe you could try and fix this.
And people submitting their first pull request. So I definitely love when people open issues and pull requests because it
gives me another opportunity to interact with and support people and see new ways to use the things
that we've already made. Getting people to do open source, it's still hard. How do you get them to engage?
One of the things that I've been trying to do in the company is respond to issues quicker. I've tried to put a service level agreement in place that I'm going to respond quickly, so people know that we care and that we want to hear their feedback. We want to hear what they're having issues with so we can make everything better.
So I would count issues as contributing to open source, because it does affect how we think about what we're making. When people think about contributing to open source, they normally think of only contributing code. But contributing documentation, writing down the issues, joining the Discord and interacting with us and telling us what you want are all ways to contribute to open source. And I really want people to focus on some of these other things other than contributing just software,
because there are so many ways to be involved. It seems like there's a real mix in the quality of
various open source projects and how welcoming they are. What do you think are some of the keys
to getting people to feel comfortable submitting a first PR or an issue or something without
fearing that they're going to be yelled at or made fun of? Because that happens sometimes.
Yeah, I've definitely been in that scenario. And it's what turned me off of open source for so long: you know, people just being outright rude or saying, like, this is too simple of an issue to contribute. We're not going to merge someone else's stuff when we can do this in a line. It was just antagonistic and rude.
But then I saw how Great Scott Gadgets was supporting everyone constantly, all over the world, at all these places, and that they were giving back
through talks, through giving away hardware, through making sure everything was open source
and really touting that and having tons of videos and write-ups and stuff out there for people to
learn from. It was obvious that it was more than just a pet project. It was
kind of like a labor of love. And so that's one of the reasons I joined the company was just
how positive all of these things were. And so that's something I look for when I try to
contribute to open source now is how much are they giving back to the community and accepting the community? I was talking to a person who wanted to contribute to
an open source project, a big one, not Linux, but a big one, and didn't understand why no one would help them set up their computer in the right way.
And I was like, okay, they have a getting started guide.
I don't understand what your problem is.
And the problem was that it took a long time to compile,
and they wanted help understanding how to get faster compiles.
And I tried to explain that they have a lot of people who want to do a small change. Not a lot of people actually do a pull request, but there are too many people to help every single one just get set up.
Do you see that problem? I'm not sure I handled it well, other than trying to
explain from the other side. Is there some way to say, yes, we really want you to contribute,
but could you please not waste our time? That is something, honestly, I've been struggling with now that I am first point of contact for the GitHub issues is that before me, there wasn't anyone dedicated to this.
It was software engineers who had a bit of spare time or other things like that, but it slows down the open source project every time we need to help
someone. And so I'm really happy that Great Scott Gadgets grew enough that they were able to pay and
hire me because the company's doing well enough. So now I can go in and do that. But not every
open source company can afford to hire someone. And so another way to contribute
to open source and help other people is to be part of the community and watch projects that you like
for how they do try to solve issues and try to help other people. And so right now I'm doing that
by looking at old issues and how they were solved, seeing if new issues match up and being like,
okay, well, we've tried these sort of solutions in the past on these issues.
Let's try that again over here.
And using those issues and going back and forth is one way people can contribute to open source, help other people, and free up open source developers' time to do other things.
So it's definitely hard. And every time somebody opens an issue, it is taking up time. But hopefully, you know, contributing to projects either with
money or buying the products they're related to can help open source companies hire more people
that can do this type of work. I guess probably because of being around with the dinosaurs,
I still don't understand how open source companies can make money. Do you have any insight into that?
Yeah. So one of the things with Great Scott Gadgets is that we do sell products like the
HackRF or the Ubertooth, or soon we'll be coming out with Luna, which is something that Kate Temkin's been putting a lot of time into.
And it's that hardware that really supports the company.
And so anytime anyone buys one of the actual Great Scott Gadgets pieces of hardware, it is funding things like creating new hardware or helping with these pull requests
or GitHub issues. And so that's the number one way people can support us. And we hope that our
hardware is what people buy instead of the knockoffs because of the customer support we provide and some of the guarantees that our resellers provide as well.
And so, you know, it's part of that ecosystem and giving back to the community. And if people
want to give back to our company, buy our hardware. It's really helpful.
And this Luna board, looking at it, it does protocol analysis for USB.
It works on creating your own USB device.
It has an FPGA to help all of this.
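For readers wondering what "protocol analysis for USB" means at the byte level, here is a minimal, self-contained sketch. It is not Luna's actual API, and the example bytes are illustrative (the 1d50:6089 vendor/product IDs resemble a HackRF One's); it simply decodes the standard 18-byte USB device descriptor that every USB device reports, using only Python's standard library:

```python
import struct

# Standard USB device descriptor layout (USB 2.0 spec, section 9.6.1).
# All multi-byte fields are little-endian.
DEVICE_DESCRIPTOR = struct.Struct("<BBHBBBBHHHBBBB")

FIELDS = (
    "bLength", "bDescriptorType", "bcdUSB", "bDeviceClass",
    "bDeviceSubClass", "bDeviceProtocol", "bMaxPacketSize0",
    "idVendor", "idProduct", "bcdDevice",
    "iManufacturer", "iProduct", "iSerialNumber", "bNumConfigurations",
)

def parse_device_descriptor(raw: bytes) -> dict:
    """Decode the 18-byte USB device descriptor into named fields."""
    if len(raw) < DEVICE_DESCRIPTOR.size:
        raise ValueError("descriptor too short")
    return dict(zip(FIELDS, DEVICE_DESCRIPTOR.unpack_from(raw)))

# Illustrative descriptor bytes: USB 2.0 device, 64-byte EP0,
# vendor:product 1d50:6089, one configuration.
raw = bytes.fromhex("12010002000000" "40" "501d8960" "0201" "010203" "01")
desc = parse_device_descriptor(raw)
print(f"{desc['idVendor']:04x}:{desc['idProduct']:04x}")  # -> 1d50:6089
```

A USB analyzer does this same kind of structured decoding, field by field, for every descriptor and packet it captures on the wire.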
I wouldn't want to create that board.
I mean, if I wanted to use that board, I would not first want to build it.
I would just want to use it. So I understand why people would buy the hardware instead of making it. And then that pays partially for the software as well as the hardware. Cool. I can understand that.
Yeah. And hopefully another part of my job, since I've come onto the company, is creating interesting types of swag, which hopefully we will give some away at conferences, but maybe some people might be interested in buying. So hopefully I will get all of that up and running soon, and people will have an option to support us by buying cool things that aren't just hardware.
Well, as long as you're going to say that, I should point out that we have new swag and new merch in our Zazzle store.
I did this talk about map files. And now somebody actually printed it as a poster and sent me a picture of it on their wall, and somebody asked for mouse pads, and so I went ahead and made mouse pads and new mugs.
People wanting to buy this stuff is really weird.
I thought you had to give this away.
I didn't realize people would buy it.
It's a great opportunity to support the things you love.
Yeah, it is. Even if you're not trying to make a lot of money off of it,
it's also a way, as you said, with stickers, somebody will come in and say, what's that?
And if you've got a neat mug or an embedded sticker or poster, I can totally see it.
It's kind of cool.
Well, I think it's about time to get back to our weekend.
Do you have any thoughts you'd like to leave us with?
On the topic of open source, contribute.
I've been doing so much for the Great Scott Gadgets repositories
and I would love to see people open more issues, open more pull requests, or even reach out to me if you want to talk to me about anything Great Scott Gadgets.
I am always here and happy to hear from new people.
Our guest has been Straithe, Brittany Postnikoff. You can find her at straithe.com. That is S-T-R-A-I-T-H-E dot com. And of course, there'll be a link in the show notes. Thanks, Straithe.
Thank you.
Thank you to Christopher for producing and co-hosting. And thank you for listening. You can always contact us at show@embedded.fm or hit the contact link on embedded.fm.
And now a quote to leave you with. This is from William Gibson: "Time moves in one direction, memory another. We are that strange species that constructs artifacts intended to counter the natural flow of forgetting."