Behind The Tech with Kevin Scott - danah boyd: Researcher, activist, tech scholar
Episode Date: October 24, 2019. danah is a partner researcher at Microsoft Research, and founder of Data & Society. Her work examines where technology and society intersect. Kevin and danah discuss the dangers of a "move fast and break things" culture. Today, her research focuses on reducing weaknesses in sociotechnical systems.
Transcript
The only reason I applied was that this boy in my class told me that it was a program for men, it was not for girls.
And I was like, I'm going to go.
It was very simple, and that kept happening, where when people told me I didn't belong, I just became determined to stay.
Hi, everyone. Welcome to Behind the Tech. I'm your host, Kevin Scott, Chief Technology Officer
for Microsoft. In this podcast, we're going to get behind the tech. We'll talk with some of the
people who've made our modern tech world possible and understand what motivated them to create what they did. So join me to maybe learn a little bit
about the history of computing and get a few behind-the-scenes insights into what's happening
today. Stick around. Hello, and welcome back to Behind the Tech. I'm Christina Warren,
Senior Cloud Advocate at Microsoft. And I'm Kevin Scott.
Today, our guest is danah boyd,
Partner Researcher at Microsoft
and Founder and President of Data & Society.
You know, I'm so excited about this conversation.
I have been a fan of danah's and I've looked up to her,
I think I was in high school when I started reading her blog.
She's amazing and watching her career
and seeing all the amazing things that she's written,
things that she's done.
I really can't wait to hear what you two talk about.
Yeah, she really is one of my favorite people in the world.
She's just genuinely a fantastic human being.
She's so smart.
She's such a good computer scientist, and she chose to study the intersection of very difficult, very complicated high technology and society, and how those technologies both influence and impact society, and how society should be more sort of forcefully informing the things that we're doing in tech.
So it's just sort of fantastic to have someone like danah thinking these thoughts and driving these conversations.
And I'm really looking forward to chatting with her today.
Well, let's get to it.
All right. Let's chat with danah.
So our guest today is danah boyd.
danah is a tech scholar and researcher who looks at the intersection of people, social practices, and technology.
She's a partner researcher at Microsoft and founder and president of Data & Society, a nonprofit, NYC-based think tank.
She's also a visiting professor at New York University's Interactive Telecommunications Program.
She's one of my favorite people.
Welcome to the show, danah.
Thanks for having me.
So I would love – you have such an interesting job and you've had such an interesting career.
I really would love to understand a little bit about how you got started.
Like were you a techie kid? Well, so it's a sort of funny path on how I came into it, which is that I was a geeky kid, by which I mean I did a lot of math.
And I didn't really fit in because I was geeky. I grew up in Pennsylvania, you know, outside of major cities. And I, you know, single mom, you know,
working multiple jobs, trying to make ends meet. And my brother, who was born very early, had
visual issues. And so one of the doctors at one point said, you need to work on eye-hand
coordination. There's these things called video games. And so she bought him a video game player,
and I know there were Nintendos.
I don't know if that was the first one.
And so we would play all of these games, and it was fun.
And my brother really got into it and wanted to know how it worked.
And so he became obsessed with reverse engineering all of these different systems.
He ended up building his own computer,
and I pretty much ignored him, as a big sister does.
And then he started using the phone line,
which was just an unforgivable sin.
And it was making beepy sounds, and I didn't understand the beepy sounds.
And one day I marched in on what he was doing.
I was like, what is this?
And he's like, I'm talking to people.
I'm like, what?
That is crazy.
And so he showed me Usenet.
And, as they say, the rest is history.
So I spent my, you know, high school years on various fora, on Usenet, building my first website as an homage to Ani DiFranco, with all of her lyrics.
This was definitely the days of lyric sites.
And I went to college.
Well, I guess first I went to this amazing program
called Pennsylvania Governor's School,
which took smart kids from around Pennsylvania
and sent them to Carnegie Mellon for the summer.
Oh, wow.
And I was way over my head
because I did not have the education that most of those folks had.
And I felt very unqualified to be there.
But I learned a lot.
And it was also where I was exposed to what elite universities could look like.
So what made you persist there? Because I think it's not actually an uncommon thing for folks to have their first exposure to, like, very high-level computer science or mathematics or science education seem really intimidating and overwhelming. And, like, your first assumption is, oh, I am the unusual thing in this situation. And, like, some people give up because it's hard and intimidating. And some people say, nope, I'm going to get on top of this. Like, what made you persist?
So for better or worse, I always responded to being told I couldn't do something by determination
that I was going to prove you wrong. This got me kicked out of elementary school in the fourth grade, where I staged a protest against my teacher's inability to teach and made everybody signs to have them walk in circles.
That did not go over well. And so even when I went to governor's school, you know, the only reason I
applied was that this boy in my class told me that it was a program for men.
It was not for girls.
And I was like, I'm going to go.
It was very simple.
And that kept happening where when people told me I didn't belong,
I just became determined to stay.
And that fire and that fight has done me well, but it's also costly.
And I think that tension to me is where things get difficult. I mean, my freshman year of college,
I went to college. Where'd you go? Brown University, which I knew nothing about except
that they didn't have grades, which is kind of startling to think about at this point in my life. And I was assigned on the first day a supervisor, somebody to mentor me.
And I got this piece of paper, and it said Andy van Dam, and it gave his office.
I was like, all right, I don't know who you are.
And you had no idea who Andy van Dam was?
I had no idea who he was.
And so I marched into his office.
The door was open, so I just walked in.
And he looked up from his desk and goes, who the hell are you?
And I'm like, it says I'm your student, and I need advice.
And he's like, why are you in my office?
I was like, uh, hello.
And from that point on, I mean, Andy took me under his wing.
And Andy was such a key mentor.
And in that way where, you know, it was obviously a different era. My way of speaking was very much street language. It was very much working-class street language. And so there was a period of time where he decided he
was going to teach me to speak like an adult. And he would just hit me upside the head every time I
said something wholly inappropriate. And I was like, and again, it looks a little different, you know, saying it now, but it was such love and such care.
And my freshman year in college, a group of people, I assumed to be a group, accessed a computer. And that computer was a Unix machine, and /dev/kmem, which is basically the piece that determines which user is which, was world-writable. And so somebody sat and turned themselves into the admin position, and then went into my account and took all of my emails, all my correspondences,
which included a ton of conversations about grappling with sexual abuse,
grappling with a mess that I was in with my family, all this stuff.
And they posted it to an anonymous server.
Dear Lord.
This is 1997.
And, like, why?
Other than they were assholes?
That year there was a whole group of us who were women, and we were regularly told we didn't belong.
And it was, I mean, oh, was that hard.
And I was wrecked.
And, you know, Andy had my back.
And Andy was like, you know, let's investigate and prosecute. And what I learned was that
it was, again, laws were different at the time. Every email stolen was equivalent to a piece of physical mail stolen. So it was 30 years. And I was like, no matter how horrible this is, that's not the
punitive result that I want from this. And so I, you know, spent a lot of time trying to figure out how I would position myself within computer science to be like, I'm staying.
I'm not going away.
But it was a hard slog.
And, I mean, Kevin, the number of stories I can tell you of people trying to get me out, right? Like, you know, I remember applying for a job at Silicon Graphics, SGI, and
I remember this interview with this guy who was like, oh, I thought they managed to get you out
by now. And I was like, what? And, like, this was an alumnus of Brown. And I remember, you know,
applying for a job at Disney when I was at SIGGRAPH
and walking up to this recruiter and saying, you know what, I want an internship. And he's like,
well, we don't have internships for artists. And I'm like, well, I'm a computer scientist.
And they're like, but you're a girl. Right? And it was like that constant story. And I was,
you know, and I just, I dug my feet in and I became determined so much so, to be honest, that I stopped realizing what I liked about computer science and what I didn't.
And I was just so determined to do computer science because people told me I couldn't that I wasn't willing to look at the broader field, if you will.
I mean, I graduated Brown with very few classes that weren't computer science or math, because you can at Brown.
And it took me many more years to be like, no, actually what I love about computing is not just doing software development.
I like thinking about how these systems are built coherently, how they fit into broader social issues.
So it took me a long time, and it was really Andy that made me go back.
I had dropped out of my first Ph.D. program, so I went back,
and Andy introduced me to basically work with anthropologists.
And I was like, I mean, I knew nothing about anthropology.
I was like, what? How does this relate at all?
And it was just this moment of like, oh, studying peoples and cultures and practices could give me a different way into it all.
Interesting.
And, you know, it's – I think it's – you certainly had a really interesting career because you pushed yourself or were pushed in this particular direction. You know, if anything, I think the world needs a lot more folks like you
who are looking at computing and the context surrounding it
and, like, what space it occupies in society
and, like, what all of this sort of complicated set of entanglements we have now
between, like, this thing computing that I think we still think a lot as if it's this like monolith that
stands on its own versus like this enormous impact that it has in the world.
Well, I'm really grateful that I can be a hybrid in this because that computer science degree has
done me so many levels of good. Not the least of it is it provided a form of security
that I didn't have growing up,
and I couldn't even imagine this moment
where I could take risks because I had in my head,
I can always get a job.
And that's one of the things with the software industry
in particular growing was I can always get a job.
Now, of course, the idea of me doing software development
right now is pretty laughable,
but it provided a frame that helped me actually take
risk. And it also allows me these moments in my career where I keep coming back to it. I'm working
on this project right now with the Census Bureau, the U.S. Census Bureau. And part of it is that
they're going to implement the largest instantiation of differential privacy that we have ever seen. And it is so powerful, and they're doing it for all the right reasons.
And it is complicated and challenging.
And the number of people who know what differential privacy is, you know, can be counted on my hands and toes.
So for the listeners, what is differential privacy?
So if you think about the census data, they provide tabulations, these files that tell you how many people are in a block and different things about race or age, sex.
And they give it to you in a way that they don't want information to be uniquely identified.
It's not meant to be personally identifiable.
It's meant to provide statistical information. The difficulty is that computing has
advanced, and it is not hard right now with the amount of data that an organization like the
Census Bureau puts out to reconstruct individual entries out of these tabulated forms. So in other
words, you can take all the census data, and you can work out, you know, who the individuals are.
And with that, you can match
that against commercial data. And in matching it against commercial data, you can re-identify people.
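(To make the reconstruction attack concrete, here is a minimal sketch in Python. The block, the published statistics, and the age range are all hypothetical, invented for illustration; real attacks solve a huge constraint-satisfaction problem over millions of published tabulations rather than brute-forcing, but the principle is the same.)

```python
from itertools import product

# Hypothetical census block: 3 adult residents, each described by (age, sex).
# Only aggregate statistics are published -- never the individual rows.
published = {
    "count": 3,
    "mean_age": 30.0,
    "median_age": 30,
    "min_age": 24,
    "max_age": 36,
    "num_female": 1,
}

ages = range(18, 66)   # assumed adult age range, kept small for the sketch
sexes = ("F", "M")

# Brute-force every possible block and keep those matching all the statistics.
matches = set()
for people in product(product(ages, sexes), repeat=published["count"]):
    a = sorted(p[0] for p in people)
    if (a[0] == published["min_age"] and a[-1] == published["max_age"]
            and a[1] == published["median_age"]
            and sum(a) / len(a) == published["mean_age"]
            and sum(p[1] == "F" for p in people) == published["num_female"]):
        matches.add(tuple(sorted(people)))

print(f"{len(matches)} candidate blocks fit the published statistics:")
for block in sorted(matches):
    print(block)
# Only 3 candidates survive, and the ages (24, 30, 36) are fully determined.
# Matching those reconstructed records against commercial data (say, a
# marketing file with age and address) is the re-identification step
# described above.
```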
Now, the key to the census is confidentiality. The public needs to be confident that their data is
not going to be abused. And there are horrible points in history where it was abused. And we've
worked really hard to shore up those laws. Well, the difficulty is that when you break a technical system through a technical means,
you can build laws around it, but they're not going to necessarily solve the most important
thing, which is that we have to make certain that we can, you know, say to the public,
this data is confidential. So, you know, over a decade ago, the Census Bureau began
looking at differential
privacy. And why differential privacy is important and what it means is that it's an attempt to not allow that reconstruction. So, in order to do that, it takes tabulated data and it inserts noise.
And that noise is not just, you know, arbitrary. That noise is very much mathematically dictated
based on a set of priorities and values.
So how much confidentiality do you want to guarantee? In there, it's referred to as privacy.
How much accuracy is important within the system, and how do you balance between these?
And where can you allocate that? And so what differential privacy is, it's a mathematical definition of privacy that looks at those values and assesses whether or not an implementation of noise
meets these standards so that you can't allow for reconstruction.
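(As a hedged illustration of what mathematically dictated noise means, here is the textbook Laplace mechanism for a counting query in Python. This is not the Census Bureau's production system, just the classic construction; the epsilon values and the example count are invented for the sketch.)

```python
import numpy as np

rng = np.random.default_rng(0)

def dp_count(true_count: int, epsilon: float) -> float:
    """Release a count with epsilon-differential privacy (Laplace mechanism).

    A counting query has sensitivity 1: adding or removing one person changes
    it by at most 1. Adding Laplace noise with scale 1/epsilon guarantees
    that for any two datasets differing in one person, the probability of any
    output shifts by a factor of at most exp(epsilon).
    """
    return true_count + rng.laplace(loc=0.0, scale=1.0 / epsilon)

# Epsilon is exactly the values knob described above: smaller epsilon means
# a stronger confidentiality guarantee but noisier published data.
true_block_population = 37
for eps in (0.1, 1.0, 10.0):
    print(f"epsilon={eps}: published count ~ {dp_count(true_block_population, eps):.1f}")
```

The allocation question is then a policy question: there is a total privacy-loss budget, and every additional statistic you publish spends some of it.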
So, how do we have, this is like one of the things that I'm sort of obsessed with right now,
is figuring out how we can have informed, transparent, public debate about
these sorts of things when you have technical concepts like differential privacy that are
very complicated, require high levels of expertise to implement, but not necessarily to talk about.
But, you know, as you mentioned,
there are just a handful of people in the world who know what differential privacy is,
how it can be applied, how to reason about it.
So how do we bridge that gap?
Because it's getting more and more important right now.
And we could probably pick a dozen different topics
in the next 60 seconds that have that quality.
Deepfakes are one.
It's just crazy how this complicated technology is impacting our civic life
and how difficult it is at the moment to have informed public debate about these things.
I think what it comes to is that whether the tech industry likes it or not, we have gotten into the business of governing.
We're governing information.
We're governing communication.
We're governing how and where people interact.
And that's terrifying because we didn't build these systems with that in mind.
We built the systems imagining we could connect people,
imagining that we could use data for good.
We were quite naive.
And that dynamic of governance that we're facing
is where this question of how do you have a public debate comes in.
So the difficulty
with something like differential privacy is it's a governance mechanism. It's trying to take two
competing values, right, the value of confidentiality or privacy and the values of public access to data
and trying to contend with that. And who does make that decision, right? And that is not clear cut.
That is not clear cut even if you want to just allocate it to the U.S. government.
So what happens is this moment of like how do we reckon with accountability here?
And I feel as though we're in this, you know, I keep thinking of it as like a great reckoning.
We're in this weird moment in time where we're waking up to the power that we've
built through these technologies. And we're mostly in the, like, oh, shit stage. We haven't really
gotten beyond that. But we're trying to now say, okay, what does governance look like? Does it
mean reverting to nation-state structures? Those are modern, you know, inventions. They're not
long-standing. And they were designed with very specific ideas of spatiality and economics at play. Does it mean,
you know, building a whole new framework that, you know, is built on a different set of values?
Probably, but defined by whom, right? And what we're seeing, of course, is a scramble for power
to do that. And that's unsurprising. That always occurs. But then, of course, what's the relationship between a governance structure of something like,
you know, technology and nation-state governance? Because if it's built separately, we have a whole
different set of challenges that we're going to face. So this is the mess I think that we're in.
And that's where, for me, when I go back to thinking about how do you have the public debate, the question is to what end, right? And so I don't
think that the public needs to debate differential privacy to debate differential privacy. That
doesn't really matter. What they need to be in conversation with is, you know, what does
confidentiality or privacy mean today? Who decides under what conditions?
What protects people?
I think they need to be in conversation about what does it mean to have data for decision-making?
Who should have the power to have access to that data?
And what decisions should you be making, right?
And where can we stop and say, we do not have enough information, so you cannot implement yet?
Right?
Like a classic example for me is, you know, criminal justice.
A criminal justice system is rooted in the, you know, the first sin of this country. It is racism and slavery through and through. The data is so flawed that to think that we can do analytics on top of that data without contending with the problems of that data is painfully naive.
And so that moment of just like, well, we're going to clean up the racism problem, the
longstanding inequities problem by using analytics, I'm like, this whole system is corrupt.
And if we get into that business without contending with those power dynamics, we just become part of the problem.
And that's why for me, like, I look at all of this and I say, we have a lot of challenges in front of us.
How do we have the higher order conversations?
And then who needs to be involved in the technical details?
And then what kinds of accountability
do those people have?
I do feel like we, and I've written a book largely because of this, that there is some
baseline level of conceptual understanding of the technology that I think everybody has
to understand.
You know, we decided at the beginning of the second industrial revolution, at the beginning of the 20th century, that everybody needed to be literate and everyone needed to be able
to do, you know, arithmetic up to some particular level of proficiency.
And I think there's a new set of technical proficiencies
that everyone needs to be comfortable with
in order to just be a citizen of the modern world.
Absolutely.
And I believe some forms of technical computational literacy
are absolutely essential.
But to assume that everybody's going to get
at the level of a best-selling author is also naive, right?
So the key for me is how do we give people the information they need. And, you know, we're at a point in the United States where
we think that, you know, the key to retirement is everybody controlling their own financial futures.
You know, there are certain people who can actually, you know, have more information,
more power, more knowledge, more skill to be able to make bets around finances than others.
So when people don't have those skills, we're going to leave them out to dry? Like, why is that the logic?
Or if you don't know all the medical details about your cancer, you can't get a meaningful second opinion?
That's terrifying.
So part of it is we need that generalized literacy so that we can have sophisticated conversations.
And we need expertise.
And that's the interesting dance. We're, you know, in a moment where we're obsessed with individuals, and we're putting a lot of burden on individuals to know all the things.
And that's, in my opinion, unfair.
Yeah.
And I know even in the tech industry, like if you look at what you contend with on a daily basis as a technical person, like what I try to do in my job, like there is no such thing as being able to know everything.
Like it's just, it's preposterous.
Right.
Like, even if you narrow down to something
and say, like, oh, AI machine learning,
like, it's just crazy.
The big deep learning conference,
NeurIPS, just had its program decided.
And I think, you know, they had something,
I'll get the number wrong, but it's something on the order of 2,000 papers were submitted to this thing.
And they're all – the submissions are all high-quality technical papers.
20 or 21 percent of them got accepted. And, like, there's not even a single machine learning practitioner who's going to read that entire program and understand all of those papers.
And that's just a tiny fraction of one year's worth of the collective research output out at the frontier of machine learning.
Right. Like, we all have to, you know, rely on each other to a certain extent for expertise.
And, like, the, you know, the trick, I think, in, you know, modern civil society is figuring out how we can really depend on one another so that we can each rely on that expertise.
And, like, everything sort of adds up to, you know, something that at least is the sum of its parts.
Well, it's also where I, you know, I'm a firm believer in augmentation.
So, you know, take, for example, the parallel of what's happening in the medical industry.
Medical research is advancing far faster than any doctor could possibly read, especially
if they're actually a practitioner.
So how do we give them the tools to be able to, you know, consume all of that information,
augment their knowledge, search
effectively, et cetera.
That's a power of technology.
So you want two things there.
You want the ability for the networks of doctors to be able to riff with each other, bounce
off of each other, understand differences, and you want their augmentation.
And so I'm living this.
I'm on the board of an organization called Crisis Text Line.
And Crisis Text Line is a service that allows people to text in via their phones when they're in a crisis and they need support.
And they communicate with trained counselors who are mostly volunteer who help them through, right, and, you know, get them support in different ways.
You know, when we're dealing with, you know, an act of suicide, make certain that
they are safe. And what's powerful about this work is because it's all in text, it leaves traces.
And so we're having this crazy conversation because what, you know, mental health institution
has, you know, a million conversations. What single, you know, psychologist has had a million
conversations that they can draw on to make decisions? So we're trying to figure out how do
we augment all those counselors? First, we have them coordinated within, you know, an environment
where they can talk. So they can say, I'm seeing this. This is the conversation I'm having. Does
anybody have any ideas of how to move forward? And they're able to pull on the corpus of data to try to pull out information. What could be a next move? And again, it's not
about decision-making. It's not about automating interventions. It's about empowering human
decision-makers, human interactants who can actually go out there and be like, I am going
to help you with as many tools in my pocket as I can possibly pull on right now to make a
difference. But, you know, again, I think that's a really good illustration of this quandary we've
got with data where, like, I can imagine all sorts of things where you use the data in the right way
and you're literally going to save lives and, like, have this huge positive impact in people's lives.
If it gets used the wrong way, like, you know, if it gets disclosed, like, it compromises the trust in the system and then people stop using the system.
Like, if it, you know, you don't want people making employment decisions based on this
or, like, insurance, access to insurance decisions.
And so, you know, I think it's one of those places where the burden is just incredibly
high right now on getting everything right about how you handle this data.
And it's tough.
Well, and that's where, you know, the way that I've been thinking about it is to imagine
that the data that we are all consuming when we're trying to build these systems
is a data infrastructure layer, right?
We can call it data lakes, we can call it whatever, but it's infrastructure.
And the thing we know about infrastructure is that infrastructure grows brittle over time
if it's not maintained.
It has vulnerabilities in it by its very nature.
And we need to treat it with that level of respect where we have to acknowledge that. The other thing is that once the decision-making or the systems become powerful on
top of that data infrastructure, as with everything else, they become vulnerable to different kinds of
exploits, right? They become targeted, which means that in my mind, we have to start bringing in more
of a security framework and not just about access to data.
And this is where I think when we talk about these issues with regard to privacy,
we often think that if we just limit the access to the data, no harm will happen.
Well, that data affects those models.
That data has all of these different ripple effects, and it affects decision-making.
And so one of the things that I am deeply passionate about is how do we reimagine a socio-technical security?
How do we think about how it will be exploited?
And I started out my career in graphics, and I still remember the first quality assurance engineer that I worked with.
And what was so amazing about her was that she could think through all of the ways that my code could be destroyed.
And it was an amazing collaboration because I was trying to build the thing,
and she was trying to tear it down. And together, we were able to build stronger systems.
Yep.
And when we built a world of the perpetual beta, we lost quality assurance.
And so whether we're talking about safety or security, we need a narrative of quality assurance that comes back and really brings into the fold people who are trying to destroy the system in order to make it better.
And that kind of – everything from threat modeling to red teaming to quality assurance to different forms of bug bounties – they need to go beyond what we normally talk about in cyber into a much richer environment. Because when people can exploit a search engine with just certain kinds of strategic queries, not to mention SEO, we're not actually thinking right if we think we can just consume clean data and spit out answers.
We need to think that through. Yeah. And, you know, it's an interesting set of problems because I think you nailed it on the head when you said that it's literally
impossible. It's probably, you know, some sort of undecidability thing even for us to be able to
like completely think of every possible bad use that could come of a software system
or like a set of data that we are, that we're gathering. And I've operated a bunch of large
scale systems in the past. And like the thing that you try to do with these things to make them
robust is you do as much as you possibly can, like using every
trick in the book and you sort of come at things from a gazillion different perspectives to try to
stress test it before you start exposing it to the real world. And then once it's exposed to
the real world, like you sort of continue measuring all of these signals that are sort
of indicative of bad things happening or about to happen.
And you try to build these systems in a way where they're as agile as humanly possible,
where as soon as something bad happens, like you can very quickly get the problem sorted out.
And I sort of feel like we need to, like we know, like if you're talking about a web service,
like a thing that renders a web page or like a screen on a mobile application to a billion users, like we know pretty well how to make sure that that thing
is reliable enough and the user experience renders in the proper way. And we've had,
what, 30 years, 20 good solid years of figuring out what that bag of tricks is for making those things robust.
I sort of feel like we're in the early days of figuring out what our bag of tricks are for making
the sort of data interactions, like data handling stuff as robust as it needs to be.
Well, I think robust is the right language because so much of what I hear,
especially, you know, in Washington, D.C.,
tech needs to fix it.
It's like, there isn't a fix.
There is resilience.
There is process that can allow us
to respond and iterate and evolve.
And that's why I, you know,
often use things like security as a framework
because security doesn't assume anything is ever secure.
It's like the parallel.
I grew up and there was this weird notion of safe sex.
And it was always like, what are you talking about?
There is safer sex, but there is no safe sex.
And so we have to move away from this like we can fix it, we can make it so that nothing bad will happen into a moment of
saying, no, we need to take as many steps as possible to limit the possibility of harm.
And then we need to grapple with what happens as it unfolds. And I worry because in a culture of
move fast and break things, we don't allow for the possibility that the breaking sometimes is so destructive that we need to step back.
Correct.
And, you know, there's a lot in the move fast where it's like, actually, sometimes slow is better.
Slow food is better.
Right?
There's a moment where you're like, come on, let's make things that are healthier.
Let's make things that are richer that people have more enjoyable experiences with. Yeah. And so, you know, I don't want just the fast system because it's, you know, it's what somebody built yesterday that they thought was cool if it's going to tear down democracy.
Yeah.
And that's what I worry about is, like, you know, how do we get those processes of resilience into every stage?
And, you know, I think about those days of CD-ROMs, right?
Your next possible fix was the next rev of it.
You know, anything that went wrong, the best you could do was recall.
We're no longer in that stage.
And so that means that the responsibility to be responsive is so much greater.
And I will say, you know, as an industry, I think we are responsible for figuring out ways of building resilience.
And I think that the ways that we're expected to have constant economic growth at whatever cost
is just fundamentally dangerous. And so I hope that we'll have that moment where we step back
and say, you know what? It is worth it to spend resources to actually make certain that we don't mess this up.
Yeah, I completely agree with you.
And I think there are ways where you can get a really good, healthy balance where the tradeoffs between the two, I think, are less, you know, like progress versus resilience, robustness, safety,
like all of the things that we really care about.
Like you can strike a good balance between the two where you're much better off than pushing unilaterally in one direction or the other.
Absolutely.
And I think that the other thing that we have to hold on to is the idea that it's not just
the decision-making within one institution.
It's actually how it's all connected, right?
So if I even look at the most troubling exploits going on right now, they're not in one institution.
They're sitting between.
So it's the idea that people are manipulating Wikipedia in order to target search engines,
right?
Like, you're like, okay, who's responsible there?
Yes, Google and Bing have serious responsibilities there.
Yes, Wikipedia has responsibilities, but where else?
And where is it when that exploit also leverages, say, the news media, right, a totally different
sector?
And so, you know, when I look at incidents that we keep seeing where, you know, media manipulators and extremists target the news media in order to target, you know, search engines, making these two-step moves.
Like, you know, this has become this moment of like, oh, it's news media's fault.
It's tech industry's fault.
It's like, guys, we have a system here.
And so, like, that other big challenge for me is how do you build a resilient system with unequal resources, right?
Because the news media has a very different amount of resources than the tech industry does.
At the same time, it has a different kind of power in this.
And that's what concerns me is that these challenges for me with progress, they have to happen in this collective fashion that we're also not really well structured to do. And that concerns me because those transitions can be so costly.
Yeah. Well, and we've sort of understood this for many, many years with security. It's
unusual to like have like an effective security attack be like a single monolithic target.
It's usually a much more complicated thing that people do that sort of, like you said,
push on the cracks between the systems or at the interfaces between them.
And they use one slightly vulnerable thing to attack another slightly vulnerable thing.
And that adds up to the security incident.
And human imperfections are part of that equation the whole way through.
Oh, totally.
And that's also—
Yeah, usually they're the biggest part of the problem.
And that's where, when we deal with these large sociotechnical systems, we deal with
human imperfections at every stage, right?
Human imperfections in decision-making and logic,
say, for example, the news media.
Human failures with regard to the public
who might be consuming content.
Human failures even in the design building of systems, right?
Like, what code is bug-free?
All of these layers.
And that's where the interconnected nature of our society
creates a form of fragility that, you know, I think we're
all sort of feeling and shaking about right now because so many of our institutions, so many of
our systems are just barely working. It's like, you know, it's like duct tape nation, right? And
what does it mean that we're all kind of hoping that the duct tape won't fall off before we get through this.
Well, and I think, too, the thing that I personally am trying to push for and that I sort of hope for,
both as a person who is in tech, who I'm hopeful about the future of a world with even more tech than we've got right now. Like, I'm, like, very appreciative of what tech has done for me and my family
and for, you know, like, whenever I do the exercise of sort of imagining, like,
what I would have done as a kid in today's tech world.
It, like, looks so much better than, like, what I had as, like, a poor kid growing up in rural central Virginia.
But, like, what I want, you know, both as, like, a tech person and as a citizen of the world is given, like, the multidimensional, multi-party nature of the problem that we've got right now. I
really hope that we can get
away from these sort of reductive
arguments about
this one thing is the
problem. And I think we need
a lot more dialogue and collaboration than we're having right now to
get to a better state.
When I think about this in terms of,
this is why I keep going back to governance,
it's like, what is the KPI?
What's the key performance indicator, right?
What is it that we're all working towards?
And when it is too narrow of a KPI,
you get lost and you don't think about
the unintended consequences.
When the externalities are built into your KPI
so that you're measuring
the right thing, then we can start to talk about forms of moving in a positive direction.
Like you, tech shaped every aspect of my life. And I firmly believe that it can be a positive
force in this world in many, many ways. I also believe it's a tool that can be weaponized. And so the key for
me is how to resist that weaponization as much as possible while creating the right spaces and
structures for, you know, those possibilities to get realized without getting twisted and perverted.
Yep.
And so, you know, for me, a lot of it requires stepping back and being like, what values are we working towards?
You know, what is that core commitment?
Do we want the world to be more connected?
What does that mean?
What is it we're trying to hope for?
Do we want to address disease?
For whom?
And what does that mean?
And who will benefit?
And how do we think about that power? And I think this is the weird transformation in both of our lives,
which is that the tech industry when we were kids, like, it was geeky,
and it was not nearly as powerful as it is now.
And many of us of, you know, our cohort and older still haven't fully gotten their head around
how much power this sector and the tools we've built now have.
And the idea that we have this much power means that it can be perverted so much easier.
And so that's one of the reasons why I think we all who love tech have a responsibility
to figure out how to stop that and how to drive towards agendas that are inclusive, that are really actually helping, you know, rise up, you know, the next young Kevins.
Like, that are actually giving the opportunities for so many people out there rather than becoming a new form of exclusionary status quo.
Yep.
So, let's switch gears just a little bit. What, if anything, are you seeing right now in the tech world or at sort of the intersection of tech and society that's interesting and hopeful?
So right now, I have to admit, what I'm seeing mostly is a reckoning.
But I think it's cyclical.
That's temporal.
So it's not about the technologies itself.
It's more like, oh, boy, what did we do?
And I'm seeing people wake up and take that seriously.
And that gives me a lot of hope and a lot of excitement because I do believe that innovation comes out of reckoning.
That's a process.
So I don't feel we're at that moment where I'm like, oh, here's the new thing.
What I'm hopeful for is, and there's, like, small glimmers of it, is the various folks who are really starting to grapple with climate and tech and those intersections, both in the ability to understand how to reduce the cost to climate for our technology, but also the possibilities that we can model, understand, and innovate.
Because we have a big, heady challenge in front of us on that front.
But that's, like, those are the, like, glimmer stages as opposed to, like, here's where we
have tools.
There's so much opportunity there.
I mean, it's unbelievable.
Like, if you just look at, if you could co-optimize production and consumption of power, there probably are on the order of, like, one or two orders of magnitude of efficiency that we could drive, which would be unbelievable.
And then, you know, that's without sort of having the even bigger thoughts about like
what could you do with some of these big machine learning models to, like, design better systems that are, like, fundamentally more efficient in and of themselves.
Well, so here's an example of something that, you know, is a double-edged sword, that I have mixed, you know, feelings on.
We also have the ability to model what land will be arable.
Mm-hmm.
And we can really think about the future of agriculture,
the future of water supply.
Who controls that information?
Who controls the decision-making that happens from that information?
So that's that moment where I'm like, okay, we're getting there.
We actually have a decent understanding.
But if we're at a point where that material gets co-opted,
it gets controlled, then I'm deeply concerned.
So these are the contradictions I think we're sitting in the middle of.
Because if we can really understand, I mean, where did data analytics begin?
Farming.
If we can really understand what this will do to ag, we're going to be able to better
build resilience.
And that's those moments where I'm like, okay, this is not about just NOAA, the National Oceanic and Atmospheric Administration.
It's not just about NOAA being able to model, but it's also being able to give that information
publicly in a way where it doesn't get perverted for political purposes.
Right.
And that's a tricky thing right now.
Yeah.
And, you know, on the hopeful side of things, you know, what we've even seen at Microsoft with some of the stuff that's happening with this FarmBeats program that's happening at Microsoft Research is that you can take some of this data, so like the weather data,
weather forecasts, like all of the sort of historical information, like stuff that like
used to get embedded into a farmer's almanac, which was almost, you know, like a little bit
like astrology. But like there was real, you know, data and trending that people built into
these almanacs that help people decide like very prosaic things like when to put the seeds in the
ground. And like we know that if you apply technology to that process, to very simple
things like when to plant in a particular location given historical and predicted weather trends, that we can make
huge improvements in crop productivity.
Like, we see it in India where, you know, some of these very poor parts of India, like
when you put a little bit of technology in, like you can get double-digit percentage improvements.
And like, that is the difference between people starving and people getting fed.
Oh, absolutely.
And it's just great to see happening.
And the important thing about something like agriculture is it has to happen around the globe.
It has to happen.
It just has to.
And same with water resources.
Yep.
We need to understand and model out water resources because, I mean, just take the continent of Africa, right? There's so many places across that continent where things are possibly fragile if we don't
work out where that is or how to deal with it.
And so it's both the technology of desalination, which is wonderful,
but it's also the modeling to understand what the ripples are around that.
And there's so many ways you can, I mean, this is the thing where I really want people to get
like super excited about jumping in because for all of these things, like making better use of your water resources, like, there are hundreds and hundreds of ways.
Like, so, for instance, like, one of the ways that you can make more efficient use of water in agriculture is, like, all of the agricultural chemicals that we use, so pesticides and fertilizers and whatnot, are massively diluted
with water. So, like, the chemical concentration, like the active compound, is like a tiny part of,
like, the thing that gets sprayed over the crop, which means that you're wasting all of this water,
that the, you know, chemicals are going in the places where they're not needed. It's just this
hugely wasteful thing. And there's all sorts of interesting new technology where you can very
precisely deliver the chemicals to the crop without diluting them in water at all. So,
you're not wasting any water. You don't have any of this chemical runoff into the water supply.
It's just fantastic. And simple things like using some of the cool new stuff that we're seeing with computer vision where you can fuse classical sensor data like moisture meters with vision models where you can sort of infer soil moisture from pictures that you're taking from above the crops with drones or in places where drones are too expensive,
like the FarmBeats folks are literally tying like little cheap cameras to balloons.
And you have a human, like, walk a balloon over the crop, you know, tethered to a rope
because, you know, in some parts of the world, you can't afford a drone to fly over them.
And from that, like you can, if you know what your soil moisture is, like you know exactly how much to water so you don't have to worry about under or overwatering a crop,
which leads to, like, way more efficiency. So, it's just so damn cool what's possible.
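(A minimal sketch of the kind of fusion Kevin is describing, in Python. The numbers, variances, and threshold are invented for illustration, and real pipelines like FarmBeats are far more involved; this just shows the core idea of weighting each moisture estimate by how much you trust it.)

```python
import math

def fuse(est_a: float, var_a: float, est_b: float, var_b: float):
    """Inverse-variance (one-step Kalman-style) fusion of two noisy estimates."""
    w_a, w_b = 1.0 / var_a, 1.0 / var_b
    fused = (w_a * est_a + w_b * est_b) / (w_a + w_b)
    return fused, 1.0 / (w_a + w_b)   # fused estimate and its variance

# Illustrative inputs: a calibrated probe is accurate but covers one spot;
# the vision model (drone imagery, or a camera on a balloon) covers the
# whole field but is noisier.
probe_moisture, probe_var = 0.31, 0.0004    # volumetric water content
vision_moisture, vision_var = 0.26, 0.0050  # inferred from aerial imagery

moisture, var = fuse(probe_moisture, probe_var, vision_moisture, vision_var)
print(f"fused soil moisture: {moisture:.3f} (std ~ {math.sqrt(var):.3f})")

# Irrigation then becomes a simple per-zone threshold decision.
TARGET = 0.30
print("irrigate" if moisture < TARGET else "hold off")
```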
And that, I will say, is, like, that's also the technologist's mind, which is, like, you know,
I live in New York City. And one of the funny things about living in such a crazy urban
environment is to wander around and be like, I can see how this could become more efficient.
Ooh, and if we did this and this and this.
And that is that moment where you see the real hope and the real excitement, which is that we can actually do things that would solve problems, especially like nothing to me is sort of more interesting than seeing all those infrastructure layers.
And I think the question for me is how do we get not just the technology but all of the things that are surrounding the technology to make that happen?
Yeah.
And that's where we have to realize that those technologies are only as powerful as the political and social processes surrounding them.
You know, I can talk about how to make, you know, this building that I rent in more efficient,
but if I can't convince developers, if I can't convince the city
who is setting out the regulations to set these things in motion,
no amount of good technology can solve really valuable problems.
And that's where I think that that coordination becomes so critical,
which is the technologies in many ways, we're at a point where they're moving faster than the political and social structures to make them function well.
And that is why I think we need, even as we invest in having people build up technical skill, we need to invest in people building up the ability to think about the bridge because without that, you can't actually deploy at the levels to make a difference.
And that's one of the reasons, like, I'm firmly a believer that we need societal kinds of regulation,
and I'll use that abstractly rather than government,
so that we can actually advance the development of these things.
I think we all have very concrete roles that we can play in it.
But like the thing that I think we technology folks like have a special duty and obligation
to, and you inherently get this, like you've been doing this since the very beginning,
is like all of us every day should be asking, like, how is it that the thing that I'm doing
right now is going to positively accrue to the overall social good?
Like, if you can't answer that question in an affirmative way, then maybe you're doing the wrong damn thing.
Right. No, I agree.
And I think this is also where I'm a big believer in putting yourself in networks where this is in conversation.
It's like one of the things that really struck me back, especially in my old dev days: you can imagine the positive possibilities, but you actually need people around you who are thinking about how to implement it, which is, like, everything from business to policy, et cetera.
You need people around you saying,
and what if this goes wrong?
You need to be doing this in networks,
in communities, and you need to be thinking with all of the different affected communities or the
people that you're trying to really engage and create possibilities because they need part of
that conversation. And I think, you know, one of the weirdest things right now as I'm, you know,
trying to do this exercise in coordination around differential privacy.
It's like the technology will get there, hopefully as fast as we need it to, but it will get there.
But we need that buy-in process.
We need people understanding it.
We need people really embracing and figuring out how to make it work.
Or we're going to end up in a weird moment where we have this beautiful object sitting on a shelf, and we're going to look back in 15 years and say, we had it.
Why didn't we put it out there?
And so that's where it's like as you're thinking about the goodness,
think not just about like the goodness, you know, of that,
but like how to actually build your teams and your communities
in ways that actually can make this really be part of it.
And I'll say one of the most
powerful things that I learned from an old mentor is that there is nothing more successful than
getting a community to think it's their own idea, right? And so this is one of those moments where
as an industry, we've gotten into a bad habit of telling people what the future should be
rather than inviting them to co-construct it with us. And that co-construction, I think, is what we need to make all of those beautiful things
that we can imagine in our minds become truly real.
Yep.
Totally, totally agree.
So, before we run out of time, for all of the little danah boyds running around right now thinking about what they're going to be when they grow up, and, like, for the moms and dads of those kids who are thinking about their future, like, what advice would you give to them?
So one of the things that I wrestle with is this weird pragmatism and idealism, right? Which is that, you know, we often talk about
going and learning technology because it's pragmatic, because it's like the way of getting
a job. And that actually doesn't get us there. Part of it for me is like switching is like,
what is the idealism? What's the world you want to build? And what are the building blocks you
need to get there? And, you know, I can certainly say that.
I mean, like, you know, if I was a kid again, I would be looking at bioinformatics and be like, ooh, right?
That to me is like an opportunity.
Or what are these things?
Because like you look at the puzzles you want to be a part of, the conversations, and you think about all of the pieces.
And that to me is what education is about.
Education is about giving yourself as many of those building blocks as possible and giving yourself the space to learn and love learning.
I would say for, you know, the parents out there of these little people, the biggest challenge as a parent is how to create the space for children to love learning, to be creative, to have fun. And we're in a moment of
high stress, high panic, you know, massive control. I still, you know, when I go out and see young
people, you know, I spent so many years talking to teenagers, the amount of anxiety and stress
and pressure on them, that doesn't help build the future. That helps make certain a bunch of people are normalized into a logic that is destructive.
So I think that as the parents out there, it's like help create those imaginations and create space for those kids to build their social networks and their opportunities so that they can flourish.
And that's that moment where you realize as a parent you are a foundation and you need to be the shoulders that your child jumps off of.
Awesome.
Well, I'm certainly glad that tech has folks like you thinking about all of these interesting challenges that we have right now.
So thanks very much for taking time to talk to us today.
Thanks for having me.
Awesome.
Well, we hope you enjoyed Kevin's interview with danah boyd.
Yeah, it was a spectacular conversation, as it always is with danah.
No, there were so many interesting things just sitting here, just listening to the two of you talk.
One of the things that struck me was something she was talking about with her background and how
she has this kind of tenacity where she was motivated growing up by being told, you can't do this. And, you know,
she basically decided, even though things happened to her at Brown, I'm staying. She insisted on
staying in those places. Wow. What great resolve and what a great thing to kind of take through
with you. You probably have that too. I know that I do. Oh, I definitely do. Yeah. I definitely do.
Like her, I think it's probably a core part of, I think you're probably the same way,
a core part of who I am.
Tell me I can't, I'll show you I will.
Yeah, for me, it's the finest flavor of motivation.
And it's not necessarily the greatest thing in every aspect of my life.
That's true.
But it's sort of done okay for my tech career.
Well, and I think that's kind of the story of, in a lot of ways, of a lot of tech innovators,
right?
Is it's all about doing things that you've been told are impossible or that can't be
done.
Yeah.
And look, I think one of the lessons that we can draw from some of these experiences,
and I was thinking of this as danah and I were having this conversation, is what you can do when you see one of these folks who is getting
a little bit of pushback is you can sort of do everything in your power to help encourage
them.
Because even for folks who are wired where like their first impulse is to say, oh, I'll
show you, it's still a hard thing.
It feels very lonely.
It feels risky.
It's like a really uncomfortable place that you put yourself in.
And like one of the things that we could do better at as a society is like helping each
other like when we're in those moments.
You bring up a really good point.
Having mentorship, supporting people,
both the act of finding a mentor,
but I think even more being mentors to others.
We've talked about this before.
I think that's really important.
And I think in the context of what,
you know, Dana's story and how she came up,
I think there's some proof that she had people in her corner, thankfully, who, in addition to her own tenacity, were really willing to stand up
and say, no, you've got this. Yeah, her anecdotes about Andy van Dam, and like for those in the audience who don't know who he is, Andy van Dam's like a very famous computer scientist. He's one of the central figures in the modernization of computer graphics. So like all of those Pixar
films that we all enjoy so much, and like it's just become this pervasive part of the way that we experience the world
is high-performance computer graphics.
Andy was a big part of making all of that happen.
My computer graphics textbook that I had in grad school was written by Andy.
And so the fact that she had someone like Andy who just decided to, you know, look at this, like, very willful young student and say, like, this is great.
I'm going to take her under my wing and I'm going to, like, help her be her best self.
Like, that is really a special and unusual thing, and like what we need more of in the world.
I couldn't agree more.
It was interesting.
Towards the end of your conversation, you were talking about some of the things that she finds interesting happening with technology right now.
And she made a comment that I thought was really, really smart.
She said that, you know, we're kind of in the middle of this reckoning right now where as a society, we're having to come to terms with what control we've let our tech take and what
our tech is doing. But the innovation comes out of reckoning. And she alluded to this, I guess,
especially when we get into areas of climate. Do you have any kind of thoughts on that? Because
I thought that was really profound. And I think that especially, you know, she pointed out like,
you know, data science, the background is agriculture. When we think about climate and when we think about the good things we could do with technology, it does strike me that maybe this would be an opportunity for something to come out of this reckoning we're having right now.
Yeah, I think this reckoning with the role that technology is playing in society
is actually a good thing on multiple dimensions. Like, it's a good thing because it means that,
like, technology is no longer, you know, this sort of appendage that we've attached to society. Like,
it's just sort of a reflection that it's deeply integral to our day-to-day lives.
And as a consequence, it is a reasonable and, like, thoroughly good thing that we're having
a more robust debate about technology's role.
And I think she's 100% right.
Like, every time that you get a new set of eyeballs onto something that is powerful,
like modern software, modern digital technology, or AI, machine learning, like all of these things
that we talk about on this show, that new perspective and like the scrutiny creates all
sorts of fantastic things. Like one of the things that I'm really hoping for is that the attention that we're placing
on all of this stuff right now results in us, like, being very, very thoughtful as both
the tech industry and as a society about what foundational pieces of technology that we
need to build in order to enable people to do all of these amazing things, like go make
our agriculture more productive so that we can feed more people and so that like our
food supply is more resilient in the face of climate change, so that we can go attack
things like climate change, so that we can go attack things like the cost and availability
of high-quality healthcare.
Like, I think all of these things are, like, as soon as we're paying all of this attention
to, like, both downsides, but, like, at the same time that you're seeing the downsides of technology, you're going to see the upsides as well.
There's an opportunity, right?
I mean—
There is opportunity.
And I thought that was the most interesting thing is that when we grapple with these questions,
and if you look at the history of not just Silicon Valley and not just technology, but,
you know, industry as a whole, there are always these pockets where when these questions are
being asked, that tends to be when really big things happen.
Yeah.
And again, you know, I said it to danah, and I will say it again.
I say it in my book.
I'm going to, like, repeat it 5 million times because I think it's really, really important.
Like we should all in the technology industry be asking ourselves every day, like how is it that the thing that I'm doing right now is going to create more good for society?
And like it can't be, you know, a trivial, convenient answer that you're giving to justify another important thing that you want to
do. It has to be a legitimate, like, oh, no, no, like, I know that this thing is going to, like,
produce a good that is bigger than me, bigger than my job, bigger than my company.
Definitely. I'm glad you keep focusing on that. Well, we are out of time for this show. Thank you to danah.
And as always, we'd love to hear from you about ideas for future guests or anything on your mind.
So email us at behindthetech at microsoft.com.
Thank you for listening.
Yeah, thanks.
See you next time.