ACM ByteCast - Denae Ford - Episode 12

Episode Date: February 24, 2021

In this episode of ByteCast, Rashmi Mohan hosts Denae Ford, a Senior Researcher at Microsoft Research in the Software Analysis and Intelligence Team (SAINTes) group and an Affiliate Assistant Professor in the Human Centered Design and Engineering Department at the University of Washington. Her research lies at the intersection of human-computer interaction and software engineering. In her work she identifies and dismantles cognitive and social barriers by designing mechanisms to support software developer participation in online socio-technical ecosystems. Ford is also a recipient of the National GEM Consortium Fellowship, National Science Foundation Graduate Research Fellowship, and Microsoft Research Ph.D. Fellowship. She is best known for her research on just-in-time mentorship as a mode to empower welcoming engagement in collaborative Q&A for online programming communities, including open-source software, and for work to empower marginalized software developers in online communities. In the interview, Ford relates how an undergraduate research project inspired her to pursue a PhD in computing. She describes her approach to designing various research studies, the process she used to identify challenges and barriers to engagement in communities such as Stack Overflow and GitHub, and how she and her collaborators went about building interventions. They also discuss how some of these interventions can be applied by industry. Ford also shares some future directions and developments in computing that most excite her—and the possibilities in making the field more equitable and inclusive.

Transcript
Starting point is 00:00:00 This is ACM ByteCast, a podcast series from the Association for Computing Machinery, the world's largest educational and scientific computing society. We talk to researchers, practitioners, and innovators who are at the intersection of computing research and practice. They share their experiences, the lessons they've learned, and their own visions for the future of computing. The world of software development is built on the foundation of open collaboration, peer learning, and communication. But in a world where half of those voices may be muted, how do we expect to reach the great heights of innovation that the field of computer science promises? Our next guest is a
Starting point is 00:00:45 human-computer interaction researcher who works on uncovering the barriers to inclusion and participation in the world of software engineering. Denae Ford is a senior researcher at the Software Analysis and Intelligence team at Microsoft Research. Denae, welcome to ACM ByteCast. Thank you for having me. And thank you for this amazing introduction. That was very well said. I appreciate it. Thank you. And it's all you.
Starting point is 00:01:10 And you have so much depth to your work that it was very easy for us to cobble together something that would represent the amazing work that you do. But I'd love to lead with a simple question that I ask all my guests, Dine. If you could please introduce yourself and talk about what you currently do, as well as give us some insight into what drew you into this field of work. For sure. So I'll introduce myself again. So I am Denae Ford Robinson. I published under the name Denae Ford. So that's why you will also see my publications under my pen name. I came from a computer science background, a software engineering background. My bachelor's, master's, and PhD are all in computer science. I also have a graduate minor
Starting point is 00:01:52 in cognitive science, which was really important for me to obtain or to kind of get a little deeper understanding of how people engage in communities and the intentions behind that and understand a little bit of cognitive theories that are at stake or are in progress here in these spaces. So a little bit about, I think you asked what got me into this work. As an undergraduate, I was also an undergraduate researcher in my university, and I was able to work on this really cool project called Java Tutor, which is an artificially intelligent tutory system with Dr. Christy Boyer and Dr. Joseph Grasgaard. And it was really exciting to see how important and relevant it was to how people ask questions, although it was an automated tutor. We were seeing how students ask questions and how we were able to perceive the best questions they asked, how they perceived
Starting point is 00:02:41 the responses they received. And it was really exciting to me to figure out, like, this is what science can be. You can study these questions that impact humans directly and how they use computing systems. That was really an amazing experience for me, which kind of thrust me into my PhD research, which was understanding how people engage in online programming communities. And not just engage, but meaningfully engage and find value in and feel welcomed and find belonging in. So a lot of my research focuses on, or my dissertation research focused a lot on Stack Overflow, which is one of the online programming communities that I've mostly focused on. And then GitHub, as I transitioned towards the end of my PhD.
Starting point is 00:03:23 Understanding how developers engage in online communities from Q&A to open source was really all a part of the software developer's life to be able to share knowledge. One of the things that I was reading about your work was around socio-technical ecosystems. I was wondering if you could help unpack that a little bit for us. What does that even mean? Yeah, so the term socio-technical ecosystems kind of originates from this description of Trish's description of how people engage socially and technically through communication, which is essential for professional work. So this term has basically been adapted to ecosystem as the sustainability of the community relies on reinforcement of consistent social and technical dialogue from group members. So what makes these communities, what I focus on, online programming community, which is a type of socio-technical ecosystem, is that many of the members and their actions and how they And when I say architecture, I mean, is in mechanisms of engagement and governance as it will help people felt comfortable to do and how
Starting point is 00:04:50 we can develop new mechanisms to encourage people to engage in meaningful interactions in online programming communities. Got it. What really makes you interested in this, Dini? I mean, why is this important to a software engineer? Why is that engagement important? Yeah. So for me, I really got interested in this whole area. I mentioned my undergraduate research experience, but as an undergraduate in my computer science courses, I found that being vulnerable to ask for help or figuring out the right ways to ask for help and where you feel comfortable seeking that information was so important. And even how you go down to like how you ask a question, because that can dictate the answer that you receive. It was really interesting to me about
Starting point is 00:05:34 how so many of the computer science students use and rely on Stack Overflow to read, but not much to like contribute. So actually, I really took that interest into my PhD program, where I told my advisor when I joined, I was like, I want to figure out, I did actually say, I do want to figure out people who, how people engage in online communities.
Starting point is 00:05:55 It was kind of broad. And I was like, people who look like me who engage. He was like, well, that's too broad too. So we narrowed it in on what are the challenges people, specifically women is what we started with, engage in on-stack overflow. And we identified a set of barriers and that actually kind of dictate the rest of my research moving forward, which is identifying the challenges people have in engaging in online programming communities. Step two, understanding what
Starting point is 00:06:22 mechanisms they're using to overcome them. And maybe they're not official mechanisms, but maybe these are things they're doing ad hocly to make these systems, ecosystems work best for them. And then step three, using all those learnings for those prior two steps to understand how we can build interventions on platform that can keep people engaged and make them feel welcomed and more comfortable asking questions and asking for help. That is such a valid point. I mean, I think I'm guilty of it myself, being more of a reader than a contributor. Sometimes it is you feel like you're not qualified enough. Sometimes
Starting point is 00:06:56 it's a little bit of intimidation because there are people there who seem to be more knowledgeable than you and you're afraid of maybe saying the wrong thing. But there is tremendous value in being able to participate and being able to give back as well. I mean, I know that from other interactions in my life, that the ability to give back builds confidence in your own talent and your skills. Let's sort of break down the three parts that you spoke about. The first part was really, what are the biggest barriers? So I don't know if you could give us some examples of what were the biggest barriers for people participating in these communities. And it sounds like there are specific groups of people that have a particular challenge in participation. Yeah, so I'll start off with step one. So the
Starting point is 00:07:39 reason we actually went in identifying the barriers of women who engage in these communities is because when we dive in deeper into the annual reports that Stack Overflow publishes externally, this was the group that had the least amount of representation. For context, a lot of software development full-time roles kind of report about 20% to 28% women in their fields. On Stack Overflow, consistently, it was below 8%, 7.8%. I think before they started doing the reports, a lot of empirical software engineering researchers were finding 5%. So think about this. On the internet, you can be anyone you want to be, right? But you still didn't feel comfortable asking for help because, so there's social barriers and cognitive barriers that are at bay here. So because we wanted to figure out this is the population that is not engaging, let's start there. So the goal for my research is really
Starting point is 00:08:35 target the few, help the many. In all the steps of the, my research directions I mentioned kind of hint at this, where although we started with identifying the barriers that women experience in these communities, we end up identifying that these are barriers that exist for people across the gender spectrum. And we're able to use these barriers in order to build systems that can benefit the entire community, such as the Staff Overflow Mentorship Program, which was informed by the barriers that we found from the first round of studies. So that was really exciting. Did you ask a question about which barriers were like really interesting? Yeah, yeah.
Starting point is 00:09:12 The fact that you brought up a few barriers, but yes, which were the ones that sort of really stood out to you? Yeah, so we actually did identify about five significant barriers as in like stats across men and women. But the point was not just to say that these barriers are divergent. They exist across the spectrum. They're not dimorphisms. Everyone faces these challenges. And some of them are like this fear of negative feedback, for example, which is what one participant mentioned, like, I fear my posts may be harshly criticized. So I'm discouraged from posting. Another one is this idea that is the intimidating community size is quite discouraging. So the way I kind of described this barrier was a statement from a participant who said,
Starting point is 00:09:52 I feel intimidated by the large community of users. I instead prefer connecting with a smaller and more intimate group. So you were seeing that this idea of sub-communities or having a safe space, if you will, was very valuable for participants and just contributors on Stack Overflow to have. And of course, we've seen ones that actually resonate quite well with those across the gender spectrum, which is this qualification barrier where participants mentioned, you know, I feel my expertise or answers would not be enough to help anyone else. That sounds a lot like imposter syndrome, right? Manifested in a different way. So what was really, again, really interesting here is that although we were able to kind of identify that some of these barriers did significantly hinder women more, we saw that the use, understanding how these barriers can or how we can flip the bit on each of these barriers to turn them into interventions so for example if someone mentioned intimidating community
Starting point is 00:10:49 size is discouraging for me like i mentioned how do we make the community size feel less intimidating let's make sub-communities for example so that's one of the benefits of the stack overflow mentorship study which is we essentially created a sub-community where people can draft their questions. Then they're not published to the entire community just yet, but you have a sub-community, you can get feedback on your question and be able to have a more meaningful and fruitful engagement on the answers that you receive later on. That's incredible because that's the richest way to enhance a product. I mean, I'm thinking about this from Stack Overflow's point of view, right? Getting that feedback from your user community on what would be most valuable
Starting point is 00:11:28 to them really drives improvement in the product in the most sort of significant way possible. How did you go about experimenting with this? Did you try it out and measure it in some ways to see if there was any improvement in the participation numbers or metrics that you were measuring? Yeah, yeah. So I'll talk a little bit about that. So from the barriers work, there was another one of like understanding mechanisms people are already using or understanding mechanisms women are using. But the actual third part, which is the intervention and learning from these mechanisms, is collaborating
Starting point is 00:12:01 with Stack Overflow using like this mixed methods approach, where we took a qualitative and quantitative approach to understanding how people engaged. So we kind of created this pilot study where first, in order to figure out what we needed to work effectively on the larger scale platform, we used Slack at first, actually, to do a smaller experiment to see how mentors and mentees would engage. And ultimately what we found is that the timing was very important. The iterativeness of like being able to get iterative feedback on what you're asking for help with. And we also learned that we wanted to create a supplemental experience to what people were already having on Stack Overflow. We didn't want to build a new Q&A. We wanted to build on what already existed.
Starting point is 00:12:47 So teaching people to ask better questions, not answering questions for them. So it was a large learning experience there. And from that, we implemented this collaborative editing feature, which essentially took advantage of the chat feature that exists on Stack Overflow. So you have chat rooms on Stack Overflow for a
Starting point is 00:13:06 host of topics, languages, and people can spin them up for other different types of projects that they're working on. So we used an instance of that, or four instances of that, and created private spaces where you can actually post your question there, edit it. And then from there, after you get feedback from a mentor, you can choose to launch your question whenever you feel is appropriate. So it really helped novices engage. So we had over 70,000 eligible novices. It was a 33-day study with four help rooms and one private mentor room. But we had 70,000 eligible novices available to receive the intervention or did receive the intervention. 520 actually accepted it and entered the help room. 271 novices engaged in conversations, 343 conversations with our 63 mentors.
Starting point is 00:13:59 From there, we had a lot of data to analyze. We had transcripts from the dialogue between novices and mentors, question scores, as in what was the scores of the novice questions after they received help. We also surveyed novices and interviewed mentors about the experience because since this was a live feature on Stack Overflow, we wanted to make sure that there was some benefit to the mentors who were engaging here, who volunteered their time, and the nonprofits who were trying to figure out their new niche or get their footing, if you will, in the community. As I'm listening to you, it just sounds like such an empowering experience to be able to help these people feel comfortable asking questions,
Starting point is 00:14:42 being able to get that help in a smaller setting so that I feel more comfortable with what I'm putting out there and more confident. I think you preempted my next question, which is really, why do the mentors engage, right? What is it that they get out of this experience? Is it just goodness of my heart, I want to help the community? Or is there more to it? Yeah, so actually, the way it started is, well, we posted on the Stack Overflow meta page that we were running this experiment or we wanted to do this. We had an idea of this coaching experiment. What should it look like? And on the Stack Overflow meta page, you can actually get feedback from the community members about new interventions, how they feel about new features
Starting point is 00:15:21 or old features that already exist. So it was pretty helpful to get feedback there. And then in a separate post, we asked those who were interested in contributing or being a part of the program to volunteer. And we had a stream screening process where we kind of checked if they had teaching experience, what was their motivation for engaging, and to make sure we didn't have just trolls sign up,
Starting point is 00:15:43 which is where collaboration with Stack Overflow was very helpful here because they have their own list or blacklist of people who may not have the best of intentions. So they were able to scrub those people out before we allowed them to enter this experiment. But mentors really reported that it was really exciting to be able to help novices do their first interaction and very fruitful for them because it got them thinking about, again, what it was like to be the first time posting on Stack Overflow and reflecting on that experience. And maybe even moving forward, how would they impact how they moderate questions? No data on that part yet, but there was a lot of discussion about what could be.
Starting point is 00:16:19 And I think that was one of the goals to get that conversation going and get them thinking about what the new Stack Overflow, what community we want to have here, what we want it to look like and how welcoming should it be. Yeah, I mean, that's incredible because the mentors that you probably picked are who we may call quote unquote influencers, right? They probably set the tone of the conversations in many of these forums. And so having them be aware of these challenges and start thinking along those lines in itself is such a significant way to move
Starting point is 00:16:51 the needle in the right direction. Oh, for sure. And for novices, think about it from their perspective. For them previously being workers, just observing how people engage in the community, there's also a barrier about this conflict fatigue of watching other people get their questions voted down now they're able to see the other people actually support each other because the way we set up the mentor help rooms was that a mentor was helping a novice or multiple mentors keep helping multiple novices in the same room so they now have this legitimate peripheral participation or this example of like what's appropriate and what's effective and hey I'm not the only one seeking help here there's other people struggling there's other people having receiving help and they don't feel as isolated anymore from the novice perspective
Starting point is 00:17:35 got it is this a formal program now Denae is I mean post the experiment what was the sort of the life of these changes so post the experiment there actually, we did actually end up writing a paper about the work. And actually, we went back to the same Stack Overflow meta community to share the findings of the work. So this finding, this work was with Christina Lustig, who at the time was the first researcher at Stack Overflow. We had Jeremy Banks, who's our developer, and Chris Parny, who was my advisor. And we were working on this. Following the experiment, we wrote the paper as well. And Christina ended up doing a couple podcasts internally. At the time, the vice president of community and growth
Starting point is 00:18:12 had highlighted the work in their annual Summer of Love post about how to make Stack Overflow more inclusive and welcoming. So that was so amazing for it to be shared that way. There were also some follow-up studies that were going about new question templates and what that would look like. I actually haven't been able to connect with them so much so far because everyone kind of re-orged and moved teams and moved around.
Starting point is 00:18:37 But I'm looking forward to staying connected with the research and we still have data we have yet to analyze and report. So there's more to come, TBD for sure. Wonderful. We can't wait to hear more. I mean, I think this in itself is such a revelation and really sort of opens my mind and I'm sure all of our listeners as well in our own interactions, thinking about what are the communities that we engage in? Do we hold back or are there others that are holding back that we can sort of help open up and participate more? One of the other pieces of information that I gathered as I was reading about your work was this concept of peer parity and how it helps
Starting point is 00:19:16 foster engagement. I was wondering if you could help us understand that a little bit more. Yeah. So the peer parity work is really centered around what I mentioned of the part two of my three-part process, where one is identify challenges, two is what are people already doing? Peer parity is when an individual can identify with at least one other person or one other peer in the community they're engaging in. So essentially, this is really taking the Glitch Triplets idea of social facilitation and how you can empower and encourage people to engage in the activity by doing it yourself, right? By being visible and doing it alongside them. So in this study, we were really measuring, trying to see how women
Starting point is 00:19:59 were encouraging other and engaging with other women in this community to figure out if they actually came back for more. So one of the cool things about this study was that we did see women engaging with other women as in answering questions from other women, so much so that in our peer parity and our non-peer parity distributions, the cumulative density of like the length, the time between re-engagement was about 330 days. So although this was a small sample because at first there's already a few, I mentioned the less than 5 percent and less than 7.8 percent of women in these communities. But the fact that we were able to see that there is some phenomenon of affinity groups that you normally see at software development companies or your women in computer science groups and how think about the mentorship
Starting point is 00:20:46 and guidance that happens there. If there's a way to see, identify how the people do that online, maybe there's opportunities for us to figure out how to make that a concrete intervention and a part of these ecosystems. So I saw that there was value in saying it there, but I'm curious to seeing how that kind of influence how we do other programs like mentorship in person and also online and keep it sustainable.
Starting point is 00:21:10 You know, it's very interesting that you bring that up because in a smaller setting, I think we were talking about code reviews and new people entering a team. One of the engineers that I was speaking with spoke about the fact that in order for them to sort of break the ice with somebody new who's come in and who's not participating enough, they usually offer to do peer programming with them. And they feel like they share their code with that other person so that they're actually making themselves feel vulnerable so that the other person realizes that, you know what, it's not always perfect when it comes out the first time. It goes through multiple reviews by my peers. I'm okay taking that feedback. And so it's okay to share your work because it only helps improve it.
Starting point is 00:21:49 I know you've studied some work around inclusivity in code reviews and peer programming. I was wondering if you could talk a little bit about that. Yeah, yeah. So I would definitely say that the phenomenon you just explained is exactly what the social facilitation is. You said it and I thought about, well, my work is definitely all about vulnerability. That's actually a great way to describe all of the research that I'm doing. How do you get developers to feel more comfortable being vulnerable? And, you know, while they're doing their technical task. So I think that's really interesting. But the paper or the work that you're mentioning is the study I did to understand how people who have reviewed pull requests from people who are on their teams.
Starting point is 00:22:32 So in the studies that we use, I've used the methodology. I use the I-mind hypotheses that people are tending to what they're looking at, as in not to say that it's the most important thing, but it is of interest in some regard to them. So they're looking at it and consuming it. And the methodology for this study was that we presented a profile page of an identifiable man, identifiable woman, or an unidentifiable person, where it's just a default avatar image. We presented their profile page and the single commit of their pull request and asked the participant to decide to either accept, which is to merge, or decline to merge, which is pull request on a scale. But we definitely had a
Starting point is 00:23:18 couple of people who just undecided as well, which is not an accept to merge. But it was really interesting to see in the study how people reviewed three different types of signals, which are prevalent in three different types of ways. So the three signals we looked at were code signals, which the primary, ideally, should be the primary code contribution. So this would be your, when you look at the diff between your code snippets, it's the before code snippet and after co-snippet. Ideally, if all is a meritocracy and that's the only thing that mattered, people should only be focusing on the content of the co-snippet. And that's an ideal world. Right. The second attribute, which is interesting, is the technical signals,
Starting point is 00:23:59 which are peripheral technical signals or peripheral to the code, which tell you a little bit about the person's contribution history, the projects they've contributed to, maybe the title of the pull request and the issue that they're working on. And then we had social signals, which told you a little bit more about the person's identity. So the social signals that included for this study identified the areas of interest being the avatar, their name, the avatar image and name on their pull request page, which for those of you
Starting point is 00:24:31 who may not have a pull request page in front of you right now, it is very small compared to the other technical attributes on the page of a single commit of a pull request. Very small attributes there versus on the profile page
Starting point is 00:24:44 where there's a ton of other technical peripheral information, like your contribution heat map, projects, etc. The interesting thing that we saw here is that all but one participant said they looked at the avatar image. All of them consumed this image. All of them consumed social signals. All but only one reported that it's valuable in some way. So it is not the point of the study was not to call like developers are liars and they're not trustworthy. That's not the goal of this. The goal of this was to understand what signals are we innately even using that we're not conscious of? And how do they heavily influence what decisions we make and how we review each other's technical contributions? Again, majority of people,
Starting point is 00:25:26 many people did focus on the code, but the fact that there were times where there was quite a distinct amount of time, a distinct amount of time spent on social signals, which are significantly smaller, even sometimes three times as small as some of the code signals and technical signals, was quite interesting and important to highlight. I think that's absolutely fascinating work. I wonder, Dine, would you have guidance for people like us in industry? We may not know of the bias, but clearly there is some unconscious bias that's happening there. What would be the interventions that we should be even looking at? Are there small steps that we can start to take to make this a more fair and equitable process? Yeah, I think for step one, we could be very transparent about what we care about when we're reviewing pull requests.
Starting point is 00:26:12 So that's why one of the things in the study I asked about were, what is important to you? And how do you perceive others? I think reflecting on how you review others' work in the standards you hold yourselves up to is a nice reflection exercise to figure out, okay, am I being unrealistic when I'm reviewing others' pull requests about content I'm expecting? Because we did see also quite a mismatch there as well about what type of attributes and signals you were looking for in others' technical profiles and social profiles as well. Having guidelines and perhaps even a checklist of some sorts. I know that's a popular intervention for a lot of the mechanisms developers are kind of recommended right now. Having a checklist to be able to say before you vote or before you merge or decline to merge or leave it kind of
Starting point is 00:27:02 stale was pull request. have you checked these things? Have you already checked the code? Was that the first place you checked? Like, how applicable is this code going to fit into your code base? Like, do you predict any conflicts coming up? Or does this code have test cases that are reasonable and bespoke to the issue that you're working on? I think if we have a checklist about focusing on those types of information or those attributes would be very valuable. But also, let's be honest, if we are going to look at their previous history, we are going to look at the contributions they've had. So let's be transparent about how we use those things and let's not act like it's only a
Starting point is 00:27:42 meritocracy. I think the issues come into play when we pretend to ourselves that we're not using social and peripheral signals about each individual. Once we do become more transparent about that, we'll see a lot more, I think we'll see a lot more interesting discussions about what's important when we're reviewing pull requests. That's a super key point that you bring up. Do you think that there is any room at all for automation here, right? If we reduced the amount of human validation, is there a way, may not be a low-hanging fruit, but is that the way we should be headed where we try and automate most of these things so that we actually reduce the bias? I think the issue with automation, especially at times like this, is that when people result to automation, they're not thinking about the ethical edge cases that in the context that influences automation and who's building automation.
Starting point is 00:28:34 Hence, a lot of biased algorithms figuring out what's important and what's valuable. I think that we have a long way to go before we see some valuable automation done, or automation that's well-rounded, especially when we're talking about the prior history of each individual's work. When we start checking the contribution activity and the contribution map, and the projects people are taking pride in and pinning to their repositories on their page, I think we should be taking that context into consideration. Until automation takes that context into account, I think we have a long way to go. Got it. Yeah, no, I think that's an incredibly valid point. One of the things that I was also thinking about is, you know, as I'm thinking about pull requests and code reviews, I mean, a lot of the people that are doing these activities are colleagues of ours, they're friends of ours, and we'd like to believe that they want to be fair. But in many ways, they probably don't see the gaps or the
Starting point is 00:29:36 challenges that some of their peers might be facing. You know, I'd like to switch a little bit to understand how you became more aware of the fact that the path isn't as smooth for everybody as it is for you. I know you went to a science and technology focus high school. At what point did you realize, and I'm sure a lot of the people that were in your peer group there were similar to you, focused on STEM, probably had the aptitude for math, science and technology. At what point did you realize, not just aptitude, I mean, also the opportunities, right? What point did you realize that the world was not fair to everyone? Well, what point did I realize the world wasn't fair to everyone? I found that out a long
Starting point is 00:30:14 time ago. And we can talk about those experiences, but I don't think that's what you're asking. So yes, I did go to a science and technology public high school in Maryland. My high school was predominantly Black. My AP computer science class, which was part of my science and technology program, was fairly evenly split between men and women. The instructor for my AP computer science courses was a Black woman. My instructors for my other HTML and CSS courses were Black people. And for me, I never quite realized that there was a distinction until I got to college, took my first computer science class, got an easy A plus, and saw a lot of my colleagues struggling. And I realized
Starting point is 00:31:01 that two things that one, I guess I was one of the only Black people, women, period, in that course. But two, I had opportunities to thrive. I had opportunity to go to a technology high school where I was able to learn a lot of these skills and learn about object oriented programming through Java and a whole bunch of other programming languages very early, which prepared me and launched me into my successful undergraduate career. Because had I not had those experiences earlier, a lot of people, after they take that first computer science class, they think it's not for them. And they flunk out if they don't do well. I unfortunately had the support system and the resources and the prior knowledge to be able to work through that. For me, I found out that I was going to be quite the unicorn early on once I got into my courses, but I knew that I didn't want to be the only one. So going back to when you asked me about the reason I'm doing this research or how I got into it, that conversation with my advisor, I remember one of the first times I met him and I was saying,
Starting point is 00:32:03 I want to figure out how to not be the only one anymore. He, with his expertise as an advisor, was able to discern what part of that we could put into research, and he guided me to, you know, my first couple of research studies, understanding the barriers of women in computing. I think being able to have such a strong community that has been able to pour into me and support my research, although they may not always understand it or where it's going, has been really helpful. And I really hope that I'm able to keep that going, be a light to someone else, and eventually be able to encourage others to do very meaningful and impactful research like this. Oh, absolutely.
Starting point is 00:32:45 I mean, I strongly believe you can't be what you don't see. And it's so important for all of us to have role models like you to aspire to. This has been an incredible conversation, Denae. For our final bite, I'd love to find out: what are you most excited about in the field of HCI, or in your research, coming up over the next few years? Yeah. So, really being at the intersection of software engineering and HCI, I get excited about so many different types of research. That's the beauty of being a kid in a candy store. But if I were to say, what do I see at the intersection of HCI and software engineering over the next five years?
Starting point is 00:33:29 There are two things that excite me. One is the next generation of software developers who enter the technical field once they're able to see how impactful the work they're doing can be. So, for example, who's now going to become a software engineer because they were able to contribute to an open source project to help trans people find safe restrooms? Or who's saying, I'm glad I'm going to be able to enter the field of software engineering, because they watched videos on YouTube and saw someone who was also a truck driver transitioning to being a software developer, and they're like, oh, that's my life? They can now feel empowered and excited to be software developers. I'm excited about what the new shape of software development looks like
Starting point is 00:34:06 beyond the traditional classroom and the university context. And second, I'm also looking forward to seeing how that in turn impacts the tools that they build. The new tools that we haven't even thought of yet that can influence what we use every day. So think about how you use
Starting point is 00:34:22 different types of social media every day. You use LinkedIn quite a bit to connect with professionals. There's a tool that's going to come out eventually, years from now, and it's going to come from someone who's not coming from a traditional background. They're going to bring their expertise from a non-computer science background into how we build systems, and it's going to blow us away, and it's going to be something we rely on heavily. I'm excited for that moment. I think those are both directions that I see a lot of our tech field going in,
Starting point is 00:34:49 the non-traditional routes and supporting those who have the non-traditional experiences that will be valuable for everyone. Oh, your passion is so palpable, Denae. I'm giddy with excitement at the prospect that you just painted. Thank you so much for taking the time to speak with us at ACM ByteCast.
Starting point is 00:35:07 Thank you so much for having me. This was really fun, actually. ACM ByteCast is a production of the Association for Computing Machinery's Practitioners Board. To learn more about ACM please visit our website at acm.org slash bytecast. That's acm.org slash b-y-t-e-c-a-s-t.
