ACM ByteCast - Juan Gilbert - Episode 55

Episode Date: June 6, 2024

In this episode of ACM ByteCast, our special guest host Scott Hanselman (of The Hanselminutes Podcast) welcomes ACM Fellow Juan Gilbert, the Andrew Banks Family Preeminence Endowed Professor and Chair of the Computer & Information Science & Engineering Department at the University of Florida, where he leads the Computing for Social Good Lab. The lab's innovations include open-source voting technology to help make elections more secure, accessible, and usable; making voting technologies more transparent; increasing fairness and reducing bias in ML algorithms used in admissions and hiring decisions; and reducing conflicts during traffic stops. Gilbert's many honors and recognitions include the Presidential Award for Excellence in Science, Mathematics, and Engineering Mentoring, the CRA A. Nico Habermann Award, and the National Medal of Technology and Innovation (NMTI). Juan shares with Scott his surprise at being nominated for the NMTI, which he received in 2023 from President Joe Biden for pioneering a universal voting system that makes voting more reliable and accessible for everyone and for increasing diversity in the computer science workforce. He talks about his lab's mission to change the world by solving real-world problems, and principles such as "barrier-free design" that he and his collaborators applied to his lab's voting machine technology. They also discuss how his Applications Quest (AQ) technology uses AI to help make fairer hiring decisions, and how his students' Virtual Traffic Stop app helps keep both drivers and law enforcement safe during traffic stops. Juan also explains how he and his lab choose which projects they work on and teases the promise of brain-computer interaction technology.

Transcript
Starting point is 00:00:00 This is ACM ByteCast, a podcast series from the Association for Computing Machinery, the world's largest educational and scientific computing society. We talk to researchers, practitioners, and innovators who are at the intersection of computing research and practice. They share their experiences, the lessons they've learned, and their own visions for the future of computing. I'm your host today, Scott Hanselman.
Starting point is 00:00:26 This is another episode of Hanselminutes in association with the ACM ByteCast. We are very pleased to have on the show today Dr. Juan Gilbert. He is the Andrew Banks Family Preeminence Endowed Professor and the Chair of the Computer and Information Science and Engineering Department at the University of Florida, where he leads the Computing for Social Good Lab. Dr. Gilbert, thank you so much for joining us today. Thank you for having me. So you have done so many things in software engineering and computer science, research, inventing, educating, that you were not only awarded the presidential endowed chair at Clemson,
Starting point is 00:01:02 but also recently, I understand, were given an award by the president himself. Yes, that's correct. That is pretty fantastic. What is the name of the award that the president gave you, and was that for your work in voting? Yes, I received the National Medal for Technology and Innovation. It's the nation's highest honor for technological achievement, and it's bestowed by the president of the United States. When something like that's coming down the pipe, do you hear whispers, or do they just call you one day? Like, does it show up in your spam?
Starting point is 00:01:36 Yeah, that's exactly what happens. You don't know. I did not know someone had nominated me for this honor. And I just get out of the blue one day in email in 2019 saying that you've been nominated. And if you are willing and interested in this award, your university has to approve it. And so but this is confidential. You can't say anything. You have not been selected, but you're on the list. So that happened in 2019 and it wasn't awarded until 2023. So it took several years for it to actually happen, but yeah, I didn't know. And you just have to sit on that for four years. That's like someone calling you and saying you're the next Marvel superhero, but you can't tell anybody for half a decade.
Starting point is 00:02:32 Yes, that is true. It's very challenging. It was just the previous administration didn't award any national medals. And then the new one comes in and says, we're going to do it. But then it didn't happen for a few years. Well, that's fantastic, though. Congratulations. Thank you. If I understand that you grew up in Hamilton, Ohio, was getting an award from the president something that you're thinking about when you're, you know, bumping around in Hamilton? Not at all. Yes, I'm from Hamilton, Ohio, Southwest Ohio. But yeah, I never imagined this. It was beyond your wildest dreams. I can't even put it into words.
Starting point is 00:03:03 That's fantastic. Now, the project that I think they were the most excited about is called Prime 3. It's called One Machine, One Vote for Everyone. This is an active project, and it's at primevotingsystem.org. It's called the first all-accessible voting system. What does that mean, all-accessible? So, the story is, back in the year 2000, we had a presidential election and in the state of Florida, we had some little controversy. After that controversy, Congress passed in 2002, what's called the Help America Vote Act, HAVA. In HAVA, they had a requirement.
Starting point is 00:03:42 It said every voting precinct was required to have at least one accessible voting machine for people with disabilities. When I saw this legislation, I said, I understand the intent behind it, but they got it wrong. They made a mistake. The mistake was they created inadvertently a separate but equal approach to voting. As you know, separate but equal doesn't really work out very well. And when I reported this concern, I was told there's no other way to do it. We have to have a separate way for people with disabilities. And I said, oh, no, you don't. And we're going to do it.
Starting point is 00:04:22 So we went and created a universally designed voting machine to allow people to vote the same way. If you can't see, if you can't hear, if you can't read, if? We hear about accessibility, A11Y for the shorthand for accessibility. But I've heard design for all. I've heard inclusive design. But I really like that term barrier-free. It's surprising that that's in any way controversial. Yeah, you would think so. But at the end of the day, universal design, barrier-free, all these things communicate the same point, which is separate but equal hasn't served us well.
Starting point is 00:05:13 And with technology and the things we know, we can design for everyone. And that's the point. Yeah, the idea that it is not considered to be, I don't know, that it's considered controversial at all to be inclusive and try to cover all bases seems surprising to me. Now, I understand that these machines use multi-modality. They're not just touchscreens, right? They can talk to you, they can communicate with you in any way that you can be talked to, correct? Yes, that was the design originally. The idea was that it would have audio feedback, so the machine would speak to you. So if you're blind or visually impaired, you could hear valid information and things, and you could respond either with your voice or through
Starting point is 00:06:00 switches or buttons and things like that. And with your voice, we didn't do speech recognition. We did something different. We used the conversational approach. So it wouldn't, if you wanted to vote for, I don't know, Joe Biden, you wouldn't say Joe Biden. It would say, you're voting for president to vote for Joe Biden, say vote. And you could say vote, or you could blow into the microphone as a response. This doesn't sound like a big deal, but if you think about it, it gives privacy. Because if I said Joe Biden, everybody knows who you're voting for. But if you're just saying vote, you would never know who this person is voting for. So they get privacy.
Starting point is 00:06:38 Yeah, that's a great point because it's not just about the accessibility of it, but it needs to be equal to everyone else around you who also has privacy. They can get inside the booth. They can hide their vote. They can have that vote be between them and their government, which is the way it's supposed to be. Exactly. So this has not the main focus, though, of your work. The main focus of your work is human-centered computing. You're putting the human at the middle of everything. I've heard rumor that there's a signs up in the lab or a sign up in the lab that says, change the world.
Starting point is 00:07:10 Yeah, that's one of our mottos. But yeah, human-centered computing. The idea for the Computing for Social Good lab, the work we do, is we like to build innovative solutions to real world or applied problems. And we do that by integrating people with technology, policy, culture, et cetera. So the idea is that we can identify a problem, design solutions with the relevant stakeholders, and create interventions technologies at times, and then evaluate them with the relevant stakeholders. So that's essentially who we are and what we do. Do you think that the personalities of people, and I'm pointing at myself, who get involved in technology sometimes just do tech because it's
Starting point is 00:07:56 cool and forget that there's humans at the center? And is there room to do it because it's just cool? Like we're doing this tech because it's amazing, we should just do it versus human first and then having the human look for this solution? I think it's wrong for all the above, to be honest with you. Doing tech because it's cool, there's certain contexts where that makes perfect sense. I think there's other contexts where the human first and those tend to be places where tech can replace a person and take away a job or something. I think there's room for all the above, and you just got to evaluate it appropriately, given the relevant context.
Starting point is 00:08:37 So you bring up tech taking away a job, whether it be a robot and an automation of some kind in the early 1900s, where we're creating assembly lines and arguably either make, one could say making people more productive and others could say taking jobs away. We're at the beginning of this kind of hockey stick graph of AI and everyone is concerned that it's going to take jobs away, but the more positive people, the more optimistic are saying, no, this will just take away toil. It'll take away the boring part of the job. Where do you see that tension resolving itself? Well, if anybody says they know, I'm going to say they really don't. This is TBD, to be determined. You have these technologies, specifically AI, that will be integrated into
Starting point is 00:09:26 society in different areas. How will it integrate with people? How will that occur? It's just undecided. So it's hard to say how far it will go and what society will look like as a result of that. All we can say for certain is it's going to change things. Our society is going to change as a result of this. To what extent, I don't think anyone knows until it's actually implemented. Because what you will see, a person has a vision for implementing AI in a certain context, and then they do it, and then it fails. And they have to walk away from it. And that happens. One of the analogies that I use, and I'm interested in your analysis of this,
Starting point is 00:10:09 is many, many years ago, they cloned a sheep and everyone's, it was on the news. We all remember, you know, those of us of a certain age, like, oh my goodness, they've cloned a sheep. And then there was a lot of discussion and there was a lot of government meetings. And then everyone decided, you know, we're not going to clone people.
Starting point is 00:10:24 We might clone cells. We could use this for medical reasons, but we as a culture, we as a society are not making copies of humans, and we moved on with our lives. However, in a world of AI where things are open source, it's not as easy for us as a society to decide we're not going to use AI for thing, for whatever a thing is. It's limited really to what country or what continent or what organization wants to regulate it. Does that change, you know, your opinion that we can't all decide, oh, we're not doing generated movies? Nope, we're not doing deep fakes. You're absolutely right. And again, I say you don't know that it's going to work until it's been deployed. Let me give you an example of something. Look at facial recognition. It got very good and they said,
Starting point is 00:11:15 we're going to use it in law enforcement. We could find the bad guy. All right. Sounded like a good idea. It's going to work, but what did it do? It ended up using bias, creating disparities in who it actually identified and things like that, mistaken identities. You started to see that it wasn't as accurate identifying people from certain demographics versus others. They didn't know that going in. And so I think we're going to have moments like that, meaning, ooh, AI would be so cool if we do this thing. And people are going to jump in and then they're going to learn that, ooh, this doesn't work. That's why I'm saying the verdict is out.
Starting point is 00:11:53 You got to, it's one thing to have the ability to do AI. It's another thing to actually deploy it and have it be successful. Yeah, it's really interesting also to think that the internet has flattened the earth in the sense of so many of us think of the earth as having kind of its, and the internet itself as having its own kind of culture. And we kind of mostly agree about stuff. And I was traveling in Korea last week and they they have facial recognition. And their TSA, their airport system was just buttery smooth. And I thought it was amazing. And I was like, I came back and I told my wife, it's so amazing. And it's so high tech. I didn't even have to talk to a person. And the first thing I see in the newspaper is a letter from 14 senators saying facial recognition
Starting point is 00:12:43 at airports is a bad idea and we shouldn't expand it. And I thought that was such an interesting thing because here I'm thinking, how convenient was this? But then other people are saying, well, what did we lose? And Korea and Europe and so many of these other countries that are using this technology think it's awesome, but it feels like we're at a moment where we want to pump the brakes, at least in the US on facial recognition, two different cultures having a dramatically different reaction to effectively the same technology. Absolutely. And we're going to see more of that.
Starting point is 00:13:17 And that's why I'm saying you can't predict what the future is going to be like with AI, all we can do is hypothesize, but until it's actually evaluated and deployed and evaluated, you just don't know. So you all are doing some research in one space with AI that I think is very interesting around hiring decisions, which has the opportunity to be somewhat controversial. We had a job recently that had hundreds of applicants and one could say, hey, we should probably get a computer involved because who can read 800 resumes? But then to your point, bias inadvertently missing out on a great person because they didn't have the right keyword. How are you approaching AI within the context of hiring
Starting point is 00:14:00 decisions? I created what's called Applications Quest. We'll call it AQ for short. I created this technology in response to the Supreme Court decisions on affirmative action in admissions and hiring decisions, the use of race and gender, national origin, et cetera. So the idea of using AI to read an application is not what I do. What I do is I take the application and process it after humans have read it, for example. So the AI I created does not read an essay or anything like that. What it does, it takes application data and it addresses what I call the capacity issue. So the Supreme Court and those folks, they all got this wrong. They thought this thing was about race, gender, national origin, et cetera. It's not. I can prove it because the first decision
Starting point is 00:14:53 was in June of 2003, and we just had another one last year on this very issue, and it's not going to go away. The reason is because they're addressing the wrong problem. The problem is a capacity issue. When you have more qualified applicants than available slots or offers, by definition, you got to turn away someone who's qualified. Well, you turn away someone who's qualified, they know they're qualified and they're upset they didn't get it. They want to know why. Well, the easiest scapegoat is race, gender. So that's why we're here. You take those things away. They're just going to move on to the next thing. They are talking about legacies. Then they're going to be talking about athletes.
Starting point is 00:15:34 And it's just going to keep going until you address the capacity issue. So the AI I created addresses the capacity issue. And as a side effect, you get what I call holistic diversity, diversity across many different attributes, not just race, gender, et cetera. So that was the approach I've taken. And that's the problem. The idea of holistic diversity is a really nice one. I love that. That's an academic term.
Starting point is 00:15:59 It's much more sophisticated than what I've told my coworkers and friends, which is that tech should look like the mall. And you go to the mall and you see just this wave of America kind of coming at you as you walk around in the mall. And it's very natural, roll the dice, representative group of people. Now, when you say capacity, is the capacity of the academic system to accept people or the number of jobs available or the number of teachers available? What is the capacity of the academic system to accept people or the number of jobs available or the number of teachers available? What is the capacity that we are lacking? No, it's not we're lacking. It's just limited. So University of Florida has a freshman class. We have a limit abound on how many people we can admit. There's a scholarship available.
Starting point is 00:16:42 We only have 10 of those. Your organization is hiring. We only have two spots. There's a scholarship available. We only have 10 of those. Your organization is hiring. We only have two spots. There's the capacity. So if you only have two spots and you only have capacity, say, to interview five people and you had 100 applicants, like you said, then, you know, you look at it and say, well, I got 150 of them meet my minimum qualifications. Uh-oh, now I got to turn away 45. Which 45? So this AI can select which five or recommend the five in a way that is, they're qualified and holistically diverse. ACM ByteCast is available on Apple Podcasts, Google Podcasts, Podbean, Spotify, Stitcher, and TuneIn.
Starting point is 00:17:27 If you're enjoying this episode, please do subscribe and leave us a review on your favorite platform. Very cool. Very cool. How do you decide as a lab which projects to work on? Do you have a brainstorm or a eureka moment? Or maybe one of your TAs or one of your PhD students has an idea, and then you all get together and vote on it? It's all the above, to be honest. What happens is a lot of our ideas come from society. What's the issue? What do we care about? What are we motivated about? What do we want to address? So a lot of the ideas come from events. So I'll give you an example,
Starting point is 00:18:07 a new project in the lab primarily. I walked in the lab one day and a group of my students were sitting around the table with sad faces. I said, what's the matter? They said, well, we're tired of seeing people get shot during routine traffic stops. I said, whoa, really? Okay. Well, me too. Let's do something about it. We went to the lab and we created a technology called virtual traffic stop. So it just came about as a result of events. And this is an app that you can install. It's in the Apple App Store or Google Play. And are you learning, Are you being trained how to be safe during a traffic stop? Or do you run it while you are in a traffic stop?
Starting point is 00:18:51 So the way virtual traffic stop works, here's a scenario. You get the app and you have to register and you put the car you're driving and all that stuff. So let's say you've done that. Your information's in. And the law enforcement officers pull you over and they have to have it too. So the idea is that when you get pulled over, you can open virtual traffic stop and initiate the virtual traffic stop. So what is a virtual traffic stop?
Starting point is 00:19:23 It is a video conference between you and the officer prior to them approaching the vehicle, if they have to approach it at all. And you can have a third party of your choosing. So if you're a kid, your parent could come in. And the idea is this is a de-escalation tool whereby law enforcement and the drivers can have an ice-breaking moment where you can know who they are, what you're stopped for, what they're doing, what's in the car, all this information. So if they approach the vehicle, it's de-escalated. They don't have the same unknowns and tensions. Wow. That is a really crazy idea.
Starting point is 00:20:04 I love that. The idea also, as a son, I have two sons, 16 and 18, and my 18-year-old had his first traffic stop going five miles over and describing the feeling of his heart pounding in his chest while the guy is sitting there in the car and he doesn't know what's happening. And is he going to get out? Is he not going to get out? Being able to have a third person, an advocate, whoever, whether it be for a young person or someone who may have trouble communicating like a hard of hearing or a speech impaired driver. Exactly. And we designed it for that. If you are a hard of hearing, speech impaired, you can actually specify that. And instead of speaking, you can chat with the officer.
Starting point is 00:20:46 And by the way, the whole thing is recorded. And once I process the video, it's made available through the app to you at the same time law enforcement gets it. So I created this technology in response to incidents. University of Florida Police Department has the app. They're the only law enforcement division that have it right now. But people out there, you can get the app and look at it. The website has videos. And if your law enforcement wants it, we can sign up, do pilots and check it out. It works. That's so interesting. Let me ask you this. When people are listening to podcasts or reading the news about stuff like this, in such a polarized time, they might say, oh, applications quest, that's too woke, or virtual traffic stop, that's not needed. Is that a failure of empathy, perhaps, on the part of the reader or the listener who's not seeing that, well, I don't need that, therefore it's not needed?
Starting point is 00:21:45 That's one way you could look at it. I think it's just, you got to reach people where they are. So like virtual traffic stop, people ask questions and I say, you know, this is giving cops safety. What most people don't know, cops, during traffic stops, cops that are injured or killed, it's not so much that the drivers are shooting them or injuring them. They get hit by cars. Virtual traffic stop, worst case, minimizes the amount of time they're outside the car. And if you look at all the incidents that happened that went sour, they have one thing in common. All officers say the same thing.
Starting point is 00:22:24 I did not know. And there's something they didn't know that caused them to react the way they did. Well, with virtual traffic stop, you can know. So it's keeping law enforcement safe as well. Yeah, that is a very thoughtful and very balanced way to do things. I appreciate that perspective. Let me pose the question this way. This is what I ask officers. There's a car you pulled over. Person legally has a firearm in the car. Wouldn't you like to know it rather than discover it? Yeah. They could literally tell you you're on FaceTime, effectively on a video call, separated from each other, and you're like, heads up. And they could even hold up their permit, and you could scan it, and you're like heads up and they could even hold up their permit and you could scan it and you can do all that from the safety of your car and it's recorded on both sides.
Starting point is 00:23:09 Exactly. Exactly. So from law enforcement, there's a lot of bonuses to this. There's safety, there's identifying this information. And I'm in Florida. I keep saying I'm taking bets. There's going to be a Florida man incident. Somebody's going to get pulled over. They're going to have a third party come in. And the third party will say, Bob, you been drinking again? Yeah. Getting that heads up of what am I getting into before I step out onto the shoulder, to your point, is super, super important.
Starting point is 00:23:41 And that idea, that came because some of your students said, this is something that we're interested in exploring. Exactly. And that a lot of our things are driven by news or incidents that affect them or their communities and things we care about. And so it's my perspective, do something about it. Speaking of doing something about it, some academics write papers, and I'm making a broad generalization here, but you seem to be one that you are compelled to create things and put them out there and make them available. Again, with no disrespect to our paper writing academic colleagues and friends, everything that you're doing has a very tactile, we made it, we shipped it,
Starting point is 00:24:25 we created the code and it is available. Did that happen on purpose? Absolutely. Yeah, that's by design. We wanted to, again, the lab model changed the world. We want to have impact and the things that we have expertise in is we understand the human condition. We understand people. We understand technology. Then they learn policy and they understand culture. So if you look at those things and you say, well, here's a problem, what would be an intervention that could help? Even if it doesn't exist, what would that look like? I mean, we created a virtual traffic stop. That didn't exist. We invented that. It is real. And we can do those things. In the future, many years from now, I predict the virtual traffic stop is going to be integrated into every car.
Starting point is 00:25:19 It's just going to be in there. I kind of tilted my head. But at the same time, I also have a GPS and dynamic maps and, and, and, right? And now I also have a pocket supercomputer. My children and I were kind of looking at that meme that says, here's what you had in your pocket in the 90s. And it was like, you know, flashlights and cameras and video cameras and VHS tapes. And now you have your phone. And now we're on rolling computers effectively. So we're connected. We have cameras.
Starting point is 00:25:52 We have full support with our pocket supercomputers. There's no reason that that couldn't happen in a car. Yep. And it keeps people safe and both sides and it documents the interaction. Yes, you're right. We invent, and we translate, and deploy, and we evaluate with relevant stakeholders in the relevant context that matters. So, Applications Quest is happening right now. There's also things happening in the space of brain-computer interfaces. We're also seeing some billionaires and some other people out there who have ideas around how to do brain-computer interfaces.
Starting point is 00:26:31 What's your thinking about how we're going to interface with our machines in the future? Yeah, the brain-computer interface, or BCI, in 2015 here at the University of Florida, we held the world's first brain drone race. We raced drones with our thoughts. So you can go to braindronerace.com and look at the video. Some people saw it and said, oh, that's fake. They didn't do that. But we did it for real.
Starting point is 00:27:02 The reason we did it was, you're going to love this. The reason we did it, because it was really cool to do. That was one reason. But the other reason we did it was to show that the BCI is legit. It can be used. And so what's the future? Again, this is one of those things that's hard to say. We have some thoughts on where
Starting point is 00:27:25 we think BCI could go, but we wanted to at least make it understood that it's a viable technology, that people can use it. Now, I wanted to turn people loose on it, to go out and play with it and experiment with it. Yeah, it's been something that people have been trying to crack for a very, very long time, particularly the non-invasive part. Some people say you got to go, like human beings are big meat bags under pressure. And I'm a type one diabetic and I have an open source artificial pancreas. And I have at least three holes in me from being poked by systems to get insulin in or to get data out. And some people say to do the brain-computer interface, you got to poke the meat bag, get a needle into the brain.
Starting point is 00:28:12 But if we can do it from the outside, whether it be, you could imagine an earpiece or something with just a single lead or a couple of leads, it could completely change how we interact with our systems. That plus AI, things start getting really, really interesting. One of my former students actually did a dissertation on creating a sexy BCI, designing things that people would want to wear. Oh, I see. We looked at all aspects of it. Yeah, there's- Something fashionable that you would, something fashionable that you would not be like, this is a dork. Cause there was, there's a number of people out there who've been wearing helmets on campus and that may not be something that's going to
Starting point is 00:28:54 break into. Exactly. Exactly. So think about it for women with hair. So how do you incorporate a BCI? We, we did work on that. Very cool. It sounds like there's some amazing stuff happening at the Computing for Social Good lab down there at the University of Florida. Yes, we think so. Fantastic. Well, thank you so much, Dr. Juan Gilbert, for chatting with us today. Thank you for having me. You can learn all about the great work that they're doing at computingforsocialgood.com. Explore the research, explore the people, and learn more about the work that they're doing at the lab. This has been another episode of Hansel Minutes in association with the ACM ByteCast.
Starting point is 00:29:36 We'll see you again next week. ACM ByteCast is a production of the Association for Computing Machinery's Practitioner Board. To learn more about ACM and its activities, visit acm.org. For more information about this and other episodes, please do visit our website at learning.acm.org. That's B-Y-T-E-C-A-S-T. learning.acm.org slash bytecast.
