ACM ByteCast - Cecilia Aragon - Episode 75
Episode Date: September 30, 2025

In this episode of ACM ByteCast, Bruke Kifle hosts ACM Distinguished Member Cecilia Aragon, Professor in the Department of Human Centered Design and Engineering and Director of the Human-Centered Data Science Lab at the University of Washington (UW). She is the co-inventor (with Raimund Seidel) of the treap data structure, a binary search tree in which each node has both a key and a priority. She is also known for her work in data-intensive science and visual analytics of very large data sets, for which she received the Presidential Early Career Award for Scientists and Engineers (PECASE) in 2008. Prior to her appointment at UW, she was a computer scientist and data scientist at Lawrence Berkeley National Laboratory and NASA Ames Research Center, and before that, an airshow and test pilot, entrepreneur, and member of the United States Aerobatic Team. She is a co-founder of Latinas in Computing. Cecilia shares her journey into computing, starting as a math major at Caltech with a love of the Lisp programming language, to vital work innovating data structures, visual analytics tools for astronomy (Sunfall), and augmented reality systems for aviation. She highlights the importance of making data science more human-centered and inclusive practices in design. Cecilia discusses her passion for broadening participation in computing for young people, a mission made more personal when she realized she was the first Latina full professor in the College of Engineering at UW. She also talks about Viata, a startup she co-founded with her son, applying visualization research from her lab to help people solve everyday travel planning challenges.
Transcript
Discussion (0)
This is ACM ByteCast, a podcast series from the Association for Computing Machinery,
the world's largest educational and scientific computing society.
We talk to researchers, practitioners, and innovators who are at the intersection of computing research and practice.
They share their experiences, the lessons they've learned, and their own visions for the future of computing.
I am your host, Bruke Kifle.
In today's world, data is everywhere. But as data grows bigger, faster, and more complex, one question becomes central.
How do humans actually make sense of it all? That's where human-centered data science comes in,
an emerging field that sits at the intersection of computer science, artificial intelligence,
and human-computer interaction. It's about designing algorithms, visualizations, and systems
that empower people, not overwhelm them, to explore massive data sets, extract insights,
and ultimately make better decisions.
Our guest today is Dr. Cecilia Aragon,
professor at the University of Washington
and director of the human-centered data science lab.
She has pioneered techniques that combine algorithms, visualization,
and human-centered design
to help people collaborate around massive, complex data.
Her contributions range from co-inventing the treap,
a foundational randomized data structure,
to developing Sunfall,
a visual analytics system that transformed
how astronomers discover supernovae.
She has also built augmented reality visualization tools
that improved helicopter safety in hazardous conditions.
Professor Aragon is the recipient of the Presidential Early Career Award for Scientists and Engineers,
co-founder of Latinas in Computing,
and the first Latina full professor in her department at the University of Washington.
Professor Cecilia Aragon, welcome to ACM ByteCast.
Thank you so much, Bruke, for that kind introduction.
I'm delighted to be here.
You know, you've had such an amazing life journey,
and I think you were even one of the first Latinas to compete on the U.S.
Is it the unlimited aerobatic team?
Yeah, the first Latina pilot on the team.
You know, so it's an amazing journey from, you know, math and algorithms
to aviation and space and supernova discovery and ultimately human-centered data science.
So what inspired some of your early interest in computing?
And as you look back on your life journey,
what have been some of the key inflection points that have led you to where you are today?
That's a great question, having me look back over the years.
But my path into computing actually started at Caltech where I was a math major.
And so this was back when Caltech didn't even have a computer science major. A friend taught me Lisp, and that was my first programming language. I was just
utterly captivated by it. It was so much fun to program. And then I realized how computing could
supercharge mathematics. So I was working on all these problems and theorems, particularly
in combinatorics and discrete mathematics. And I discovered I could write programs to solve complex
problems that would take me ages to work through manually. So that intersection of computational
power with mathematical thinking was like discovering a secret superpower. It was so exciting.
And that's when I decided that I would go into computing, even though I did have a professor
who told me, why are you doing that? Only failed mathematicians go into computer science.
So I kind of went against the wisdom of my elders there.
Very interesting.
And ultimately, the journey has led you to the current field that you're in.
You know, obviously you lead the human-centered data science lab.
But what does it mean to make data science human-centered?
Human-centered data science and human-centered AI really are all about putting human
needs and values and ethics at the center of how we develop and deploy algorithmic systems.
Any system that's based on very large data sets, like the current generative AI systems now
are based on underlying very, very large data sets.
So the important part is to recognize that data science or any data-driven science isn't just
about statistics and computation, although those, of course, are important and necessary.
but we have to consider the societal impacts and the human context of increasing automation
in our society.
There are really human decisions at every stage of data science work.
And this is what we wrote about in our book, Human-Centered Data Science: An Introduction.
And my co-authors and I emphasize that we really need to be intentional about managing,
you know, bias and inequality that can result from the choices
we make as we develop algorithms. So, you know, for example, when we decide what data to collect
and how we clean it, which algorithms we use, and how to interpret the results, each of these
steps can embed systemic racism, sexism, and other forms of discrimination into automated
tools. And it is so important to consider all of this. So my approach draws from not only
data science and computation, but also human computer interaction and social science.
And it's super, super important to place data in its social context and really ensure that
the humans who are going to be affected by these systems are involved in their design
from the beginning and not just as an afterthought.
And as you think about drawing on different disciplines to inform your work,
How have your personal experiences maybe outside your role as an academic or as a researcher, whether that's, you know, as a pilot, maybe as, you know, one of the pioneering Latinas in your field, whether it be in academia or, you know, as a pilot on the aerobatic team, how have those experiences shaped how you think about human decision-making and designing these kinds of systems?
That's a great question.
Well, flying, particularly aerobatic flying, taught me a lot about designing systems for very high-stakes, high-stress environments.
So when you're performing a sequence of aerobatic maneuvers at 200 miles per hour, you know, pulling 9 Gs, there's really no room for error.
Every decision has to be precise and instantaneous.
And this experience directly influenced my approach to human-centered design.
So I learned that under stress, people really need systems that are intuitive, predictable,
and that support rather than hinder their natural decision-making processes.
Additionally, aviation taught me that small errors can cascade into catastrophic failures.
And this is something I applied to my data science and artificial intelligence work today.
Every algorithm that any of us writes has potential human impact.
And a small error can be magnified to affect thousands or millions or billions of people.
And so the techniques I used to overcome fear and build confidence that I used in flying,
breaking down complex tasks into manageable components, practicing systematically,
maintaining situational awareness, these are the same principles I now use in data science
and teach to my students where I, in every class,
embed the awareness of contextual thinking and ethical thinking. So when you're designing an
algorithm, you shouldn't be saying, oh yeah, an ethicist will look at this later on. You should
be thinking about the potential unintended consequences of your algorithm as you write it. And you
should keep that top of mind as you keep going. I think that's, you know, this has never been
more timely as we think about, you know, how prevalent a lot of AI technologies have become
in modern-day systems, in industries that we would have never expected, right?
Right.
But as I step back and think about your research, you're ultimately, you know, often asking,
how do humans make sense of overwhelming data?
And I think what you said was quite interesting: in those high-stakes situations,
you want systems that support but don't hinder decision-making.
So as you think about, you know, the modern day role of technology, which has, you know, rapidly accelerated, especially over the, you know, past five or ten years with AI and big data, why do you think this question has become so important now? And, you know, how does this even more so motivate your line of work?
Well, this question has become super critical today because we're at this pivotal moment, right, where data science and artificial intelligence are having these enormous life-changing impacts on society and on billions of people's daily lives.
So this is why I think data science, data visualization, and human-centeredness have to be incorporated into artificial intelligence.
in a meaningful and thoughtful way.
And the fact is that visualization is a wonderful way
for humans to make sense at a glance, literally,
of millions of data points.
You know, they say a picture is worth a thousand words,
and that truism really applies to data visualization,
which is one particular branch of human-centered data science
that I've been involved in, especially developing collaborative visualizations and working on
visual analytics algorithms and human-centered design around them.
So I'm just going to give one example that many other people have brought up, but if you're
going to build a machine learning or artificial intelligence algorithm, you have to think about
the societal consequences. So, you know, with facial recognition systems, if you treat
it only as a statistical algorithm and you ignore the bias in training data sets, the bias in
developer teams, and even the bias in how humans choose to label data, you risk creating
systems that harm society.
And the real challenge isn't just that we have these vast amounts of data.
It's that developers' and managers' choices are involved at every stage of algorithm development.
I mean, we've seen this over and over again in the generative AI systems that have been released to the public:
the teams of developers didn't think about certain kinds of obvious issues that were then discovered once the systems were released to the public.
And a lot of those choices are still embedded deep in the algorithms.
And I really think it's vitally important, well, it's vitally important for our society to just have teams that incorporate
much more diversity, you know, more women, more people of color, more people from different
groups rather than kind of the stereotypical male software developer. And, you know,
it's something that managers need to put more effort into because I'm branching out from your
question, but I think it's relevant and important. So I'll give an example of an early experience I
had working as a software developer at NASA, we were building systems, you know, for the
space shuttle and for aerodynamics and for the Mars rovers. And it was a wonderful experience.
And so my team of people, half of us were female. All right. So we had a team, you know,
of five female software developers and five male software developers. And our boss was, you know,
a white male, but he understood that the most important thing was to hire the best people.
And he knew about looking into his own biases and just hiring the best people. So this is why
managers are critical. Down the hall from me, there was a team of software developers that was
100% male. And so I went to the boss of that team and I said, you know, why don't you
hire some more women? And he said, oh, I can't find any.
And I'm like, how can you not find any? My boss was able to find some. And then I gave him the
resume of a very qualified woman. And yeah, she didn't get hired. So it really shouldn't matter
what your gender or what your race is. But we all have to be aware of societal biases and do our
best to overcome them. And that's an example that I think my first boss there did. And because of it,
he was able to find the best team. And our team was incredibly successful, more successful,
I should say, than the team of that other manager. And that really sort of
segues, I think, into another question that I often get asked, which is: why are diverse teams
necessary? I mean, especially in, you know, today's day and age. And I think this is a great
example of why they are. There's been so much evidence that shows that, you know, when
C-suites have more women on them, the companies tend to have higher earnings. It's just so important
because so much of what we as humans do is embedded into our unconscious biases. Even though we are
good people and we're doing our best to be ethical and fair, our unconscious biases get embedded
into the software we write, and it's embedded in the data we collect.
And so the artificial intelligence systems today, which are based on all the data that's out
there, not only are the algorithms embedding these unconscious biases, but the data embeds
these unconscious biases.
And so we as developers and managers of developer teams and, you know, future CEOs, we need
to be really, really aware of this. Even if we don't care about doing the right thing and about
justice, we need to do it to make our companies incredibly successful. So I think that's so
important. Anyway, I think you touched on a lot of very, very important points. I think there's
certainly a lot of research that backs the claim that, you know, more diverse teams are, you know,
ultimately more successful across different dimensions, however you might evaluate that,
whether that's team morale, whether that's financial performance, whether that's output.
And I think your point around a lot of the bias and fairness concerns with modern-day
AI systems, we've seen many of these interesting cases. Whether it be,
you know, face recognition technologies: there was research from the
MIT Media Lab, Gender Shades, that showed some of the commercial-grade face recognition
technologies were biased against women and darker skin tones.
Or language and word embeddings, so general biases that were seen in gender associations
with specific roles. So I think we've seen many of these cases, and I think certainly to
your point, having more diverse teams and representation in the rooms that are actually making
these product decisions and technology decisions is crucial. But there's also a lot of
technical decisions that we make around how we design the algorithms, the data that we
collect. So I think it's never been more important. And I think as, you know, you touched on a
very interesting point as CEOs or leaders of companies, these are things that we should
consider and prioritize. And I think that dovetails very nicely into my next question, which is
beyond your work as an academic and as a researcher, you've also embarked on a very
exciting journey of entrepreneurship. So I'd love to learn a bit more
about sort of the genesis of that story and your work on Viata and, you know,
what really motivated you to take the leap from research into starting a company?
Well, thank you for asking that.
Well, I've had a lot of interest in entrepreneurship from a fairly early age.
I actually started a company many, many years ago before I became an academic.
But the reason I co-founded this company, Viata, is I think that it's important for technologists to also take roles beyond simply software development.
It's important for us to be decision makers that translate algorithms into real-world technologies that can be used to help,
society rather than hurt it. So I founded Viata with my son. And so I'm the CTO, the chief
technologist, and he's the CEO. And I think it's a fascinating example of how academic research
translates into real world applications. So the core technology behind Viata is a mapping visualization
tool that I developed in the human-centered data science lab at the University of Washington,
and we have two patents and several publications covering this work.
So what Viata does is solve a common travel problem.
When you're planning a trip, you can easily find hotel prices, ratings, and availability,
but what you're often missing is an understanding of the travel time between your hotel and
all the destinations you want to visit, or the venue
where you want to plan a wedding and all the hotels that are around it
and how long it takes to get to the venue from those hotels.
And so let's suppose you're a venue, all right?
You're a venue where you, I mean, there are many potential uses,
but right now we're focusing on, say, you are a wedding venue site.
So we use artificial intelligence to generate a map
that plots hotels around this site based not just
on price, but on their travel time to the venue. And it's really useful; people love it because
it makes event planning easy. Right now, it's really challenging and complex. All the information
is out there. So it's not like we're producing any new data. All the data exists, but we are
making it easy for users to comprehend. So it again fits in with this idea of taking vast amounts
of data and making it easily understandable to humans at a glance.
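As a toy illustration of the kind of ranking Cecilia describes, here is a sketch in Python. The hotel names, prices, and travel times are invented for the example, and the real product pulls prices from booking data and travel times from routing services rather than from hard-coded values:

```python
# Invented example data; a real system would query live booking and routing sources.
hotels = [
    {"name": "Hotel A", "price": 120, "minutes_to_venue": 35},
    {"name": "Hotel B", "price": 150, "minutes_to_venue": 8},
    {"name": "Hotel C", "price": 95,  "minutes_to_venue": 22},
]

def rank_hotels(hotels, max_minutes=30):
    """Filter hotels to a travel-time budget, then sort by
    travel time first and price second."""
    nearby = [h for h in hotels if h["minutes_to_venue"] <= max_minutes]
    return sorted(nearby, key=lambda h: (h["minutes_to_venue"], h["price"]))

for h in rank_hotels(hotels):
    print(f'{h["name"]}: {h["minutes_to_venue"]} min, ${h["price"]}')
```

The point of the sketch is only the framing: the data already exists, and the value comes from reorganizing it around travel time to the venue instead of price alone.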
And anyway, it's just been really fun to start a company, you know, beyond the technical work.
It's really exciting to see how these interdisciplinary approaches that I use in my academic research
translate into building practical solutions for venues and for travelers.
And it's also been really tons of fun working with my
son as co-founder. So, you know, when you've known your co-founder for literally your
entire life, that foundation is as rock solid as it could possibly be. So yeah. Well,
that must certainly be awesome. I think that's really, really interesting. And I think you touched
on a very interesting point where, you know, being able to take this research or this work and
put it into the hands of real users. So I'm curious, as you reflect on that
experience, do you see entrepreneurship as, you know, kind of an interesting avenue for making
computing more human-centered? You know, ultimately, it's a way of actually putting
ideas directly into people's hands. And so through that experience, you know, how has that
maybe informed how you think about your research and your approach as an academic?
Oh, absolutely. I think entrepreneurship absolutely is a way to make computing human-centered
because, you know, when you write a grant, you're beholden to the agencies that fund those grants
and you have to tailor it to, you know, what their specific desires are.
But really, if you, as an entrepreneur, if you build a product and it's really useful to people
and they love it and it helps them in a positive way, you know, it's going to improve the world.
Of course, there's a flip side to that as an entrepreneur: you know, you can also get
subsumed into just going after revenue and ignore the ethical impacts of your products. And I think
we have seen this. And so this is why I want to get into the entrepreneurial space to show that
ethical products can end up being very successful. But also, I think that being an academic
is a great place to experiment with entrepreneurship because I have students who say,
well, do I want to go into industry or academia? And I said, well, you know, why not do both?
You know, the wonderful thing about being an entrepreneur and a professor is that you can take a leave
from your faculty job and go into building a tool. And if the tool fails, you're not going to
lose your house, right? You can go back to work as a faculty member and you can start working on the
next idea. So it's a great incubator. I mean, I think universities in general are great incubators
for ideas. They drive the business engine in the United States and the world. And that's
something that really needs to be understood today is that universities have created so much that
has gone directly into making companies successful, and universities are the economic engine
that powers capitalism. And that's often not understood by students or by the general public.
So I think it's incredibly important, you know, both on a practical level,
but also in terms of what is it going to take to keep the United States in its position of
economic growth and to keep it as an economic powerhouse. We need to support
our universities and fund our universities so much more, because without that, we're going to
lose our technological leadership position in the world. That's a very important point, and I think
it's personally quite exciting to see that your role as an entrepreneur informs your approach
as an academic and researcher and vice versa, right? You think about research and academics also
equally informs your approach to entrepreneurship. And so, you know, obviously this, as you highlighted,
is not your first foray into entrepreneurship. But I'm curious, as you've embarked on this journey,
what's maybe one thing that's surprised you or has been quite different from your work in academia?
You mean as an entrepreneur? Yep. Well, one thing is perhaps how much patience is required.
Patience and persistence. There's so much that you need to do. You need to stretch yourself in ways that
perhaps you don't have to in academia. For example, I'm a very shy and timid person naturally.
So going out to do customer discovery was a real challenge for me. But much the way I was afraid of
flying at first and I forced myself to do it and then I learned from it, entrepreneurship has
pushed me into learning more about myself
into learning more about
what I can accomplish
collaboratively with other people
and the true advantage
of human networks.
And it's been really exciting.
ACM Bytecast is available
on Apple Podcasts, Google Podcasts,
Podbean, Spotify, Stitcher, and Tunein.
If you're enjoying this episode,
please subscribe and leave us a review
on your favorite platform.
You know, that's certainly a very important key takeaway.
You know, I'd love to maybe at a high level touch on some of your core contributions as an academic.
You know, you've done a number of things, and maybe we won't have too much time to go into everything in detail.
But, you know, going back from your time during your PhD, you know, you developed an AR system that, you know, assisted helicopter pilots in hazardous situations, right?
You've done a lot of work in data structures, and you're the co-inventor of the treap data structure.
You've also done some interesting work with Sunfall, building systems for astronomers.
And so as you think about your experience building out these different solutions and systems,
how did you weave together principles from machine learning, visualization, collaboration,
into ultimately one tool or one product?
And as you look back, you know, do you remember a moment where you thought, you know, wow, this is why I built this or why I invented this?
All right. Okay. So that's kind of a lot of questions all rolled into one. Let me start with that. So the common thread among all my work has been this, you know, human-centered data science and human-centered machine learning: how do we, you know, use algorithms
to support human needs.
So the helicopter pilots and invisible airflow hazards work
was a great example of that.
So there was a very critical safety problem
that I became involved in:
many aircraft accidents are caused
by encounters with invisible airflow hazards near the ground.
You know, like vortices, down drafts,
wind shear, microbursts.
And helicopters are especially vulnerable
because they often operate in confined spaces under operationally stressful conditions.
So our augmented reality system actually overlaid real-time airflow visualization
onto the pilot's view in a heads-up display.
So it essentially made the invisible visible.
And when we tested it in a high-fidelity flight simulator, the results were dramatic.
It significantly reduced crash rates among pilots and improved their ability to land safely
in turbulent conditions.
So what I did is I used this algorithmic tool that was specifically focused on a human need
and a critical human safety need.
And then the next question you asked was about treaps.
All right.
This was during my PhD as well.
It was a data structure that my colleague, Raimund Seidel, and I invented,
and it combines the best features of binary search trees with the power of randomization.
So the name comes from combining tree and heap.
It's a really cool idea, I think, because traditional binary search trees can become
unbalanced, and that leads to poor performance.
And there are many techniques for rebalancing trees that work very well, but they tend to be
complicated.
And the treap uses randomness to make this very simple,
and it takes very few lines of code.
And I talked to this one software engineer
who actually implemented it
in his production system.
And he said he deleted about 500 lines
of bug-prone code
using a different balancing algorithm,
and he replaced it with about 50 lines
that implemented treaps.
So it's much less prone to software error.
And by using randomness,
the expected time to balance the trees
is just the same as with the best,
much more complicated schemes.
So that's treaps.
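The treap Cecilia describes really can be written in a few dozen lines. The Python below is an illustrative reimplementation based on her explanation (the names and structure are my own, not the engineer's production code): each node gets a random priority, keys obey the binary-search-tree ordering, and priorities obey the heap ordering.

```python
import random

class Node:
    def __init__(self, key):
        self.key = key
        self.priority = random.random()  # random priority gives expected O(log n) depth
        self.left = None
        self.right = None

def rotate_right(y):
    x = y.left
    y.left, x.right = x.right, y
    return x

def rotate_left(x):
    y = x.right
    x.right, y.left = y.left, x
    return y

def insert(root, key):
    # Standard BST insert by key, then rotate the new node up
    # whenever its random priority violates the heap property.
    if root is None:
        return Node(key)
    if key < root.key:
        root.left = insert(root.left, key)
        if root.left.priority > root.priority:
            root = rotate_right(root)
    else:
        root.right = insert(root.right, key)
        if root.right.priority > root.priority:
            root = rotate_left(root)
    return root

def inorder(root):
    # In-order traversal yields keys in sorted order (BST property).
    return inorder(root.left) + [root.key] + inorder(root.right) if root else []
```

Because the priorities are random, the tree behaves as if the keys had been inserted in random order, which is where the expected balance comes from with no explicit rebalancing bookkeeping.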
All right, and then you also asked about Sunfall.
So this was also a really cool system;
I was working on an incredibly exciting project, all right?
So the Nearby Supernova Factory was searching for
this very rare type of supernova that could be used
to measure the expansion of the universe.
So this is one of the grand challenge problems in astrophysics:
how to study, you know, what's going on with the equation of state of the universe
today. And the challenge was that, at the time, it was really beyond the data capabilities
of the software tools they were using. They had to basically explore about 500 potential images
to find one true supernova discovery. And they had some simple tools that
were doing this, but when I joined the project, there was a team of six people that
was spending four hours a day just manually going through these images. And they had to do it
every day because the software ran overnight to find the supernova. And they properly thought
that it could be automated. But what they didn't realize is that it needed a human-centered
approach to artificial intelligence. So two teams of computer scientists that just took an algorithmic
approach had already tried and failed to build the tool they needed. And then they hired me.
No, it's like, no pressure, Cecilia, we want you to do this. I'm like, ah. So what I did,
rather than simply automating the process, is I started out by, you know, using tools like
contextual analysis and tools from sociology, you know, and I took an ethnographic approach.
And using that, I informed my software development to build the tool that used machine learning
to process these large amounts of data and, additionally, augmented human insight. So we did not
automate away the human insight. I discovered which parts of the problem needed human insight
and which parts could be automated by algorithms. And the result was really dramatic. We went from
six people working four hours a day on basically grunt work to one person working an hour a day.
And the scientists then could turn their attention to solving science problems.
And yeah, and so they started producing lots of papers, lots of data, lots of results.
And it was just a tremendous success.
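The division of labor Cecilia describes, where the algorithm clears the obvious cases and humans review only the ambiguous ones, can be sketched as a simple triage loop. This is a simplified illustration of the general human-in-the-loop pattern, not the actual Sunfall pipeline; the function name, thresholds, and scores are invented:

```python
def triage(candidates, reject_below=0.1, accept_above=0.9):
    """Split classifier-scored candidates into three buckets:
    confident accepts, confident rejects, and human review."""
    accepted, rejected, review = [], [], []
    for cand_id, score in candidates:
        if score >= accept_above:
            accepted.append(cand_id)      # automated: clearly a detection
        elif score <= reject_below:
            rejected.append(cand_id)      # automated: clearly noise
        else:
            review.append(cand_id)        # ambiguous: needs human insight
    return accepted, rejected, review

# Invented scores standing in for a classifier's output on candidate images.
candidates = [("img-001", 0.97), ("img-002", 0.03), ("img-003", 0.55)]
accepted, rejected, review = triage(candidates)
```

The payoff in the story is exactly this shape: most candidates fall in the automated buckets, so the humans' time shrinks from four hours a day for six people to an hour a day for one.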
So did I answer all your questions?
Definitely. I think a couple common threads, right, which I think you highlighted,
earlier on. But, you know, the core objective being: how do we make better sense of information or data?
And whether that's building tools, whether that's inventing new
data structures, whether that's building visualization systems, I think there's a very interesting
common theme around how do we take the best of an interdisciplinary approach. So I really liked your point,
at least around Sunfall, on coming in and revisiting the same problem that had been approached and
attempted by others, and taking a slightly different approach, pulling on some
key learnings and best practices from other areas of research and academia.
And so I think there's a very interesting point, whether it's data structures or algorithms or
visualization, how can we take an interdisciplinary approach to revisit and resolve some of the
same problems?
Absolutely.
I think that's completely true.
And this is, I mean, I think that there are many, many of the problems we're facing today with
artificial intelligence, generative artificial intelligence.
I would love to see more attention paid to hiring people on these teams that have both a
background in computer science and algorithm development and AI and also have a history of taking
a human-centered social science approach to these difficult technical problems.
And I would love to see more people trained in sort of a dual approach like this,
and especially managers looking not just for narrow technical roles, but looking for people
who have the vision to come in and understand the technology, but also apply true human
needs to it. And I think that's a problem that's going to be solved by the C-suites of technology
companies today. And I wish that they would put more work into this. I mean, I think it's an
exciting and important problem. And I would love to see, you know, if you or any of the
listeners do get into roles in industry where you have the power to hire people, that
you consider hiring people with this, you know, multi-pronged interdisciplinary background,
people who've demonstrated they can make contributions both technically and socially.
And I think that's what I try to teach my students today.
And, you know, I think all of them, when my PhD students and my undergrad students
have gone off into industry, they've tended to be very successful because of this
multi-pronged interdisciplinary approach to technological problems, and it's been pretty exciting
to see. Yeah, I think it's not only a nice to have for modern-day organizations, but I think it's
a critical need, right? Absolutely, yes. Same viewpoint. I'd love to maybe turn to the last segment
on some of your work around community and inclusion. You know, obviously you were the first
Latina full professor in your department at...
Not just in my department, in the entire College of Engineering at the University of Washington.
Oh, wow. Okay.
Yeah, 100-year history.
And we have a very large College of Engineering,
and it surprised me to be the first Latina professor to be promoted to full.
But the good news is that since I was, there have been two other Latinas who have been
promoted to full professor in the College of Engineering.
So yes.
We're making progress.
There we go, paving the way.
But of course, even beyond paving the way, you've also done a lot to build community.
You know, going back to some of the entrepreneurial interests, you co-founded Latinas
in Computing.
So as you think about your journey as an academic, there's also this dual role of your personal
identity as a Latina.
So at least for you, what role do, you know, community and representation play in building
the future of tech? And then on the personal side, you know, how has advocacy and community
building intersected with your professional career?
I hope I can remember both questions at once.
But yeah, let me start with community and representation. So I feel that these two aspects
are really fundamental to building technology that serves all of humanity. When we have
homogenous teams building systems, we inevitably embed the
biases, blind spots, and limited perspectives of those teams into the technology itself.
And Latinas in computing was born from recognizing that isolation is one of the biggest barriers
to success for really the brightest people that we need to have building technology,
but that maybe have turned away from it because of a lack of community or a feeling that
there might be hostility or that they don't belong in these technology groups. And this is one thing
that I feel is so important is that what we want in this world is to have the smartest people
building the technology. And the way to get the smartest people in there is by not limiting it
to only the type of people who are already there. So if you say, well, we live in a meritocracy,
well, we really don't.
We want to build a true meritocracy
where the people who will do the best job
of building a system that serves all of humanity
are on the project.
And right now, I've seen too many people
from underrepresented groups
who are incredibly brilliant,
but they say, you know, I don't think this is right for me.
I had a professor who discouraged me,
or I applied to this job and everybody else did not look like me, and so I felt I wouldn't belong.
And those biases are hindering a meritocracy from functioning correctly.
What we really need to do is make sure that we support all people.
And so for me, because I'm a Latina, it's natural that I want to support other Latinas,
but I also recognize that I have biases.
And what's kind of shocking to me is that when I was in a position as a hiring manager,
I found that I was biased against women and Latinas, which is really terrible, right?
And I think it's because of our society.
And so what I had to do is work to overcome those biases in myself.
And I think that's something we all have to do because what I wanted to do was hire
the best people. And what I found is that, you know, when I was quickly running through resumes,
you know, or today if I was using an AI tool to quickly run through resumes, because of societal
bias, it would tend to screen out some of the best people who just happen to be female or Latina
or Black or a person of color, or older. So what happens is that if you use
an artificial intelligence tool to screen resumes, you're going to reflect the existing
biases. If you rely on your own gut feeling, you're going to screen out the best people
because of your own biases. So what we really need is a conscious awareness of these biases,
and we need to build algorithms that incorporate an understanding of this and show it to humans
and say, instead of just, give me the best person. You know, if I did this now with
AI, if I said, screen these 200 resumes and give me the best person, then for a technology job
it would probably use all these features that are based on existing societal biases and spit out
a resume that was more conventional. And what I would like to see is an algorithmic
tool that ideally removes the biases, but I don't know yet how to build such a tool.
I don't think anybody has built one yet. It would be really cool to work on a team that was
dedicated to removing biases in artificial intelligence. I would love to do that.
But right now, I think that's still an open question. And to do that, you need to have representation
from people who truly understand the corrosive effect of bias
in society. And it doesn't matter what gender or race the people running this team
are from, but they must build a team that is diverse not only in race and gender and
skill set, but they have to understand how to incorporate all of this and how to lead a team
that enables the best use of technology to improve our society going forward. And I think that is
possible. It's possible with artificial intelligence, but I don't know if the current system that's
based on just focusing on uptake and financial gain is really going in that direction. I think
we need to go beyond that. And I would love to see universities and companies devoting large
amounts of effort to doing this, because I think in the long run, this is what will build the best
systems. And so there are many people researching this field, and they have often shown that
if you just try to do automation for automation's sake, without incorporating human needs,
you end up with something that may appear to be more efficient or less costly in the
beginning, but then the system will generate problems later on, which will lead to higher
costs for that organization in the long run, like lawsuits because of, you know, problems they
didn't expect. And the way forward is to build in the understanding of human needs from the
ground up. It cannot be slapped on at the end, you know, after all the algorithms have been
built. And gosh, that is so important. I mean, I keep thinking this is maybe my next
entrepreneurial project is to build a team that can do this, because I think we finally
have the technology, you know, with generative AI, which is incredibly exciting. We have it,
and we can build truly unbiased algorithmic systems. We can do it, but it's just a matter of
the leadership that needs to have the will to go ahead and do this and the vision to talk
about, you know, moving forward with it. So, yeah, this is maybe my next project. And I hope
somebody does it. It doesn't have to be me.
I hope somebody else does it too.
You know, it's just so, so important.
And I don't see any companies really focusing on this at this point.
Very interesting.
And maybe just building off that a bit more, as you look ahead, you know, to maybe the next,
I think saying the next decade may be a bit difficult because it's so hard to see where
this field is evolving and going.
But maybe even the next three to five years, where do you see some of the biggest
opportunities for human-centered AI and data science? I think you raise a lot of interesting
points around how we can think about quote-unquote debiasing these systems. Do you see it at the
foundational layer, at the application layer? Do you see, you know, there are certainly a lot of
companies focused on AI safety, on AI fairness. So what are some of the biggest opportunities
or open questions that you think remain as you think about the field of human-centered AI and
data science?
I think the biggest opportunities lie at the intersection of technical capability
and social responsibility. So I see enormous potential in collaborative intelligence systems.
So, technologies that amplify a group of humans' capabilities rather than replacing them.
I would like to see computational power joined with human wisdom. I think, you know,
we need to focus on democratizing access to data science tools and AI literacy, because the
communities that are most affected by algorithmic biases are often those with the least power
to influence how these systems are designed. So changing that dynamic is really both a technical
and a social challenge. It's not just about saying, I'm going to build a de-biased system.
It's about, I'm going to put together a team, a very heterogeneous team of people with all these different skills, and we're going to focus on this as a grand challenge.
I think this is the next grand challenge in artificial intelligence is truly making artificial intelligence human-centered.
And I really feel that, I mean, if I were a young person going into the field today, I would say
I'm going to focus on human-centered artificial intelligence because I think that has the greatest
potential, both for the technological
impact and for the social impact. And I would focus on developing skills in both areas.
I think it's really important not to just say, we're going to hire an ethicist, we're going to
hire a software developer, and we're going to put them together. That's not enough. We have to have
people who have a deep understanding of both of these areas.
I think you actually hit the nail on the head on what was going to be my final question,
which is any advice you give to folks who are looking to pursue a career in computing,
especially maybe those from underrepresented backgrounds.
And I think your point around really investing in not just the technical chops,
but also the sort of holistic, systems-level understanding, a
sociotechnical understanding of how these technologies work and intersect with society and humanity,
I think is an important piece. But maybe any other personal advice from your career and your journey,
I think you also raise some interesting points around biases hindering folks from maybe entering the
field. And so maybe some work to overcome some of those personal biases that maybe get in the
way of folks feeling like they have a seat at the table or a space
in the room. But yeah, open to maybe any other final parting advice or final parting words for
some of our listeners, especially those interested in pursuing a career in computing.
All right. So my final words would be as follows. Okay. So first, find your community. It is really
difficult to do this hard technical work if you don't have support. So don't try to go it alone.
Find people who share your interests, and they can be in person or online.
And second, remember that computing is fundamentally about solving problems that matter to you.
Don't get trapped into thinking there's only one way to be successful in tech.
I tell my students this.
Your unique perspective and lived experiences are assets, not obstacles.
Third, develop confidence in your abilities.
I spent years letting fear and imposter syndrome hold me back from doing what I loved,
which was building really cool technological systems and algorithms.
And learning to fly taught me that I could face any challenge by breaking it down
into manageable steps and practicing systematically.
This same approach works for mastering any technical skill.
And then finally, and maybe most importantly, as you grow in your career, reach back and help
others. You know, the representation battles that we are fighting today, I mean, they can seem
overwhelming, but it's remarkable: we can create opportunities for the next generation. And that is so important
to me. I mean, mentoring others gives me a purpose in life. And I think that is something that I think
many people that I talk to, at least, really want to do. They don't want to go into it just for
personal goals. They want to be a part of society and help others succeed. And so just remember
to reach back and help others. And don't listen to advice that says otherwise. Sometimes I think
there's well-meaning advice given to young people that says just focus on yourself and don't try
to mentor anyone else until after you get the PhD, until after you get tenure. And I think that's
a mistake. I could talk about that at length. I mean, I will say that by mentoring others,
it has actually made me more successful in my own career.
But that's not why I did it.
I did it because it helps all of us.
It helps our community.
It helps our world when we reach back and help other people.
I think those are very beautiful parting words of wisdom and advice.
So find your community, solve problems that are important to you,
be confident in your ability, and always leave the door open for others to follow,
and be sure to reach back.
So I think those are wonderful nuggets for our listeners.
Professor Cecilia, I think we've learned a lot about your work
in building bridges between people and data,
whether it's through new data structures,
whether it's powerful tools or immersive systems.
I think the common thread and at the heart of your work
is really a vision for human-centered data science
and human-centered technology that really empowers people
in society more broadly to make sense of the massive amounts of information that exist in the world today.
And so I think as technology continues to grow in its importance, I think your work and
your contributions will continue to be even more and more important.
So thank you for joining us on ByteCast, and thank you for all of your amazing contributions.
And thank you so much for having me and for your insightful questions, Bruke.
I have really enjoyed talking with you.
This was a lot of fun.
Amazing. Thank you so much, Professor.
ACM ByteCast is a production of the Association for Computing Machinery's Practitioner Board.
To learn more about ACM and its activities, visit acm.org.
For more information about this and other episodes, please visit our website at
learning.acm.org slash B-Y-T-E-C-A-S-T.
That's learning.acm.org/bytecast.