ACM ByteCast - Nashlie Sephus - Episode 59
Episode Date: October 23, 2024

In this episode of ACM ByteCast, Rashmi Mohan hosts Nashlie Sephus, Principal Tech Evangelist for Amazon AI focusing on fairness and identifying biases at AWS AI. She formerly led the Amazon Visual Search team in Atlanta, which launched visual search for replacement parts on Amazon Shopping using technology developed at her former start-up Partpic (acquired by Amazon), where she was the CTO. She is also CEO of Bean Path, a nonprofit startup developing the Jackson Tech District, a planned community and business incubator in Jackson, Mississippi. Nashlie earned her PhD from the School of Electrical and Computer Engineering at the Georgia Institute of Technology, where her core research areas were digital signal processing, ML, and computer engineering. She has been featured in The Atlanta Journal-Constitution, CBS kids’ show Mission Unstoppable, Black Enterprise, Ebony, Amazon Science, AWS re:Invent, Afrotech, and Your First Million podcast, among others. She also serves on several start-up and academic advisory boards along with mentoring others and investing in Atlanta-based start-ups. Her honors and recognitions include the BEYA 2024 Black Engineer of the Year Award, Mississippi Top 50, 2019 Ada Lovelace Award, and Georgia Tech Top 40 Under 40. Nashlie describes her early love for mathematics and music and how these informed her later doctoral research in digital signal processing in music data mining. She shares a personal experience that deeply influenced her work in AI, particularly in responsible AI and fairness, which eventually led her to her current role mitigating bias at Amazon, notably in facial recognition technologies. Nashlie and Rashmi discuss the importance of building diverse teams for practicing responsible AI and building sound products, as well as collaboration with open consortia and organizations such as the Algorithmic Justice League and Black in AI.
Nashlie describes the inception and growth of Partpic, an app she started developing while finishing school. She also talks about BeanPath, her nonprofit organization with a mission to bridge the tech gap in Jackson, Mississippi through makerspaces, networking, and community engagement. Links: BeanPath
Transcript
This is ACM ByteCast, a podcast series from the Association for Computing Machinery,
the world's largest educational and scientific computing society.
We talk to researchers, practitioners, and innovators
who are at the intersection of computing research and practice.
They share their experiences, the lessons they've learned,
and their own visions for the future of computing.
I am your host, Rashmi Mohan.
You can't be what you don't see. There is nobody who lives by that quote more than our next guest.
An inspiration, a role model, Dr. Nashlie Sephus is a computer scientist and an expert in artificial
intelligence. Having been a beneficiary of strong
mentorship and guidance, she pays it forward on a daily basis. Dr. Sephus serves as a technology
evangelist at Amazon Web Services AI, where she works on eliminating bias in machine learning
models. She was CTO at Partpic, a technology startup working on visual search that was
eventually acquired by Amazon,
where she led the visual search and AR team as an applied scientist until recently.
She is the co-founder and CEO of BeanPath, a nonprofit organization committed to transforming
Jackson, Mississippi into a tech hub. Dr. Sephus is also a prolific speaker
and was awarded the Ada Lovelace Award in 2019.
Nashlie, welcome to ACM ByteCast.
Awesome.
Thank you so much for having me.
Entirely our pleasure.
I'd like to lead with a simple question that I ask all my guests, Nashlie, which is, if you could please introduce yourself and talk about what you currently do, and also give
us a little bit of insight into what drew you into this field of work.
Absolutely.
So I'll say by day, I'm a principal AI scientist at Amazon and my team focuses on responsible AI.
So we do a lot of testing evaluations on all of our AI models for all the different web
services or cloud computing offerings that AWS has, which is a lot. And so it covers areas like
NLP, voice, image, video, also text, time series data, so many different types of data and different
applications that people are using our services for. So we have to make sure that we understand all the use cases
as much as possible, and we're being transparent about how well it works. And so I say by night
and weekends, I'm a CEO and founder of a nonprofit based in Jackson, Mississippi,
that's doing some amazing work in the community to help people get access and exposure to tools, knowledge, and
the network to help strengthen that tech ecosystem and bridge the tech gap in places like Jackson,
Mississippi, where we're based.
Fantastic.
You'll have to tell us how you have more than 24 hours in a day to do all of this.
We're all seeking that answer.
But it's so fascinating to hear about your work.
We'll talk more about that.
But what I'd love to hear, Nashlie, is why the interest in computer science?
What brought you into it?
I know that you got into it when you were in middle and high school.
So what was the motivation?
Oh, man.
I remember just really being interested in math growing up.
That was always my top performing class.
I also was a musician, played the piano,
played percussion. And so there's a lot of math actually involved in being a musician. And so
I believe that along with my upbringing, I always say I grew up in a house full of women. And so
my mother, my grandmother, my sister, we all had to be very hands on with things.
Right. So there wasn't like a man you could say, hey, take the trash out.
We had to take the trash out ourselves. We had to mow the yard and we had to.
We were doing things like changing ceiling fans. And I just remember all that hands on work.
It really made me interested in how do things work and how do we fix them?
You know, money was tight, so you can't always call someone to come and
repair things. Sometimes you just got to do it yourself. And so I really believe that that led
me to an interest in something I didn't know the name of until the summer after my eighth grade year, when my eighth grade teacher sent me to this engineering camp for girls. It actually was
at the time sponsored by the Society of Women Engineers. And it was located at Mississippi
State University where I ultimately ended up going to undergrad. And I remember that it was just
amazing. It just blew my mind and all the things you could do learning about all the different
types of engineering, especially computer engineering, which is a little bit of both of electrical engineering and computer science.
And I thought it was so cool how you could type these letters and numbers into the computer
and it could control pretty much anything. And literally, I know that eventually the world will
run off of these computers and computer programming languages, you know, but it was just fascinating to me at the time. That's amazing to hear, Nashlie. I was reading
a little bit about your background. And I know, I mean, in terms of your bio, and I know in your
free time, you said you, you know, you like to DIY. And I was like, I was meaning to ask you,
like, where did that interest come from? And now it's pretty obvious. I mean, I think, you know,
hands on experience, whether that is just by need or by
interest, is possibly the best way to learn about how things work and truly a mark of an engineer.
I also find it very fascinating that you talk about math and music, because one of the things
that I also read about was that your PhD dissertation was in a field, and I think you
worked on digital signal processing in music data mining.
That's not very common for a computer scientist either.
So I'm very intrigued by that.
And I would love to hear and love for our listeners to hear about
how did that come about?
Why did you pick that topic?
And what kind of key problems did you encounter while working on your PhD?
Oh, wow.
You really did your research too, because no one speaks about my dissertation.
It was so long ago, but you're right. It was kind of like a culmination of my music background and
signal processing. At the time, digital signal processing was kind of like the core and the
basis of machine learning and how do we extract these features out of signals and be able to better recognize what's in the signal without having to, you know,
actually hear it or actually having to interact with it. And so I remember at this time when I
started grad school, I was actually really into the Shazam app. I just, I thought it was so cool
that you could use that app and listen to any song and it could tell you what the song was. Now, at the time, this was brand new.
So it was like this wasn't anything anybody was used to and I said, wow, how does this thing work? And I found out
you know, it's based on pattern recognition and machine learning and
information retrieval and music signals and so
That's what I was interested in. And so
at this time, I was starting grad school at Georgia Tech. I needed to pick a topic.
So all the grad students out there understand that is a very daunting process sometimes.
But I had to just go with my passion. I mean, I have to do things that I believe in, do things
that I'm passionate about, because that's what keeps me motivated, especially when it gets tough. And so if you know anything about getting a PhD, especially getting
one at Georgia Tech or any top engineering school, it's very difficult. It's not for the faint
of heart. And I think it really made a stronger person out of me to go through that.
But it was focused on music information retrieval: figuring out how to use frequency analysis to understand what's in an audio signal, and also how do you separate these different sources, which we call source separation in the signal processing world. And how do you take that and make inferences about those types of signals, you know, in the future?
And so that was kind of what my thesis was based on.
There are definitely so many lessons in there, Nashlie, based on what you said.
You know, one is getting a PhD is not simple.
A large part of your PhD time is spent on trying to determine what problem you want to solve.
The second thing is
around passion, right? Picking a subject that you're passionate about so that when you have
really tough times, you can really lean in and say, okay, I really enjoy this subject. And so
I'm going to stick through this hard part and try and get through to the other side.
So it's pretty amazing that you were able to combine those two and find and have the grit to be able to sort of, you know, work through that process and really enjoy, you know, your graduate
school journey.
And then, so that was probably the introduction to sort of just, you know, moving towards
AI and, you know, I'd love to see how did that sort of translate into, you know, talking
mostly about bias.
But I also, I mean, I'd love to
understand what was your journey. So let's start from there, right? What happened after the PhD?
Yeah. So actually, even before I finished the PhD, AI, machine learning, it actually wasn't
even referred to as AI as much at the time. But machine learning, pattern recognition,
those same techniques that I would apply to audio
signals and music signals, you could apply to speech. You could apply to video. You can apply
it to images. It's just adding more dimensions to the data set, but the techniques are actually the
same. And so when I found that out, because I had to do other
work for my PhD, I did work in all those other areas, including even brain signals. And so it's
just amazing how, you know, it's all math at the end of the day. The data, once you format it the
correct way, it's exactly the same math to find those patterns and different
techniques that you can find that work better for some types of data than other types of data.
And that's when machine learning was born. And so somewhere between me finishing my PhD
and starting to work at the startup company, I started doing research and learning more about the different Gaussian mixture models, the large image datasets like ImageNet at the time, and some of these other algorithms. Convolutional neural networks were reborn. A lot of people credit this to Professor Yann LeCun and his research.
And I believe that he was able to basically find a way to train these models a lot quicker than what it used to take.
And so now when you have quicker training, you can also train something on your laptop.
You can do it. At that time, we were training simple,
smaller scale models. It would take days. It would take almost a week. Now, fast forward to now,
we can train simple models within minutes with the computational power that we now have,
and also moving to the cloud. And so before even some of the work that I did in grad school,
we had a server in the lab.
And that's a thing of the past, at least at this time.
Now you have so much computational power in the data centers, whether it be AWS or Google Cloud, that you can now process things a lot faster.
And so those two things together, speeding up the model and also being able to have processing power that wasn't
necessarily local.
If you didn't have access to it, you can also get it on the cloud.
Those two things really sped up the research of AI.
And so now you start to see all sorts of applications that are not just in the journal publications,
but they're in the consumer world too. And so it's pretty remarkable
when you look at that whole journey of how we got to where we are now with AI and even improved
interfaces that make it where my grandmother can literally use AI. Those are very, very valid points, Nashlie, which you talk about, right? I think eliminating the infrastructure issues as
well as the processing speed and then the speed of the models themselves,
allows you to focus on solving, you know, these applied problems that are real world issues that
can then, you know, provide solutions for consumers. How did you get into the, I mean,
I will talk about your startup in a bit. But how did you get into, you know, particularly interested in eliminating bias in AI? So we talked about data sets, we talked about, you know, the
ability to process them in a much sort of more expedient manner. But I know that, you know,
you're particularly interested in eliminating bias, you've spoken about, say, personal experiences
with products not working well for you. Was that a motivation? Would you care to elaborate?
Sure, sure. So I do have a quick story. So when you think about biases in technology,
and one of them, I remember back in, I think it was around 2008, 2007, I was finishing up undergrad.
I was doing a longer internship in between undergrad and grad school. I actually was working at Delphi, a company that designed the Bluetooth radios in the Toyota Camrys and some of the other car systems. And so
we would have to test the voice recognition because, you know, back then we used to have to, you know, dial numbers
by speaking them into the microphone in the car system. And so they would always use my voice
to test the system because I had a Southern accent and, you know, this was in the Midwest
at the time. And so it would always get certain numbers wrong when I would say them. And so they
would literally use me to test
the system that I was working on. It was pretty funny, but I was like, yeah, this thing is pretty
much useless to me. If I can't, you know, if it can't understand certain numbers, I can't literally
dial my own number or dial my home number or my mom's number. And so it really makes you think
back to, you know, when people design these systems, you know, who do they have in mind?
And so it's come full circle now that I focus on responsible AI and fairness and evaluating biases, unwanted biases in AI.
Now, even at Amazon as a scientist there. And that actually came about during 2019 when I was actually switching from one team over to another team that now focused on face technologies.
And at this time, there was a paper called the Gender Shades paper that pointed out some of the disparities in some of these commercial algorithms and services, including Amazon. So I was called to that team to investigate and help,
and we were able to mitigate those biases and disparities in certain groups and also understand
what was causing the bias. And so it was actually, it was something that we didn't even
expect. It was actually hair length. So there can be biases in systems that you don't even know about, that you wouldn't even know to test for.
And there's a methodology that you use to actually go about how you evaluate systems. And so we were able to improve upon that system and that process and then take that same methodology and apply it to all of our other systems.
So at the time, we focused on this team and then we built a team that focused on all of the products as a whole. And we were
able to move forward building more methodologies, which now today we focus on many different,
we call dimensions under responsible AI, including fairness, transparency, governance,
explainability, robustness. There's also, in the age of generative AI now, how do you check for veracity and against hallucinations and copyright information? And so
there's so many things that go underneath that umbrella. And back then, all we were concerned
with was security and privacy, in terms of even cybersecurity, but it's a whole other realm now that we have to understand and make sure that we look into.
And how do you hold these companies accountable for these products?
Not just the large ones, but the smaller startups, because one day, you know, those startups
are going to be bigger companies too.
And so working together with consumers, with public policy, with government, we can figure out how to better enforce and regulate these technologies and compliances.
Yeah, no, absolutely.
I think, you know, I also found it funny, though, you were talking about the Bluetooth radio in your car.
I mean, it's such a testament to having a diverse team, right? And building products for, you know, I mean, having you on that team very quickly helped
them identify that the product was not built for everybody, which is fascinating.
We talk a lot about diversity, but I think this is, you know, such a key example to show
the valid, you know, the need for us to have a group of people who are building a product
that actually have very
different perspectives. Yeah, absolutely. I would love to hear more about, you know,
when you talk about responsible AI, and Nashlie, you also spoke about the fact that, you know,
we need to have governance around this. So in terms of, I mean, one, you know, you work for,
you know, an organization, and obviously, you're building tools to help sort of, you know,
eliminate bias within your own products. Are you also involved in sort of like more of an
open consortium where you're starting to define these, you know, the guardrails for responsible
AI and sort of having, are you seeing interest from other groups and other organizations as well?
Oh, yeah, there's so much interest now. A lot of it is credited to
that paper, Gender Shades, the authors Joy Buolamwini and Timnit Gebru, and a lot of the work that was done around that time with her organization, the Algorithmic Justice League. We also have an
organization called Black in AI that does a lot of the conferences.
They have workshops at a lot of the conferences, such as the NeurIPS conference.
I've also seen specific tracks dedicated to responsible AI at larger conferences like CVPR, the computer vision conference, ICML, even ACM conferences.
And so a lot of these areas are still nascent. So there's so much
more room for research to be done in these areas. And also thinking about the larger companies,
for example, at Amazon and AWS, we still publish papers. My team actually consists of scientists and fellows, or scholars as we call them, who are actually professors with joint positions at universities and in industry.
And so they're able to, you know, leverage the theoretical background and research and also apply it to the practical products that actually get deployed to consumers
every day. And I think that is a very interesting intersection because what's in theory is in theory
and what's in practice can be different than that. But a lot of times it is influenced. So the more
that we work together and talk about it, I think is better. Absolutely. I mean, I can also imagine the
immense benefit that students of some of these professors might have in basically just having
a foot in sort of the real world and, you know, in the, in like the applications of the theories
that they are developing. That sounds like an amazing sort of, you know, collaboration.
Yeah. One of the other things that I did want to talk about,
Nashlie, was also in terms of, you know, when you're talking about eliminating bias in data,
I know you have also spoken about just the data set lifecycle, right? So are there any best
practices that you would like to share in terms of, you know, just keeping training data fair
and free of bias? How do you sort of continuously learn and counter your own
biases? Like, are there tools available? What would you suggest is a good way to kind of think
about that problem? Absolutely. So a good way to test for biases in your data, we often say,
look at not just the groups, the different groupings of the data, but look at the subgroupings of the data. And so that can be, for example, women who live in the U.S. who are non-native English speakers,
or that can be people who live in a certain zip code who make a certain amount of money.
And so that's really where you find some of your disparities when you look at the error
rates amongst those subgroupings and not just the high level groupings. And so sometimes the deeper
you dig, the more you'll find. And I think in many cases, it's fine. Often it's fine to have biases
as long as you're transparent about it. If you're selling a product and being transparent about the intended uses,
the unintended uses, that's something that we call a model card, or at AWS, we call it our service
card. It basically explains, hey, we tested this model and this is the type of data set we tested
on. This is what we found to be best case, best practices for this particular model to hopefully inform the user
about how to best use that product, because it's very difficult to get even error rates across
every single group. And so oftentimes you will have bias, but as long as it's not unwanted bias,
you know, you're usually okay. I think also sanitization of the data is very important.
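To make the subgroup idea concrete, here is a minimal sketch, not any AWS tool, of computing error rates per intersectional subgroup rather than per top-level group. The attribute names and records are invented for illustration; in practice they would come from an annotated evaluation set.

```python
from collections import defaultdict

# Toy evaluation records with invented demographic attributes, used
# only for offline fairness auditing of a model's predictions.
records = [
    {"country": "US", "native_english": False, "correct": False},
    {"country": "US", "native_english": False, "correct": True},
    {"country": "US", "native_english": True,  "correct": True},
    {"country": "US", "native_english": True,  "correct": True},
    {"country": "UK", "native_english": True,  "correct": True},
    {"country": "UK", "native_english": False, "correct": False},
]

def subgroup_error_rates(records, keys):
    """Error rate per intersectional subgroup (e.g. country x language),
    not just per top-level group."""
    totals, errors = defaultdict(int), defaultdict(int)
    for r in records:
        sub = tuple(r[k] for k in keys)
        totals[sub] += 1
        if not r["correct"]:
            errors[sub] += 1
    return {sub: errors[sub] / totals[sub] for sub in totals}

rates = subgroup_error_rates(records, ["country", "native_english"])
for sub, rate in sorted(rates.items()):
    print(sub, f"error rate = {rate:.2f}")
```

Digging one level down like this surfaces disparities, such as the higher error rate for non-native English speakers in the US here, that a single top-level "US" error rate would average away.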
Again, AI is all built on the data. That's where it starts.
And so, you know, we've heard the saying garbage in, garbage out.
You know, if you're using the calculator and you type in the wrong numbers, then yeah,
you're going to get the wrong output.
The same thing works with these AI models.
And so having extra care in how data is annotated, maybe even thinking about how you have diversity in the people
annotating your data, or make sure that they're properly trained and go through some process to make
sure that there's some consensus amongst those people that are annotating the data and making
sure that there are no incorrect labels, annotated labels in that data. Because I remember back in 2019, we were looking at
lots of data sets that were operational, like off-the-shelf data sets that had already been
in the market. And the annotations were just flat out wrong. They were, you know, missing data.
It was incorrect data. And so, you know, if you think about all the companies that were
using these data sets from the beginning, you know, that's terrible in terms of whatever
other applications they were building, and it further cascades more and more issues into this whole system. And so it just shows you how important it is, just in the data alone, to make sure that, you know, you're taking very much care with the biases in the data and in annotating the data. And one more thing I'll say about the
data, a lot of times you would think that, okay, if I have an even amount of data for each group
or each subgroup, then my algorithm should be fine, right? Well, that's not always the case.
And so algorithms,
machine learning models are all based on statistics. You know, there's a percentage
that you're right, there's a percentage that is wrong. There's even a confidence score as to how
much, how confident are we that this result is the right result or the wrong result. And so with that,
it depends. Sometimes you need a little bit more training
data for one of those groups or subgroups than you would for other groups. Key example,
in some data sets using text, for example, we did this study with a group of people in a workshop
and basically the model was consistently mixing up the number
seven and the number one in these handwritten digits.
And so we had to include a little bit more training data for the number seven and the
number one, but all the other digits were fine.
We didn't include any extra data for those and the performance did not decrease in those
areas.
And so it just goes to show you the difference between what we call equity in data instead of equality. And sometimes some groups need more attention or
more resources than other groups in order to achieve the same tasks.
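As a rough illustration of equity over equality in training data, one might add extra examples only for the classes that underperform, as with the ones and sevens above. The error numbers and the sizing heuristic here are invented, not the workshop's actual method:

```python
# Hypothetical per-class error rates after a first training run; the
# digits 1 and 7 are confused, mirroring the handwritten-digit anecdote.
per_class_error = {str(d): 0.02 for d in range(10)}
per_class_error["1"] = 0.15
per_class_error["7"] = 0.18

def extra_samples_needed(per_class_error, target_error, per_point=100):
    """Equity, not equality: request `per_point` extra training examples
    for each percentage point of error above the target, and nothing
    for classes already at or below the target."""
    return {
        label: round((err - target_error) * 100 * per_point)
        for label, err in per_class_error.items()
        if err > target_error
    }

needed = extra_samples_needed(per_class_error, target_error=0.05)
print(needed)  # only "1" and "7" get extra data; the other digits get none
```

The point is the shape of the policy, not the exact numbers: the well-performing classes are left untouched, and their performance need not degrade.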
ACM ByteCast is available on Apple Podcasts, Google Podcasts, Podbean, Spotify, Stitcher,
and TuneIn. If you're enjoying this episode,
please subscribe and leave us a review on your favorite platform.
That's such a great point you make, Nashlie. I definitely want to talk about equity and
equality. And thank you so much for sort of really talking in detail about how to think about data and
how to think about eliminating bias in data.
I have sort of a shootout question.
Say a company that is trying to put out a product, there is always a pressure to sort
of get, you know, time to market pressure, right?
You want to get your product out as quickly as possible.
Is it easy enough to determine inaccuracies in your data as well as inaccuracies in how your product is applied such that it prevents people from sort of rushing through this process?
Because like you were talking about it, you want to make sure that you have diversity in sort of the people that are annotating the data.
And to be able to get that group of people and to be able to sort of do this in a patient manner to
actually get it right takes time. So I'm just wondering, is there a way, is there inherent
checks and balances to make sure that we aren't rushing through this?
Yeah. So the whole governance piece of that is very relevant because you can have different
people, different components. So there are many stages of the
AI pipeline, even, or even just product development in general. And so, you know,
there's a conception stage in AI, there's the data, cleaning your data, there's training the data,
there's testing the model, there's more iterations, and it all keeps going over and over and over.
And so if you have at one end of that pipeline someone who's not doing what they're supposed to be doing, they're not adhering to the AI lifecycle, which as you say is very much an iterative lifecycle, as opposed to traditional product development, where usually, you know, you have very straightforward software requirements.
You deploy that product, you test it beforehand, and it is fine.
As long as the environment doesn't change, or even if the environment does change, the system still performs the way it's supposed to.
Now, in contrast, an AI model can vary. So we can have another version of the model we deploy, and the results may be different for certain groups. Whereas certain groups may have performed very well in that model before, now you may have some groups that perform well, but other groups that performed well before now don't perform well.
You also have situations where you deploy a model and it is very susceptible to the changes in this environment.
And we call that model drift, where your model now has unwanted biases, whereas when you first deployed it, it didn't.
And so an AI system is very different. And nowadays we have systems that are multimodal. So you have
some that have agents, you have, especially with the generative AI world that we're in now,
we have outputs from these large language models that we've seen, Claude, OpenAI, and then you add
on filters on top of that, guardrails on top of that,
other models and agents and bots. They help you connect all these models together in this hybrid
system. And every stage of that system, you have best practices that need to be followed.
And every system, essentially every subsystem works together to make the whole better in general.
So there's a little bit more involved, which is why with AI, these governance practices are very important.
Understood. Thank you, Nashlie.
That's very reassuring to think that people are thinking about governance and thinking about how to sort of, you know, enforce this across the industry as well.
I want to pivot a little bit to talk about your
sort of entrepreneurship journey. And Nashlie, I know you have experience working in a startup and
as their CTO very early in your career. In fact, from what I understand, you were still a student
when you joined Partpic. I was wondering, what did Partpic do? What was the problem you were trying to solve? Yeah, so Partpic was an idea that was started by CEO Jewel Burks, now Jewel Burks Solomon.
She had the idea while working at a parts company in customer service, where people would call in asking for, hey, I need this part, I need that part.
And this was a large company.
And so they would often, you know,
try to describe the part, you know, and how do you describe something that you really don't know
what it is, right? So they'll say, hey, I need this thingamabob. It's, you know, about the size
of my hand. It's black. So she said, hey, can you just take a picture, you know, save us both some
time. Just send me a picture of the part and then I can send you the correct part.
So then she had the idea, what if we cut out that whole interaction and just make it where they upload a picture to an app and the app uses some sort of process to recognize the part. And then
we send them a link to where they can purchase and order the part. And so she did more research.
She found out that parts companies have,
you know, millions of dollars a year that they spend on, you know, sending the wrong part and
having to send the correct part. So there was an actual business case to this idea as well
that she actually experienced. And so she needed some assistance building this product.
And so we were connected via a mutual friend and I was able to help build
the first prototype and train some of the first AI models that would recognize these parts. And
we would say, for example, hey, take a picture of the part. The app will walk you through taking a
picture. We would ask you to put a size reference next to the part, so like a penny. And we were able to say, hey, this is a hex bolt. It's two inches long. It's half an inch base diameter. And it's stainless
steel number two. And this is where you can buy this part. And so that was what Partpic, the app,
actually did. And you're right. I was CTO. I was actually really just helping out at first
because I was finishing up at Georgia Tech. And she hadn't raised a lot of money for the app and for the business. And so I actually graduated,
ended up working at a consulting firm in New York City. And I remember getting a call like
nine months later after we built that first prototype. And she was like, hey, we raised
the money and we need you to be our CTO. And I was like, oh.
And fortunately for me, I was in New York City at the time and it was like 20 inches of snow on the ground.
I was like, OK, I'm happy to come back to Atlanta and be back in the South and where there's warm weather and close to my family in Mississippi. And so I was very motivated to take that role, even though I took a pay cut.
I didn't know anything about startups at the time.
Interestingly enough, in the era that I came through grad school, there wasn't a lot of talk about entrepreneurship and startups.
You know, getting my degree in computer engineering, even my Ph.D., I think maybe in the last semester there may have been, you know, different programs that we could enter into, or maybe there were and I just didn't know about them.
But this is a whole world that I had to learn about, this startup world.
And how do you raise capital and how do you make a business case for your technology?
Because building great technology is cool, right? As technologists, that's what we want to do.
But you have to be able to market that product because ideas are cheap, but it also has
to be a feasible product. And so we were able to all work together as a team, create this company
that fortunately we ended up selling to Amazon. Ironically enough, I was presenting
on stage as CTO of Partpic after I came back to Atlanta. I remember this was in Boston in May of 2016. And I was
talking about the technology. I came off the stage. I really wasn't even supposed to be at
that conference. Jewel was supposed to be there, but she had another engagement. So I decided to
step up and take that on. And I remember presenting and I came off stage and that's
where the Amazon guy, who was in the audience,
connected with me and we ended up selling the company to Amazon three months later. And I was
like, wow, who would have thought that this journey would lead here? And then the majority of the team
joined Amazon. So I came into Amazon in a very non-traditional way. I didn't even know I would
still be here by now, but I am there almost eight years later. And so it's been a very interesting journey.
What a phenomenal story, Nashlie. I mean, I love the fact that you actually identified a problem,
you know, while working a very different job in customer support at a parts company,
and had the vision to say, okay, there is a technology that can solve this problem in a much better way.
I also love the fact that you joined the company while you were still a student and took on a
senior technical leadership role as practically your first job.
You know, are there some inherent sort of traits or like, you know, lessons you learned because of that experience that have stayed with you and served you well?
Absolutely. That was my first time being in a management role.
And I learned that in people management, you actually have to be a little bit of many things.
You have to be a little bit of a mother, a little bit of a counselor, a little bit of a motivator. You've got
to get people at their best so that they can be productive, because that's what makes the whole
company work: everybody does their part, and it contributes to
the whole. And so I don't just think, I know for a fact, that I wouldn't have gotten that opportunity
in any corporate America company that I may have been working at.
I definitely didn't have that opportunity where I was working at the time.
And so actually it was the opposite.
It was very much so your typical corporate America, being the only one, being the woman that wasn't heard oftentimes in some of the meetings and feeling like, you know,
you're always behind. And it was just the total opposite. Like I often compare it to the Marvel
movie Black Panther. In that movie, Shuri is the top tech lead, and she's actually a Black woman. And
so I often compare that to me being Shuri at Partpic.
And it was an awesome feeling. Imagine the feeling of no glass ceiling. You literally
do what is needed. You have none of the extra anxieties or imposter syndromes, and you are
able to just be productive. And it was an amazing feeling that led to me being,
you know, a manager even at Amazon once we got acquired and, you know, managing teams for
years to come. I'm an actual individual contributor now, but I wouldn't have known I had that skill
set if that opportunity had not presented itself. What a fascinating story, Nashlie. I completely understand
what you're talking about in terms of the level of confidence that you could gain from that
experience. And when you talk about no glass ceiling, it's amazing because once you've played
the role, and I feel that's true for many individuals, mostly women, once you've played
the role, you realize that, oh, that was not so hard and I can do that. But until then, you're always sort of doubting
yourself and always thinking you need to do two more things before you can sort of aim for that
bigger role. So it's really, really very heartening to hear what you're talking about. And you're a
serial entrepreneur, because now you're the founder of the Bean Path, a nonprofit organization.
You are on a mission to change the narrative of what is possible from a city in the deep
south of the US.
So I'd love to hear more about the Bean Path and your vision for Jackson, Mississippi.
Yes, yes.
So obviously, having all of this success, especially early in my career,
and having joined Amazon, actually when I turned 30, that's when I asked myself, okay,
have I made it? Is this the promised land? Is this where I want to be? Is this where I want to end up?
And I actually felt like this wasn't the end. There was so much more that I wanted to do, that I could do,
especially now with a lot more financial resources. And so I decided, actually, I remember
talking to one of my colleagues at work and he was actually an Indian gentleman and he lived in
Silicon Valley, San Francisco Bay Area.
And he was on one of our teams.
And we were just talking at lunch.
And he was saying, yeah, I own two shopping malls in India.
And I was like, wait, what?
Wait, you can still do that and still do what you're doing?
And he was like, yeah, people do it.
And that was really the first time that I knew that.
Again, we're often taught,
you go to school, you get a job and you retire and that's it. I didn't know that there was this whole entrepreneur world. Like I mentioned, I didn't even know anything about startups,
even though I was working at a startup, but I definitely didn't know anything about this world
of working in corporate and still being an entrepreneur. And there are so many people that do
it. I've come across people that own gyms and smoothie shops, and they're able to create
businesses outside of their work because, as you know, working in the tech field, it can be very
lucrative. And so there's a lot of things you can invest in with your money and with your capital,
including real estate. And so I thought along with, I want to do something more for my hometown and my community
that I grew up in and how do I pay it forward? And so I went to Jackson and I remember looking
to start this organization that would help people move forward with technology and
provide the community with the tools and access to the
expertise they needed in order to bridge this huge tech gap that we have, not just in
Jackson, Mississippi, but a lot of places across the Southeast. And I think there's, you know,
more disparities even in Mississippi, given that the majority of people in Mississippi actually live
below the poverty line. So there's a huge lack of access even to the internet. There's a lack of access to just having and being in this conversation
about the tech ecosystem and what's possible. I believe tech can be a game changer for someone.
It can really change your lifestyle. It can change your trajectory. It can change generational wealth.
And so I wanted to impart that back into the community. So I started the Bean Path. The name Bean Path comes from, as
my coders out there know, bean actually being a programming term in the Java language,
a small component that you can add on and extend as much as you need to
to make it do more, like a building block. And most people, though,
think of bean as a seed. You put it in the ground, it grows. And I'm a gardener as well. So
I have some beans out there, you know, every now and then, and they grow into this vine that just
kind of takes over. And so we were trying to help people find their pathway with technology.
And so that's where the name Bean Path came from. We started in the local libraries
in Jackson, Mississippi.
And before we knew it, people were lined up at the library doors coming to get this tech help. We were setting up a tech help shop. There would be everyone from people's grandmas to, you know, kids who, you know, just wanted to see what an engineer looked like.
What does an engineer do? What does it mean to major in
computer science? And we provided that access right there for them in the communities.
And so I then proceeded to purchase land in Jackson, Mississippi, actually downtown,
so we could scale what we're doing. One thing I learned at Amazon is how to scale.
And so I also knew that the wealthy own real estate. So not that I thought
I was wealthy or anything, but I just thought it was a good idea. So I decided to purchase property.
We now have 22 acres and eight buildings in downtown Jackson. And this is in an area that is
very much so in need of revitalization. And so to date, we have two buildings that we've renovated.
One is the Bean Path headquarters and makerspace. It's the first makerspace in downtown Jackson. We do a lot of activities there, everything from AI and robotics to 3D printing, things that they aren't used to, and being able to merge the two in STEAM.
And then also our second building is our big event venue, which is actually called the Bean Barn.
It's funny because when I went to purchase the Bean Barn, they already called it the Bean Barn.
They actually used to process soybeans and cotton in this facility. And so
it's a huge 17,000 square foot barn in the middle of the city. And so we purchased the bean barn
and now we have other activities there, like we're getting ready to start food trucks and yoga and
skating lessons and things like that. So really bringing the community out and creating a hub for
the community. And indirectly,
before people know it, they're learning about all these emerging technologies and helping to move
the whole community forward. You're such an inspiration,
Nashlie. Just listening to you talk, I can feel the passion in your voice. And I love that you're
applying the skills that you're learning in your day job and really inspiring an entire
community of people.
And I strongly believe that one person dreaming big and making things happen can uplift their
entire community.
And that's exactly what you're doing.
It's fascinating.
And I wish you all the very best for all that you envision to do with the Bean Path.
This has been such an amazing conversation.
For our final bite,
Nashlie, I'd love to hear from you. What are you most excited about in the next five years,
whether that's with technology, AI, or with what you want to do with the Bean Path?
Yeah, so I'm really excited about the future of AI and the fact that I can now have this AI
conversation with a lot of people, because we tried to have this conversation 10 years ago and people didn't know what we were talking about.
But now people know what we're talking about.
So we can now have this conversation and hopefully that levels the playing field a little bit more.
Hopefully people are a little bit less intimidated with AI and the technology.
And along with learning the limitations, hopefully they also learn how to use it to their benefit in their day-to-day lives. I'm also hoping that this opens up the world for more
innovation, especially in areas that didn't have access to this technology before. I'm very excited
that more companies are thinking about AI in a more responsible way than before. And that our
government is also concerned about how we continue to ensure that people use
it in a responsible way. So we will continue to lead the charge there. And then lastly,
with the Bean Path, I'm excited for growth. The Bean Path, the vine, is going to keep growing.
It's going to keep dropping seeds in other locations and maybe even scaling across other
cities like Jackson that could use something like
the Bean Path to help embed these tools, knowledge, and networks in the community to bridge the
tech gap.
So as long as I'm here, I'm going to keep doing my best and what I can do to help move
things forward and partner with anyone who's interested and excited about the future.
Fantastic.
Thank you for the thoughtful and inspirational work that
you do. And also thank you for taking the time to speak with us at ACM ByteCast.
Thank you so much for having me.
ACM ByteCast is a production of the Association for Computing Machinery's Practitioners Board.
To learn more about ACM and its activities, visit acm.org. For more information about this and other episodes,
please visit our website at learning.acm.org. That's learning.acm.org.