TED Talks Daily - Your right to repair AI systems | Rumman Chowdhury
Episode Date: June 4, 2024
For AI to achieve its full potential, non-experts need to be let into the development process, says Rumman Chowdhury, CEO and cofounder of Humane Intelligence. She tells the story of farmers fighting for the right to repair their own AI-powered tractors (which some manufacturers actually made illegal), proposing everyone should have the ability to report issues, patch updates or even retrain AI technologies for their specific uses.
Transcript
TED Audio Collective.
You're listening to TED Talks Daily,
where we bring you new ideas to spark your curiosity every day.
I'm your host, Elise Hu.
Today's talk brings together the story of modern farming
and how repairing John Deere tractors
led to a fundamental insight about AI.
Humane Intelligence CEO Rumman Chowdhury makes the case for why we all need to contribute to AI
after a short break from our sponsors.
Support for this show comes from Airbnb.
If you know me, you know I love staying in Airbnbs when I travel.
They make my family feel most at home when we're away from home. As we settled down at our Airbnb
during a recent vacation to Palm Springs, I pictured my own home sitting empty. Wouldn't
it be smart and better put to use welcoming a family like mine by hosting it on Airbnb?
It feels like the practical thing to do, and with the extra income, I could save up for
renovations to make the space even more inviting for ourselves and for future guests. Your home
might be worth more than you think. Find out how much at Airbnb.ca slash host.
AI keeping you up at night? Wondering what it means for your business?
Don't miss the latest season of Disruptors,
the podcast that takes a closer look at the innovations reshaping our economy.
Join RBC's John Stackhouse and Sonia Sennik from Creative Destruction Lab
as they ask bold questions like,
why is Canada lagging in AI adoption and how to catch up?
Don't get left behind.
Listen to Disruptors, The Innovation Era, and stay ahead of the game in this fast-changing world.
I want to tell you about a podcast I love called Search Engine, hosted by PJ Vogt.
Each week, he and his team answer these perfect questions, the kind
of questions that, when you ask them at a dinner party, completely derail conversation. Questions
about business, tech, and society. Like, is everyone pretending to understand inflation? Why don't we
have flying cars yet? And what does it feel like to believe in God? If you find this world
bewildering, but also sometimes enjoy being bewildered by it,
check out Search Engine with PJ Vogt, available now wherever you get your podcasts.
And now, our TED Talk of the day.
I want to tell you a story about artificial intelligence and farmers. Now, what a strange combination, right? Two topics could not sound more different from
each other. But did you know that modern farming actually involves a lot of technology? So computer
vision is used to predict crop yields, and artificial intelligence is used to find, identify,
and get rid of insects. Predictive analytics helps figure out extreme weather conditions
like drought or hurricanes.
But this technology is also alienating to farmers,
and this all came to a head in 2017 with the tractor company John Deere
when they introduced smart tractors.
So before then, if a farmer's tractor broke,
they could just repair it themselves or take it to a mechanic.
Well, the company actually made it illegal
for farmers to fix their own equipment.
You had to use a licensed technician,
and farmers would have to wait for weeks
while their crops rotted and pests took over.
So they took matters into their own hands.
Some of them learned to program, and they worked with hackers to create patches to repair their
own systems. In 2022, at one of the largest hacker conferences in the world, DEF CON, a hacker named
Sick Codes and his team showed everybody how to break into a John Deere tractor,
showing that first of all, that the technology was vulnerable, but also that you can and should
own your own equipment. To be clear, this is illegal. But there are people trying to change
that. Now, that movement is called the right to repair. The right to repair goes something like this.
If you own a piece of technology,
it could be a tractor, a smart toothbrush, a washing machine,
you should have the right to repair it if it breaks.
So why am I telling you this story?
The right to repair needs to extend to artificial intelligence.
Now, it seems like every week there is a new and
mind-blowing innovation in AI, but did you know that public confidence is actually declining?
A recent Pew poll showed that more Americans are concerned than they are excited about the
technology. This is echoed throughout the world.
The World Risk Poll shows that respondents from Central and South America and Africa all said that they felt AI would lead to more harm
than good for their people.
As a social scientist and an AI developer, this frustrates me.
I'm a tech optimist because I truly believe this technology can lead to good.
So what's the disconnect?
Well, I've talked to hundreds of people over the last few years,
architects and scientists, journalists and photographers,
rideshare drivers and doctors,
and they all say the same thing.
People feel like an afterthought. They all know that their data is harvested,
often without their permission, to create these sophisticated systems. They know that these
systems are determining their life opportunities. They also know that nobody ever bothered to ask
them how the system should be built, and they certainly have no idea where to go if something goes wrong.
We may not own AI systems, but they are slowly dominating our lives.
We need a better feedback loop between the people who are making these systems
and the people who are best determined to tell us
how these AI systems should interact in their world.
One step towards this is a process called red teaming.
Now, red teaming is a practice that was started in the military,
and it's used in cybersecurity.
In a traditional red teaming exercise,
external experts are brought in to break into a system,
sort of like what Sick Codes did with tractors, but legal.
So red teaming acts as a way of testing your defenses,
and when you can figure out where something will go wrong,
you can figure out how to fix it.
But when AI systems go rogue,
it's more than just a hacker breaking in.
The model could malfunction or misrepresent
reality. So for example, not too long ago, we saw an AI system attempting diversity by showing
historically inaccurate photos. Anybody with a basic understanding of Western history could
have told you that neither the founding fathers nor Nazi-era soldiers would have been Black. In that case, who qualifies as an
expert? You. I'm working with thousands of people all around the world on large and small red teaming
exercises, and through them, we have found and fixed mistakes in AI models. We also work with
some of the biggest tech companies in the world, OpenAI, Meta,
Anthropic, Google. And through this, we've made models work better for more people.
Here's a bit of what we've learned. We partnered with the Royal Society in London to do a scientific
mis- and disinformation event with disease scientists. What these scientists found is that AI models actually
had a lot of protections against COVID misinformation, but for other diseases like
measles, mumps, and the flu, the same protections didn't apply. We reported these issues, they were
fixed, and now we are all better protected against scientific mis- and disinformation.
We did a really similar exercise with architects at Autodesk University,
and we asked them a simple question.
Will AI put them out of a job?
Or more specifically,
could they imagine a modern AI system
that would be able to design the specs of a modern art museum?
The answer, resoundingly, was no.
Here's why.
Architects do more than just draw buildings.
They have to understand physics and material science.
They have to know building codes,
and they have to do that while making something that evokes emotion.
What the architects wanted was an AI system that interacted with them,
that would give them feedback,
maybe proactively offer design recommendations. And today's AI systems, not quite there yet. But those are
technical problems. People building AI are incredibly smart, and maybe they could solve
all that in a few years. But that wasn't their biggest concern. Their biggest concern was trust.
Now, architects are liable if something goes wrong with their buildings. They could lose
their license. They could be fined. They could even go to prison. And failures can happen in a
million different ways. For example, exit doors that open the wrong way, leading to people being
crushed in an evacuation crisis. Or broken glass raining down onto pedestrians in the street
because the wind blows too hard and shatters windows.
So why would an architect trust an AI system with their job,
with their literal freedom,
if they couldn't go in and fix a mistake if they found it?
So we need to figure out these problems today,
and I'll tell you why.
The next wave of artificial intelligence systems, called agentic AI, is a true tipping point
between retaining human agency and letting AI systems make our decisions for us.
Imagine an AI agent. It's kind of like a personal assistant. So, for example, a medical agent might determine
whether or not your family needs doctor's appointments,
it might refill prescription medications,
or in case of an emergency, send medical records to the hospital.
But AI agents can't and won't exist unless we have a true right to repair.
What parent would trust their child's health to an AI system unless they
could run some basic diagnostics? What professional would trust an AI system with job decisions unless
they could retrain it the way they might a junior employee? Now, a right to repair might look
something like this. You could have a diagnostics board where you run basic tests
that you design, and if something's wrong, you could report it to the company and hear back when
it's fixed. Or you could work with third parties like ethical hackers who make patches for systems
like we do today. You can download them and use them to improve your system the way you want it
to be improved. Or you could be like these intrepid farmers and learn to program and
fine-tune your own systems. We won't achieve the promised benefits of artificial intelligence
unless we figure out how to bring people into the development process. I've dedicated my career
to responsible AI, and in that field, we ask the question, what can companies build
to ensure that people trust AI? Now, through these red teaming exercises and by talking to you,
I've come to realize that we've been asking the wrong question all along. What we should have been asking is what tools can we build so people can make AI beneficial
for them? Technologists can't do it alone. We can only do it with you. Thank you.
Can Indigenous ways of knowing help kids cope with online bullying?
At the University of British Columbia,
we believe that they can.
Dr. Johanna Sam and her team are researching
how both Indigenous and non-Indigenous youth
cope with cyber aggression,
working to bridge the diversity gap
in child psychology research.
At UBC, our researchers are answering
today's most pressing questions.
To learn how we're moving the world forward, visit ubc.ca. Forward happens here.
That was Rumman Chowdhury at TED 2024.
If you're curious about TED's curation,
find out more at TED.com slash curation guidelines.
And that's it for today.
TED Talks Daily is part of the TED Audio Collective.
This episode was produced and edited by our team, Martha Estefanos, Oliver Friedman, Brian Green,
Autumn Thompson, and Alejandra Salazar. It was mixed by Christopher Fazi-Bogan. Additional
support from Emma Taubner, Daniela Balarezo, and Will Hennessy. I'm Elise Hu. I'll be back
tomorrow with a fresh idea for your feed. Thanks for listening.
Looking for a fun challenge to share with your friends and family?
TED now has games designed to keep your mind sharp while having fun.
Visit TED.com slash games to explore the joy and wonder of TED games.