ACM ByteCast - Noriko Arai - Episode 46
Episode Date: November 15, 2023
In this episode of ACM ByteCast, our special guest host Scott Hanselman (of The Hanselminutes Podcast) welcomes Noriko Arai, a professor in the Information and Society Research Division of the National Institute of Informatics in Tokyo, Japan. She is a researcher in mathematical logic and artificial intelligence and is known for her work on a project to develop robots that can pass the entrance examinations for the University of Tokyo. She is also the founder of Researchmap, the largest social network for researchers in Japan. Her research interests span various disciplines, including mathematical logic, artificial intelligence, cognitive science, math education, computer-supported collaborative learning, and the science of science policy (SoSP). She earned a law degree from Hitotsubashi University, a mathematics degree from the University of Illinois at Urbana-Champaign, and her doctorate from the Tokyo Institute of Technology. In the interview, Noriko and Scott discuss the challenge of being a creative in the modern academic environment, where publishing is paramount, and how her multidisciplinary background, which spans law, economics, and mathematics, has been an asset in her scientific research. She also mentions her 2010 book, How Computers Can Take Over Our Jobs, and how that led to her work on the Todai Robot Project. Noriko offers her thoughts on the pros and cons of ChatGPT and similar technologies for society. She also mentions her mentors and heroes who have inspired her and shares some of the challenges faced by female researchers in Japan.
Transcript
This is ACM ByteCast, a podcast series from the Association for Computing Machinery,
the world's largest education and scientific computing society.
We talk to researchers, practitioners, and innovators
who are at the intersection of computing research and practice.
They share their experiences, the lessons they've learned,
and their own visions for the future of computing.
I'm your host today, Scott Hanselman.
Hi, I'm Scott Hanselman.
This is another episode of Hanselminutes in association with the ACM ByteCast.
Today I'm talking to Dr. Noriko Arai.
She earned a law degree from Hitotsubashi University and then a mathematics degree from the University
of Illinois at Urbana-Champaign and has a doctorate from the Tokyo Institute of Technology.
We're pleased to be chatting with her from Japan today. How are you, Dr. Arai? I'm fine. Hello, Scott.
It's a great honor to be interviewed by ACM. I'm very glad. We're thrilled to have you. You're
working on some very amazing things right now. And you've also done some cool things in the past.
I know that one of them was very popular and ended up in the news.
So I want to lead with that.
It was the Todai Robot Project.
You created a robot that can actually get into the University of Tokyo.
How did you come up with such an idea?
It's a bit of a long story.
Is that okay?
Of course.
That's why we're doing the podcast.
Okay. So before starting the Todai Robot Project, I wrote a book in 2010, I think.
The title was How Computers Can Take Over Our Jobs.
This book was born out of my academic journey.
You know, I initially studied law and economics in my
undergraduate years and later delved into mathematical logic. In this book, I made two predictions.
First, by 2030, half of the jobs currently done by white-collar workers will be replaced by computers. And second, I predicted that the next AI
boom would soon be upon us, but it wouldn't be sparked by academia. Instead, it would be
driven by the tech giants. These two predictions were also made in the highly read, globally best-selling Race Against the Machine,
but my book preceded it by two years.
By writing that book,
I felt that I fulfilled my social responsibility as a researcher
in mathematical logic, the field that conceptualized both computers and AI.
But the book didn't sell as well as I hoped. Most Japanese did not take
seriously the idea that AI would take our jobs. I became so concerned by the reaction because
I was so certain and I was so confident about my predictions. I wondered how I could make them aware of that issue.
So it was just before Christmas of 2010.
One day, as an elevator door at my workplace opened, a young AI researcher stood before me,
and I just blurted out and asked,
you know, do you think AI could pass the University of Tokyo entrance exam by 2020?
And he replied, I wouldn't be surprised if it did.
That's how we started this project.
If he had said, no, I don't think so, then probably I would have given it up.
But he didn't say a clear no.
So that was how I started the project.
That's so interesting also that you're not just doing it because it can be done.
You're bridging policy and education and computer
science. Sometimes people who are researchers decide to do something just because. I want to
see if it's possible. But you also wanted to warn the people and let the people know so that they
can prepare. This means that your research spans across disciplines.
You're bridging education, you're bridging policy, you're bridging computer science.
But from the outside looking in, I feel maybe that academia is siloed and prevents or discourages that kind of creativity.
Do you think that is true?
Yes, that's true. I think so, yes.
Well, it is my nature. My undergraduate background is in law and economics.
So I was always so worried about the job market and politics.
And after that, I, you know, delved into mathematics.
But not just mathematics, the foundations of mathematical logic. And Turing, you know, von Neumann, and all those legends in mathematical logic.
So I felt like a member of that field.
So that made me feel responsible for making people understand what AI is
and what kind of impact it would have.
I still feel like it was like a social responsibility
as a member of that field.
Do you feel like every researcher has that sense of social responsibility
or do you think that that's something we should make more researchers think about?
Probably. I was just fortunate to secure my professor's job. Had I been five years younger, I'm not sure if I could have even landed a tenured position in academia, or if I could have had children, even. Researchers try to solve problems for the sake of problems
or technology for the sake of technology.
I don't blame them.
It's just that academia is so competitive
and the publish-or-perish mindset is overwhelming.
Even that phrase, publish or perish, I don't like that.
It doesn't feel good to say that out loud.
I don't either. If I was five or ten years younger, I might have been overwhelmed by that mindset, and I might not have turned out like I am today. I don't know. You are multidisciplinary in that you have a deep
background in mathematics, a deep background in law, and yet the projects you're focusing on
are so practical and so pragmatic. You're trying to help others directly with
things like ResearchMap and EduMap in your projects. Is it hard to be a researcher and a practitioner?
Because I feel like some academics are a little out of touch, but you're grounded in humanity.
True.
It's just, I mean, there are researchers, you know, who are intellectually curious and innately inclined to tackle challenges. The true significance should not depend on whether the result will be published or on the impact factor, you know.
So what really matters is usefulness.
I would say, you know, not pragmatism or something like that, but usefulness, the simple usefulness to society. Whether that is for society today, in the
near future, or in the distant future is only a matter of timing.
As a mathematician, I was probably working for the society of the future, the far
distant future. But when I work in software, probably I work for the near future,
or for today. When I talk to people who are in your position,
venerable researchers, I get overwhelmed at the amount of work that you and your team have
accomplished and you've accomplished as yourself. You're a researcher, you're a professor, you're a
director, you're a founder of various initiatives. Does the work-life balance become a challenge? How
do you balance these different roles that you have to fill? Everything is like, you know, a hobby for
me. I love that, though, because you're excited about so many things. So it is a hobby. Like, life is a hobby. I love to cook, I love to sew, I love to knit, and I love to work. So it's simple, you know.
I cannot come up with any other answer, because I'm interested. I'm not that hard a worker, though, because I sleep like eight hours every day, and I cook three times a day.
So I'm not that hard a worker.
Well, it sounds like you've found balance, though.
Like you're intentional, and you're deliberate, and you focus on balance and you focus on life and humanity.
And that informs your work.
I am supposed to spend more time writing papers, research papers.
I hate writing research papers, because, you know, when I am done, when I have already crystallized the idea I had,
I, you know, just, it's there.
So I don't need to feel like, you know, I have to explain to people how it is, you know,
because it's there and it's working.
If you can come to the website, it's there.
I don't feel like, you know, I have to write the papers about it.
But, you know, that's the most tedious thing I do.
So I have to do what I do, but I hate writing the paper after I crystallize my research.
Well, I think it's one of those things where you have to get the public, or your co-worker, from point A to point D, E, or F, and you've made the leap.
And they're asking you to walk them all the way to the end of the proof so that they understand how to get there, and you're like, ah, don't you see you can jump over
here with me? Yeah. And in the same way, you couldn't get the public to buy the book, so you created
the project, and then you got the robot to join the university and pass the test, and they go,
why did no one tell us? And you're like, but I told you earlier, did you read
the book? Well, actually, I have to make it clear. The robot didn't get into the University of Tokyo.
It could pass the entrance exams for 70% of the universities in Japan. In Japan, the entrance examination is very competitive, you know, but it could not pass Todai.
I'm sorry.
But if ChatGPT teamed up with the Todai Robot,
maybe it is possible.
The difference between ChatGPT and the Todai Robot is that the AI we made for English and the social sciences is very similar
to ChatGPT, and ChatGPT has a much bigger dataset, so it must be much better.
But for mathematics, it's something different.
It needs clear reasoning. So for mathematics, we made GOFAI, the
good old-fashioned AI, from scratch. It took like six years to make the dictionaries for
it. But, how do you say it? It worked quite well. Its performance was incredible and unimaginable in the last
century, thanks to today's computers.
It was a really exciting moment, but the proofs that our machine produced were not understandable for humans.
The machines think in their own way, but they output the correct answer.
So it works very differently from ChatGPT.
So if these two teamed up,
probably they could pass the University of Tokyo entrance exam now.
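To make the distinction she draws concrete, here is a minimal sketch of the GOFAI-style approach: the exam question is encoded as a formal expression and solved by exact symbolic deduction, so every answer can be checked mechanically. This is only an illustration using the SymPy library and a made-up exam-style equation; it is not the Todai Robot's actual solver, whose hand-built dictionaries took years to construct.

from sympy import Eq, solve, symbols

# Hypothetical entrance-exam-style question:
# "Find all real x such that x^2 - 5x + 6 = 0."
x = symbols("x", real=True)
equation = Eq(x**2 - 5*x + 6, 0)

# Exact symbolic deduction, not statistical text prediction.
solutions = solve(equation, x)
print(solutions)  # [2, 3]

# Each answer is mechanically verifiable by substitution, which is the
# kind of guarantee a purely statistical language model does not give.
for s in solutions:
    assert equation.subs(x, s)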
ACM ByteCast is available on Apple Podcasts, Google Podcasts, Podbean, Spotify, Stitcher,
and TuneIn. If you're enjoying this episode, please do subscribe and leave us a review
on your favorite platform. So if these AIs are going to team up, and very large language models are going
to become better and better, are you optimistic or pessimistic?
Is this a good thing for society or a bad thing?
Where do you fall on that?
Well, that's a very hard question. But from what I understand, you know, these technologies will benefit the top, I would say the top 5 or 10% of very intelligent people. For those who are very good at reading, writing, and understanding,
who have media literacy, and are already experts in some area, if he or she wants to use the LLM in his or her area of expertise, then it will be great.
Because I myself use ChatGPT on a daily basis, and it is very helpful for me.
Not only ChatGPT, but Grammarly and DeepL and other things. But for those who don't read well or
write well, or who don't have any expertise, well, ChatGPT doesn't know what is right and wrong.
It is not trained in that way. It is trained to make smooth sentences without knowing what is right
and wrong. So probably for those people who want to use it, ChatGPT will make wrong sentences, mistakes
here and there. But if he or she does not have enough media literacy or literacy itself, then he would say, oh, that's great.
I can use this whole thing.
That would be very dangerous, or that would be risky or costly for society. So it's kind of a mixture, you know.
Probably the top 5%, you know, who have
the skill or, you know,
talent to utilize ChatGPT or other LLMs, you know,
for them, it's beneficial.
And for other people, it's not beneficial. So it's really hard to anticipate
if it is a bad thing for the macro society or not. When I've talked to young people about it,
as a person of a certain age, they say, well, you guys told us back in the 80s that we shouldn't
use calculators on our math tests because the calculator will make us dumb. And you're just
doing the same thing now to keep us from using this tool. But you still need to understand math
before you get the calculator. The calculator doesn't just do it all for you. So I'm hearing
you say that there's this base literacy that is so crucial for
problem solving and understanding. And then you mentioned media literacy. Otherwise, whether you
Google for something or you ask ChatGPT, your own biases may be reflected right back at you,
and you're going to get an answer that's not correct, you know, or appropriate. Because the presence of Google doesn't make everybody happy, you know.
Everybody has a chance to Google and search any digitalized knowledge,
but it makes society so, how do you say, unbalanced.
It was better, if you, you know, look back at the 70s or 80s,
you know, when everybody read the newspapers.
During that time, probably, you know, people could communicate better, at least.
Now, it is so hard for people to communicate with each other because they are
sectioned off, and rich people get richer, and poor people get poorer. So, you know, we cannot
say that having access to any digitalized, you know, knowledge, with the help of Google and computers, makes people happier as a macro society.
I'm curious, when I'm hearing your perspective on the world and how you think about these things,
are there any particular mentors or colleagues or people in your life who have inspired you to think about your career or your research in this way? There are many heroes, like, you know, in my field, you know, Turing and Gödel and von Neumann.
So there are many heroes.
Steve Cook.
Among my contemporaries, I always keep my eyes on Toniann Pitassi.
She's an ACM Fellow,
and she is the first woman who chaired STOC, the Symposium on Theory of Computing. That was great. And I met her, I think, at the Fields Institute in Toronto, in 1997, I think.
And she's very energetic and honest and highly talented.
And she constantly traveled to collaborate with renowned researchers worldwide.
And it was Christmas in 2000.
So, yeah, I was spending Christmas with her.
And she and I were invited to a complexity seminar held at the Institute for Advanced Study in Princeton, a proof complexity
seminar.
And spending time with her for two weeks, I realized I couldn't live like her and decided
to seek another path. One thing was that I'm not as healthy as her.
I mean, I'm weak.
How to say?
I think she has a lot of energy.
Toniann Pitassi, for the folks who may not be familiar,
Toniann Pitassi is a specialist in computational complexity theory,
and she's at Columbia, and she was named an ACM
fellow in 2018. And she is a very energetic person. And also, I was based in Japan, and that's
the Far East, of course. So I thought, oh, I cannot do like Toni. So I decided to seek a different path.
So 13 years later, she grabbed a copy of the New York Times
that covered the story of the Todai Robot Project,
and brought it to the University of Toronto's computer science department
to say, hey, our Noriko made it into the New York Times.
I was so happy hearing that story.
You know, she's like my compass.
As someone I refer to often, you know,
checking if I'm doing the right thing or, you know, on the right path.
Yeah, so I'm not, you know, competing with her.
She's kind of like, yeah, she's like a compass.
And so, I just, you know, check myself if I'm doing the right thing.
Can I, you know, be proud of myself in front of Toni?
And that's how I do it.
Yes.
I love that you use that word compass.
That's very well said, to have a friend and a colleague who is a compass. They're an academic compass, they're your peer, they're also your mentor, and they're your friend and your helper. And they're letting you know if you're doing the right thing.
She is so academically, you know, achieved, an ACM Fellow.
That's not what I am seeking for,
but still she is a compass for me.
That's lovely.
I'm sure that she'll be happy to know that
and be reminded of that as your friendship.
I'm curious, what skills do you think are essential for a researcher
in the AI space, or a practitioner?
Certainly a strong moral compass, a societal focus.
But what are some skills or qualities that someone could have in this space to be good at AI research?
Be honest.
And that's all.
Because those people who are doing AI know what they are doing.
They're using probabilities and statistics, right?
And they don't have any dataset telling them what is right or wrong.
And relying on AI and big data means, you know, giving up on the truth. The AI can sometimes tell you
and show you very good scenarios
or maybe it can write a research
paper for you. But at the same time,
the researchers in AI must be honest.
They are using technology.
They have to always be aware, keep the limitations of AI in mind, and be honest with society.
That's the most important thing.
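Her point that a language model is trained to make smooth sentences without any dataset telling it what is right and wrong can be illustrated with a toy sketch: generation is just sampling the next token from a probability distribution, with no step that consults a source of truth. The tiny hand-written probability table below is purely hypothetical, not any real model's weights or API.

import random

# Hypothetical next-token probabilities for one context. A real model learns
# these from data; nothing here encodes which continuation is factually true.
NEXT_TOKEN_PROBS = {
    "The capital of France is": {"Paris": 0.6, "Lyon": 0.25, "Sydney": 0.15},
}

def sample_next(context: str) -> str:
    """Pick the next token by probability alone; no fact-checking happens."""
    probs = NEXT_TOKEN_PROBS.get(context, {"...": 1.0})
    tokens, weights = zip(*probs.items())
    return random.choices(tokens, weights=weights, k=1)[0]

# Fluent-sounding output, but a wrong continuation like "Sydney" can be
# sampled just as legitimately as "Paris" as far as this loop is concerned.
print("The capital of France is", sample_next("The capital of France is"))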
Sometimes when I try to explain AI, or very large language models, to people who are not in the field, I say that it's like a sock puppet. You put the sock on and you say, like, hello, hello. But it's your arm, it's your hand, you're talking to it, and it's going to come back and reflect to you. And if you're dishonest with the model, you will receive dishonesty back. So I really like your focus on honesty and being ethical and real with these AIs. Otherwise,
we're definitely in trouble. So do you see that honesty in the intersection of science and policy
and politics? Are we living in a time where everyone can be honest and science can be
apolitical? It really depends on which country you are talking about. In the U.S., probably, the situation is apolitical,
but in the Far East,
like in Japan, in Korea,
and probably in China,
there is too much politics in academia, I think.
And it's not just sometimes wrong,
it's always wrong.
That is a problem, I think.
It is a really weird situation,
you know, probably compared to back in the 60s and 70s.
By the way,
I was born on October 22nd in '62.
So that was the day that Kennedy
was, you know, speaking to the public
saying that maybe a nuclear war would occur.
On October 22, 1962,
that was when Kennedy addressed the buildup of arms
happening in Cuba in the beginning of the Cuban Missile Crisis.
You were born there at that time, in that moment.
Yeah, that moment.
So probably I would have died, you know, right after I was born.
But luckily, you know, I'm here. But at that time, in the 60s, probably science was very political
in many ways, in many, many countries, like in the Soviet Union and the U.S. and China.
But right now, it's in a different way.
You know, it's apolitical in some countries and too political in other countries, I would say.
In our remaining time, I did want to ask a bit of a pointed question.
I'm curious, have you faced any challenges, specifically as a female researcher in the field of IT, specifically in Japan?
Okay, Japan ranks 125th in the Global Gender Gap Index. So we are behind countries like Angola and Myanmar.
So it means that there are no women in Japan cruising along without challenges.
I cannot remember any year without sexual or power harassment when I was young.
And after I was promoted to a professor, it changed to pointless criticisms.
The Todai Robot Project was often criticized in Japan for being selfish.
And selfishness is a very, very bad thing in Japan, okay? However, it is natural for
researchers to choose their theme selfishly, isn't that true? So that's pointless. But, you know,
I assume that's why I'm criticized for being selfish.
There's not much I can say there other than I respect your persistence and that you are still
here. And we are happy that you are still here sharing with us. And I'm very honored to have you
join us on this podcast. Thank you very much.
We have been chatting with Dr. Noriko Arai,
and this has been an episode of Hanselminutes in association with the ACM
ByteCast, and we'll see you again next week. ACM ByteCast is a production of the Association
for Computing Machinery's Practitioner Board. To learn more about ACM and its activities,
visit acm.org. For more information about this and other episodes, please do visit our website at learning.acm.org slash bytecast.
That's B-Y-T-E-C-A-S-T, learning.acm.org slash bytecast.