ACM ByteCast - Michelle Zhou - Episode 28
Episode Date: August 16, 2022

In this episode of ACM ByteCast, our new co-host Bruke Kifle, AI Product Manager at Microsoft and member of the ACM Practitioner Board, interviews Michelle Zhou, Co-founder and CEO of Juji, Inc. She is an expert in the field of Human-Centered AI, an interdisciplinary area that intersects AI and Human-Computer Interaction (HCI). Zhou has authored more than 100 scientific publications on subjects including conversational AI, personality analytics, and interactive visual analytics of big data. Her work has resulted in a dozen widely used products or solutions and 45 issued patents. Prior to founding Juji, she spent 15 years at IBM Research and the Watson Group, where she managed the research and development of Human-Centered AI technologies and solutions, including IBM RealHunter and Watson Personality Insights. Zhou serves as Editor-in-Chief of ACM Transactions on Interactive Intelligent Systems (TiiS) and an Associate Editor of ACM Transactions on Intelligent Systems and Technology (TIST), and was formerly the Steering Committee Chair for the ACM International Conference Series on Intelligent User Interfaces (IUI). She is an ACM Distinguished Member and Member at Large on the ACM Council. Michelle presents five inflection points that led to her current work, including the impact of two professors in graduate school who helped her find her direction in AI. She explains what no-code AI means, why the ability for users to customize AI without having coding skills is important, and responds to the critics of no-code AI. Bruke and Michelle then delve into the inception of her AI company that develops AI assistants with cognitive intelligence, Juji, and how it is being used as a platform to introduce AI to early education. Finally, Michelle shares thoughts on the future of software and the no-code movement, as well as the future of AI itself.
Transcript
This is ACM ByteCast, a podcast series from the Association for Computing Machinery,
the world's largest education and scientific computing society.
We talk to researchers, practitioners, and innovators
who are at the intersection of computing research and practice.
They share their experiences, the lessons they've learned,
and their own visions for the future of computing.
I am your host, Bruke Kifle.
Well, I'm super excited to have an amazing thought leader in the field of cognitive AI,
human-centered AI, Dr. Michelle Zhou. By way of introduction, Dr. Michelle Zhou is a co-founder
and CEO of Juji, the maker of the world's only accessible cognitive artificial intelligence assistants, which ultimately
enable the automation of human engagement tasks empathetically and responsibly, all with no code.
Prior to starting Juji, Michelle led and managed the research and development of cutting-edge
interactive intelligent technologies and solutions at IBM Research Watson, including IBM Real Hunter and Watson Personality Insights. Michelle's work has
resulted in a dozen widely used products or solutions, over 100 scientific publications,
and 45 issued patents, all in the interdisciplinary field of human-centered AI that intersects AI and
human-computer interaction.
Currently, Michelle also serves as the editor-in-chief for ACM Transactions on Interactive Intelligent
Systems.
She received a PhD in computer science from Columbia University and is an ACM Distinguished
Scientist.
Dr. Michelle Zhou, welcome to ByteCast.
Thank you, Bruke.
Thank you for having me.
Yeah, well, we're super excited to have you.
I'd love to start off by asking, who is Dr. Michelle Zhou?
Can you maybe tell us a bit more about your upbringing, your education, your career?
And I'd love if you could highlight some inflection points over the course of your life that have
led you to where you are today and what you do.
Thank you for asking. I'd love to. So by race, I'm a human being, and more specifically,
an adult female. By training, I'm a computer scientist. I got my PhD in computer science from Columbia University. I have always been working in an area called human-centered AI. It's an
interdisciplinary area that intersects artificial intelligence and human-computer interaction.
Looking back on my career, there are perhaps five inflection points or milestones that led me to where I am and what I'm doing today. The first inflection point: computer science wasn't my first choice when I applied for college. I was born and grew up in China, actually southwest China, and both my parents are physicians. In China you must take a college entrance exam to get into college, right? My first choice was actually to study biology, maybe because of my parents' influence. I wanted to become a biologist to give humans magic powers, like making us fly or letting us see through walls. That was really for my love of science fiction, and again my parents' profession as well. But unfortunately, I wasn't accepted by the biology department I applied to, so I had to make a last-minute switch. I literally chose computer science at random, knowing nothing about it. So I would say that was my first inflection point, which pushed me into computer science by accident.

My second inflection point came after I graduated from college in China. I really wanted to pursue my graduate degrees in the U.S., so I came here and went to Michigan State for my master's degree. Two professors there really helped me find the direction I wanted to go, which is still my direction of interest today.
One is Professor Nainang Li. He gave me the opportunity to work on visualization and graphical user interfaces for power management. Because of that project, I just fell in love with this work. I said, oh my God, I can really create these different types of interfaces that enable humans to better interact with systems.
The second was my AI professor, Professor Carl Page, who taught me artificial intelligence and also allowed me to do two AI projects with him. Because of those projects, again, I said, oh, I have to do AI. Many people perhaps don't know the late Dr. Carl Page: he's the father of Larry Page, one of the co-founders of Google. Of course, I didn't know that then, right? Because Google didn't exist yet.
Yeah.
That's definitely my second inflection point.
Because of those two professors, I really wanted to do my PhD in an area that intersects artificial intelligence and computer graphics. That's how I found my PhD thesis advisor, Professor Steve Feiner at Columbia University, because that's exactly his area. Since then, I have been working in this human-centered AI area, marrying artificial intelligence and human-computer interaction.
The third inflection point, I would say, came before starting Juji. I worked on several human-centered AI systems at IBM and, of course, Columbia University. One of those systems really made me want to do more in this area, and especially what we're doing at Juji. That system is called System U, which later became the IBM Watson Personality Insights service.
What this system did, basically, is use analytics algorithms to automatically analyze a person's communication text to infer that person's personality traits. Think about a Twitter feed; that's exactly the demo we did. You can use System U to ingest the tweets and automatically infer the person's personality profile. Because of this work, I found it really opens up tremendous opportunities for computers to gain a deep understanding of each individual, not just their behavior but their unique characteristics: for example, how open-minded they are, how thoughtful they are, and how they handle life's challenges. That's one of the reasons I actually left IBM. I wanted to further the research in this area, hopefully with more freedom; doing a startup outside, I have fewer constraints and more freedom to do research, because there are so many challenges still to be addressed. I would say that's really my inflection point number three.
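To make this concrete: the exact models behind System U and Watson Personality Insights are not described here, but a minimal sketch of lexicon-based trait scoring, with the word lists and scoring rule invented purely for illustration, might look like this:

```python
import re
from collections import Counter

# Hypothetical trait lexicons, invented for illustration. Real systems
# use validated dictionaries (e.g., LIWC-style categories) and far
# richer statistical models than simple word counting.
TRAIT_LEXICON = {
    "openness": {"curious", "art", "imagine", "wonder", "explore"},
    "agreeableness": {"thanks", "love", "together", "share", "help"},
    "neuroticism": {"worried", "stress", "afraid", "hate", "awful"},
}

def personality_scores(texts):
    """Score each trait as the fraction of words hitting its lexicon."""
    words = [w for t in texts for w in re.findall(r"[a-z']+", t.lower())]
    counts = Counter(words)
    total = sum(counts.values()) or 1  # avoid dividing by zero
    return {
        trait: sum(counts[w] for w in lexicon) / total
        for trait, lexicon in TRAIT_LEXICON.items()
    }

tweets = ["So curious about this new art exhibit, can't wait to explore it!"]
print(personality_scores(tweets))
# {'openness': 0.25, 'agreeableness': 0.0, 'neuroticism': 0.0}
```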
Inflection points number four and five are really what I learned during the past five and a half years at Juji.
We wanted to build smart computer systems to better understand each individual, and we found out that very few people, in fact, have enough data to be analyzed and used in a very intelligent way. So then we decided to create conversational systems, right? Once we created the conversational systems, then, I would say, came inflection point number five: nobody would want to use them if they had to painstakingly customize their assistants, if they had to train their assistants from scratch, if they had to put in every intent they could think of that this AI system needs to understand. There would be no way for them to use it. So we had to make it really no-code, reusable AI to promote, to encourage, the adoption of our AI assistants. So as you can see, that's how we got to this point. It is why I'm so passionate about no-code, reusable AI, and especially also the cognitive AI part of it, in terms of really enabling the AI to know each user, each person, in a much deeper way.
Wow.
That's such wonderful stuff to hear.
And I think your career and track record certainly speaks for itself.
And I think you also did justice in highlighting the role of mentors and advisors in, you know, guiding you toward the field that you have now become such a pioneer in.
And I think at the end of the day, you made a good choice by choosing computer science.
So it's all for the best. You touched on, you know, the topic of no-code AI as sort of the
last inflection point, which is sort of the key to
democratizing AI, making sure it's easily accessible to all those who maybe don't have
the necessary technical background or programming capabilities. So I think broadly,
there's been a longstanding movement around no-code platforms for app development or website
development. How do you describe no-code reusable AI and how do you see it actually bridging the AI
divide?
Okay.
Thank you for asking that, Bruke.
So this is one of my favorite topics.
So let me actually first break down what is AI, right?
Even though people may have different definitions, I would say one of the common
definitions of AI is machines with certain human skills. So for example, with the human
perceptual skills to see, to understand images and videos, or having a human's natural language
processing skills to be able to interpret sentiment, interpret meanings in natural language
text. So because we want to teach machines human skills, it's really not something trivial.
It requires AI expertise, it requires sophisticated software engineering skills, not to mention the large amounts of training data
or intensive computational resources required.
As you can see, not every organization,
certainly not individuals,
can afford to have all these required elements, right?
So which means that in order to enable reusable AI, first of all we have to acquire AI, and acquiring AI is non-trivial. For what reusable AI means, I always use an analogy everybody is very familiar with: the advances SpaceX has made, right? They have to make the rocket first, and making a rocket is not easy. Similarly, making AI is not easy. Making a rocket reusable for the next trip, that's even harder. Similarly, making AI is already hard enough; you want to make AI reusable, which means literally transferring intelligence from one AI to another AI. It's like teaching: let's say you have taught a child certain skills, but you want this child to be able to use those skills in a totally different context, a different environment, right? That's called transferable AI. More than that, when you want to bring up another kid, you want to completely transfer the first kid's intelligence to the second kid; you don't want to retrain the second kid from scratch. That's what reusable AI is about. So as you can see, if you can enable reusable AI, it saves the tremendous effort and expertise required to create AI in the first place, and more: with reusable AI, the time to value improves tremendously.
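What Zhou calls transferring intelligence from one AI to another maps closely onto what the machine learning literature calls transfer learning. As a minimal sketch of that general idea only, not of Juji's actual system, one can freeze a pretrained model's layers and train just a small task-specific head; the model, sizes, and data below are all placeholders:

```python
import torch
import torch.nn as nn

# Placeholder "pretrained" encoder standing in for an AI whose learned
# intelligence we want to reuse; in practice this would be a large model
# trained once at great expense.
pretrained_encoder = nn.Sequential(
    nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 32), nn.ReLU()
)

# Reuse: freeze the encoder so its learned "intelligence" carries over
# to the new task instead of being retrained from scratch.
for p in pretrained_encoder.parameters():
    p.requires_grad = False

# Customize: only this small task-specific head is trained anew,
# e.g., a two-intent classifier for a new domain.
task_head = nn.Linear(32, 2)
model = nn.Sequential(pretrained_encoder, task_head)

optimizer = torch.optim.Adam(task_head.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(8, 128)          # toy batch of input features
y = torch.randint(0, 2, (8,))    # toy labels for the new task
loss = loss_fn(model(x), y)
loss.backward()                  # gradients flow only into task_head
optimizer.step()
```

The rocket analogy lands here: the expensive part (the encoder) is built once and flown again, while each new trip only pays for the small head.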
Now, no-code is really built on top of reusable AI. Why do people need no-code AI? In this case it has a special meaning. The meaning is that each AI requires a little bit of customization. For example, to be able to speak the language of a particular domain, or to communicate with users for a particular task, you always require some customization. So no-code means: how can you customize this AI without writing a line of code? Which means you can directly reuse the intelligence while customizing it. That's what the no-code part is about, right? No-code AI is not about building the AI from scratch; that would not be the purpose of it.
The purpose is this: as you can see, in this world, I remember a statistic showing that still only a very small fraction of the population knows how to code, right? So even for AI customization, you want people who have domain knowledge but no programming skills to be able to customize the AI the way they want. This actually opens up the opportunity for AI to learn better. Think about it: IT people may not necessarily have all the knowledge. Let's say you wanted to create an AI assistant to help with recruiting, or maybe an AI assistant for healthcare. You want the domain experts, the subject matter experts, to infuse the knowledge into the AI, not the IT people, right? So no-code AI and reusable AI really open the door for AI to be adopted faster and improved much more rapidly, which lowers the threshold, the barrier to entry: to enter the AI field, to adopt AI, to use AI.
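Juji's design studio itself is a graphical tool, so the following is only a toy illustration of the underlying principle: the subject matter expert edits a declarative description, with no code, while the reusable engine underneath stays fixed. The schema, field names, and engine below are invented for illustration and are not Juji's actual format:

```python
# A recruiter (a subject matter expert) edits only this declarative
# description; the engine interpreting it never changes.
recruiting_bot = {
    "persona": "friendly recruiting assistant",
    "topics": [
        {"ask": "What role are you interested in?", "store_as": "role"},
        {"ask": "Tell me about a project you're proud of.", "store_as": "highlight"},
    ],
    "fallback": "Sorry, I didn't catch that. Could you rephrase?",
}

def run_interview(config, get_input=input):
    """A toy stand-in for the reusable engine that the config customizes."""
    answers = {}
    print(f"Hi! I'm your {config['persona']}.")
    for topic in config["topics"]:
        print(topic["ask"])
        reply = get_input().strip()
        if not reply:  # anticipate a non-answer and re-prompt once
            print(config["fallback"])
            reply = get_input().strip()
        answers[topic["store_as"]] = reply
    return answers
```

Swapping in a healthcare intake bot would mean editing only the dictionary, which is exactly the division of labor Zhou describes between domain experts and the underlying AI.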
I see, I see. I think you summarized it well, especially on the reusable AI side:
we're seeing a growth of foundational models that are being trained on
large corpuses and ultimately being fine-tuned for different specific downstream tasks. And I think
we've seen a lot of performance gains, but it's also minimizing some of the environmental impact
that comes with training large-scale models, because if you're able to apply them to many
fine-tuned downstream scenarios,
then hopefully that's able to reduce the environmental impact. So on the last point,
you mentioned the idea of using no-code AI to enable users to ultimately be able to customize
the AI for scenarios that they're particularly looking to use. Now, critics might argue that while no-code platforms take the code away from programming, you don't necessarily take away the core logic behind algorithm design: this idea of conditionals and loops. And oftentimes, beyond just writing the code itself, solving and architecting a solution for a given problem, testing it, and deploying it is really where the majority of the challenge lies as well.
So how do no-code platforms help address this problem?
And as we segue into Juji, how do you think about this as a challenge?
Thank you, Bruke, for asking this great question. Actually, you're asking this question because you have knowledge about computer programming, right? You know about IT; you're working for Microsoft. But among our clients, most of the people who are subject matter experts don't have an IT background. They don't care about algorithm design. They don't care about conditionals or loops. They don't even know they exist, right? What they care about most is: what does this tool help me do? How can I use this tool to achieve my goals? So that's what Juji is really working hard toward: how can we explain the AI's functions in a way the domain experts, the subject matter experts, can understand and customize without getting into the nitty-gritty of the underlying algorithms, conditionals, loops, or maybe even recursive functions? Because first of all, they don't care. Second, it doesn't mean anything to them, right?
Just last week, actually, I gave a talk about the challenges and opportunities of enabling no-code, reusable AI for subject matter experts. You can also call it, as the New York Times did, AI for the masses, right? There are three challenges, which are very different from the challenges programmers have been facing. The first one is AI design. AI is powerful, but it's not yet that powerful, right? So how can you teach people to understand AI's power while, in the meantime, realizing AI's limitations? That's a huge one for us. Because if you don't teach people that, they may completely rely on AI, and it turns out that won't work, right? But then if they only see the AI's limitations, they refuse to use AI.
Let me just give an example of something I observed our clients doing; once they got more information, they completely changed how they design. Some clients had experience working with conversational AI, and they had this perception that AI doesn't work very well. So in their conversations they always use what I call buttons, right? Instead of letting the user say what they want, they will ask, oh, would you like to continue, and give you two buttons, yes or no, because they didn't know how powerful the AI is. Another case: they really wanted to ask open-ended questions, for example, what kind of medical conditions did you have when you were a child? They wanted to elicit a really open-ended answer, but because they didn't know what the AI could do, they limited the answers to, say, condition one and condition two, and it turned out that maybe didn't cover all the possible conditions. That's one extreme, which means they don't really trust AI very much; they want to restrict the human-AI interaction. At the other extreme, people who haven't had a lot of experience working with AI will completely trust it. They will put in very open-ended questions like, what is the biggest challenge in your life right now? And if they don't know the AI has a lot of limitations, they are not prepared; I call it not anticipating. Users may come in and say, why do you want to ask? Why do you want to know? That's very personal. That's how users may respond to "so, what's your biggest challenge," right? That's why, when we gave our tutorial, we always said that when you do conversational AI design, try to fill in the GAP: G means understand your goals, A means anticipate, and P means personalize it, right? Know your users; personalize it. In this case we actually found that people gradually tended to trust the AI. So that's one of the challenges.
As you can see, this is very different than teaching people how to program.
It is still programming, but it's teaching them about the limitations,
the scope of AI, if you will.
So that's one.
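As a toy illustration of the A in GAP (anticipate), a designer might pair an open-ended question with handlers for the replies they expect, including the "why do you want to know?" pushback Zhou describes. Every intent name and cue phrase below is invented for illustration:

```python
# Anticipated reply types for the open-ended question
# "What is the biggest challenge in your life right now?"
ANTICIPATED = {
    "pushback": ("why do you want to know", "that's personal", "why ask"),
    "no_challenge": ("nothing", "no challenge", "i'm fine"),
}

def classify(reply):
    text = reply.lower()
    for intent, cues in ANTICIPATED.items():
        if any(cue in text for cue in cues):
            return intent
    return "open_answer"  # hand anything else to normal language understanding

def respond(reply):
    intent = classify(reply)
    if intent == "pushback":  # acknowledge and explain, don't force
        return "Fair question! I only ask so I can point you to the right help."
    if intent == "no_challenge":
        return "Glad to hear it. Is there anything you'd like to improve anyway?"
    return "Thanks for sharing. Could you tell me more about that?"

print(respond("Why do you want to know? That's very personal."))
```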
And the other part of it is AI supervision. When you adopt AI, it's almost like adopting a child or a junior assistant. If you tend to ignore them or abandon them, your users will definitely abandon your AI assistants, because the knowledge is not being updated, right? It's almost like if you ignore your child or your assistant: they're not going to learn new things, they're not going to advance their knowledge and skills. That's why we also try to inform our clients: when you adopt AI, be prepared that you're taking on a responsibility, seriously, right? You want to keep it updated and really improve it over time. As you can see, it's very different, though in some ways there are similarities to programming: AI supervision is almost like monitoring your program and debugging it. But in the world of no-code, people may not even understand what debugging means. If you tell them it's about supervising your junior assistant, supervising your intern, supervising your child, they do understand, and then they will do it. So no-code doesn't necessarily mean no responsibility.
I don't know.
Exactly.
You're still responsible for maintaining the AI.
But I think a big part that you mentioned
is the importance of educating folks
on the scope of AI, the capabilities, the limitations,
because I think oftentimes there's a misunderstanding
or a misconception of what can or cannot be achieved by AI.
Correct.
May I also add one point to this? When you talk about programmers and developers and the machine, the relationship is really what I call an operator-machine relationship, because they program to tell computers exactly what the computers should do. But now, with no-code AI, it really transforms the relationship between human and computer: from the operator-machine relationship to what I call a supervisor-assistant relationship. You will teach your assistant to do certain things, but you don't need to get into the nitty-gritty details, because they don't need it; they already have a certain level of intelligence. That's why we call them AI. Think of programmers learning machine languages and machine instructions, versus subject matter experts learning how to teach AI using no code. It's a totally different level of abstraction, a different level of learning.
Certainly.
So focusing on the areas that matter most to the application.
ACM ByteCast is available on Apple Podcasts,
Google Podcasts, Podbean, Spotify, Stitcher, and TuneIn.
If you're enjoying this episode,
please subscribe and leave us a review
on your favorite platform.
So I'd love to learn a bit more about, I know we've sort of established some of the foundations of no-code AI, but I'd love to learn a little bit more about what you're doing at Juji with cognitive assistants and Juji as a platform for your clients and your users. So to start off, I'd love to learn a bit more about what led you from a career of research to now a field of entrepreneurship as the co-founder and CEO. And what exactly is it that you do at Juji?
First of all, as I said earlier, at IBM my co-founder was also a very key contributor to the project called System U, later known as Watson Personality Insights. Because of that project, we realized it's such a big space; there are so many things we could do, so many challenges we could solve. So we decided to become entrepreneurs, basically to get more freedom to do what we believe would impact the world. That actually leads to today's Juji. At Juji, we have our mission: unifying machine and human intelligence to advance humanity.
I'm a very big believer in what J.C.R. Licklider, the MIT professor, called human-computer symbiosis. We believe computers will always be humans' assistants, right? They are not going to replace humans; instead, they augment humans.
So at Juji, what we do is develop a new generation of AI assistants. We call them cognitive AI assistants. What does that mean? It means those AI assistants have certain advanced human soft skills. For example, one such soft skill we call active listening. We actually published an article about this at the ACM CHI conference. In human-to-human conversation, in order for the conversation to be more effective, people are taught to actively listen to their partners, right? To repeat things back, to show your empathy, to show your understanding. That's what we have also taught our AI assistants to do.
Second, when you talk to a psychologist, the psychologist can not only understand what you're saying but also try to figure out what you are like, right? What you are not saying. So we have also taught our AI a soft skill we call reading between the lines, which means dynamically analyzing a person's conversational text to figure out what this person is like, what their unique characteristics are, and then using those insights to better help the person.
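As a rough sketch of the active-listening idea only (the phrasings here are invented, and the approach in Juji's CHI article is considerably more sophisticated), an agent might acknowledge, mirror back the user's own words, and invite elaboration:

```python
import random

# Canned acknowledgments; a real system would vary these with context.
ACKNOWLEDGMENTS = (
    "I hear you.",
    "That sounds really important to you.",
    "Thank you for telling me that.",
)

def active_listen(user_text):
    gist = user_text.strip().rstrip(".!?")  # echo the user's own words back
    ack = random.choice(ACKNOWLEDGMENTS)
    return f'{ack} You mentioned "{gist}". Could you say more about that?'

print(active_listen("I'm worried I'll fall behind in my online program."))
```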
This naturally leads into the applications of cognitive AI assistants. We found the sweet spot is any type of human engagement, especially where long-term, continuous engagement is required. Such engagements are often emotionally charged, require quite a bit of social interaction, and moreover, in such interactions, individuality matters. Think about the use cases in this area. Healthcare, right? When somebody is recovering from an injury or a disease, it's always long-term, continuous engagement; it's always emotionally charged; and social interaction is always desired. Also, their individuality, their personality matters to the effectiveness, to the outcomes of the conversation: whether they stay on track with their treatment or drop out. So if the AI can help them stay on track, hey, that's a winner. Similarly, in learning: think about students taking an online program, typically two to three years, sometimes two to five years. It's long-term engagement, and it's also emotionally charged, right? They have to overcome a lot of challenges. Again, individuality matters, because different students have different needs and different learning styles. So you can already see what I call the sweet spots for cognitive AI. Workplace companionship is another one. I've just pointed out a few of the sweet spots we've discovered; you can see the killer apps for the cognitive AI assistants we have created. Of course, we make this no-code and reusable, which means subject matter experts like healthcare providers, learning coordinators, or HR professionals can customize the AI assistants on their own and feed them the domain knowledge they have, which IT people may not have, right? Most likely won't have.
I see.
So you talked a lot about equipping AI with important capabilities,
for instance, like active listening. But I'm thinking from a user point of view, there's also
a change in norms in how you communicate with a human versus how you communicate with a chatbot.
At least based off my personal experiences, whenever I am on a website, of course, it's not
an advanced cognitive AI assistant, but the many
AI personal assistants that I've interacted with, you think Alexa, you think Siri,
there's some mode of interaction. It's very transactional. You issue some query or some
question or some ask, and you get a response. So what's fundamentally different in sort of
the social norms or the rules that govern
how humans interact with other humans compared to how traditionally humans have been interacting
with machines or chatbots? Do you see that changing? And in the meanwhile, how does that
influence the design of chatbots today or with platforms like Juji? Again, a great question, Bruke.
Thanks for asking that.
There is a book by a psychology professor at Stanford University that actually talked about how, when humans interact with computers, they tend to follow very similar social norms as when they interact with human beings.
We actually use the same principles
to guide the design of our cognitive AI assistants, right?
For example, you mentioned it just a moment ago,
the existing commercial AI assistants like Siri, Amazon Alexa, or Google Home,
they are very transactional, right?
They are impersonal, I would say,
because they don't really care about who you are.
So what we have done fundamentally differently is to make really two-way conversations. As I said, active listening comes from human-to-human communication theory. Reading between the lines, again, comes from human-to-human conversation, especially from a psychology point of view, a computational psychology point of view: the psychology of human-to-human engagement, right? That's how we use those principles from human-to-human conversations to guide the design of our human-AI conversations.
That, I think, will drive not just us but other companies, other designers, to do the same, because people actually use the same social norms to interact with machines.
But we did find two things which are very interesting. One is that we always educate our clients: make your AI humble, very humble, right? Because in this case people tend to be more forgiving. Remember, AI is not perfect; it's far from perfect. Second, be transparent. Tell your users what it knows and what it doesn't know. Again, it's to gain that sympathy, to gain that forgiveness from your users, right? That's a very important one.
Again, you see why we use this principle: it's very similar in human-to-human conversations, right?
If you talk to somebody who is humble, who has humility, who is very transparent, you are much more willing to open up, probably much more willing to talk to that person. It's very similar in the human-to-AI conversation as well. Everybody likes to talk to people who care about you, who can, as we call it, think in your shoes, right? We train the AI to do the same. If the AI can understand your unspoken needs and wants, of course it can help you more.
And where do you think Juji is sort of on this journey of achieving the emotional,
empathetic, cognitive AI? Is it something that's here? Is it something that's in the near future?
And more broadly, what are some of the biggest challenges that you're seeing in this space or
opportunities for improvement? Actually, it's here, right? It's today. Our clients have been using it, and they have seen the outcomes and effects. And what's the challenge here? As I alluded to already, it is explainable AI, in practice. How can we help the domain experts, who again are not programmers, not IT people, and don't have a lot of IT background, discover the magic of AI and best use the magic of AI? For example, to let them know what power this AI has, along with the limitations as well. Really, I think this work elevates explainable AI to the next level. Not just for the data analysts, not for the people doing machine learning: this is for the masses, right? How can you explain to them what the magic of AI is? What kind of magic your AI has, let's say it that way. And how should they use the magic in their application, in their solution, and in the meantime be aware of the AI's limitations, AI's imperfection? So that's one of them.
The second part, which is maybe a topic you would like to discuss as well, is responsible AI. Currently our AI can already gain a deep understanding of a person during a conversation, right? So this can be used for malicious reasons as well. If you knew that this person really likes to play games, right, and could very easily get addicted to games, then you could make a really bad AI to seduce or allure these people into playing a game every day, every hour, right? That's the part I'm seriously worried about, because of this democratization of AI, what we have done, which literally means anybody can come in and create a very powerful AI that can understand people's strengths and weaknesses. Then how do you prevent that from happening? Are there any principles or any measures we can apply, right? So this is about responsible AI and AI ethics, which our community has been talking about all the time, especially now with the power of AI.
Certainly, yeah.
I think we've seen, without a doubt, many case studies or examples over the past decade of some of the negative consequences of AI. Even in the cognitive assistance space, we've seen cases where AI has continued to engage with users and resulted in model drift. So I think it's really great to hear that at least some importance is being put around
responsible AI principles as you're looking to democratize AI capabilities for the masses.
So I appreciate you addressing that before I even got to the question.
So it's great to see that it's a top area of priority.
I want to focus the sort of last segment of this talk around future directions.
So if you're familiar, one of the things that was recently announced by GitHub is Copilot, which is the AI pair programmer tool.
Now, from my point of view, Copilot and no code are fundamentally different things.
However, they do have some commonalities.
You know, they're focused on boosting development.
They're focused on democratizing computing.
So what are your thoughts on the future of software now that we have the introduction
of tools like Copilot?
And how do you see this reconciling with the no-code movement?
I haven't actually used Copilot myself. I did watch some of the demos, right? Actually, I thought it's a very cool tool. Because it's a tool, just like our no-code, reusable AI design studio is another tool, right? It really depends on the audience, on the users of the tools. So I would say tools like Copilot will advance as well, but their main users will be developers and programmers. And no-code AI, like what we are developing, its main audience will be people who don't know programming: who are, as I said, subject matter experts or domain experts. So I would say both kinds of tools will be needed. Actually, right now we're even contemplating a new tool. When we're talking about no-code AI, how about using conversation to actually design a conversational AI? On our platform, that can already be totally supported, which means you don't even need the GUI anymore. You can just say: hey, what kind of AI system would you like to build? Let me walk you through it. Let me help you create one, right? It's basically using conversation to design a conversational assistant. So you can see it's full of possibilities, but it depends on who the users of the tools are; I think all these different types of tools are needed. There will always be programmers, there will always be developers, so they'll need tools like Copilot. Other people will need tools like ours.
Yeah, I think that's a great distinction to make: the sort of end user, the target audience, is different between those two tools.
That's awesome.
The next thing I want to actually pick your brain on
is future directions for Juji specifically.
For me, I am quite familiar with Scratch. I'm not
sure if you're familiar with it. It's a free website or interactive coding platform that was
developed by the Lifelong Kindergarten Group at MIT Media Lab a couple years back, which has been
a very powerful tool for introducing computing at an early age to,
you know, kids all around the world. So I'm curious with, you know, platforms now like Juji,
do you see applications in early education? And more broadly, do you see potential for
no-code AI to transform, you know, adoption and exposure to early education around AI? Thank you, Bruke, again, for this awesome question.
Actually, there are high schools and universities that have already begun to use Juji as a platform to teach AI. A great aspect of this: for example, San Jose State University, right? Their business school wanted to teach non-STEM students, the business school students, about the core concepts of AI, and they use Juji as the platform. I also see high schools in Cambodia using Juji to teach their students the core concepts of AI as well. So we definitely support this effort, right? And it really opens up the space for computer science education. Traditionally, the thinking was that non-STEM students who cannot program wouldn't know what AI is about. But now tools like Juji have really changed that. People are actually writing on their resumes: I have used AI tools; I created this AI and accomplished this task. They write it down among their skills, like using Microsoft PowerPoint or Excel. Now they say: I can use a tool like Juji to create conversational AI agents, AI assistants, right? You know what I mean? Running a startup is really hard, but those kinds of use cases really make me smile at night. Oh, I'm sure. I'm sure. And I think, yeah, like you said, early education, early exposure can
really be transformational for opening up a whole new world of knowledge, of experience,
of career opportunities. So I think hearing about some of the applications of Juji in
transforming early education is certainly very exciting.
One thing I want to touch on, I'm sure you've seen the recent headlines around
the Google AI bot becoming sentient. So as someone working in this cognitive AI assistant space,
and having done research in this space for quite some time, what are your thoughts on if and when,
if at all, we will ever achieve sentient AI?
I guess I would take a step back to ask the question,
why do we want to make a sentient AI?
So it really depends on the purpose of the AI's use case and AI applications. How great would it be to have an AI which has no emotion, seriously, right?
This AI would not have any of the emotional burdens we humans have, but in the meantime it could exhibit empathy, exhibit empathetic behavior. Wouldn't that be great? Why would we require AI to have emotions, to feel anything at all, as long as it can exhibit empathetic behavior, right?
In this case, what I would say is: because we humans create AI, we want to best leverage AI's strengths and avoid AI's weaknesses. Similarly, we want to best leverage human strengths and avoid human weaknesses. Which means that when we make AI, we don't want to teach AI something we don't want AI to have, because you know how hard that is. For example, I read somewhere about call center workers, especially in 911 call centers, right? They are under so much emotional stress because of the calls; EMT work is the same thing, right? That's because they're humans. They have feelings. How great would it be to have AI that can do all of that and still be empathetic, still show that behavior, but without the emotional burdens.
I think you posed a really good question.
Why would we ever want AI to...
Be sentient, yeah. How can it benefit us? I guess I'm maybe a human-centric person, right? I always think about how AI would help us humans, how AI could help us advance humanity. Standing from this point of view, we know what we want to give to AI and what we don't want to give to AI. For the sake of the world, we are the makers, and the makers should decide what they want to make.
Well said.
To end off,
I guess I would like to
leave broadly with an open question.
As someone who has been working in research for quite some time and now is pioneering
such a great field with democratizing AI and cognitive AI systems, what are some of the
future directions, both opportunities and challenges that you see in the cognitive assistant
or AI space more broadly?
What excites you?
What excites me also scares me; it's the same thing. It's really this power of AI. This democratization of the power of AI gives humans tremendous power, tremendous augmentation, right? To do things we couldn't easily do in the past. But in the meantime, it's like a cliche, but it's real: with great power comes great responsibility. How should we handle that? How should we better handle our responsibility of having this very powerful tool in our hands? That's why I'm very excited to see it. Wow, this is amazing; we can do almost magical things. But in the meantime, it also means it could do great harm as well. How can we prevent that from happening before it happens?
With great power comes great responsibility.
I think with folks like you pioneering the field,
I think we are in good hands.
Thank you.
Thank you so much, Dr. Michelle Zhou,
for taking time to speak with us at ACM ByteCast.
Very much enjoyed the conversation. Thank you, Bruke. ACM ByteCast is a production of the Association for Computing
Machinery's Practitioner Board. To learn more about ACM and its activities, visit acm.org. For more information about this and other episodes, please visit our website
at learning.acm.org slash b-y-t-e-c-a-s-t. That's learning.acm.org slash ByteCast.