Microsoft Research Podcast - Collaborators: Teachable AI with Cecily Morrison and Karolina Pakėnaitė

Episode Date: December 5, 2023

Transforming research ideas into meaningful impact is no small feat. It often requires the knowledge and experience of individuals from across disciplines and institutions. Collaborators, a Microsoft Research Podcast series, explores the relationships—both expected and unexpected—behind the projects, products, and services being pursued and delivered by researchers at Microsoft and the diverse range of people they’re teaming up with.

In this episode, Dr. Gretchen Huizinga speaks with Cecily Morrison, MBE, a Senior Principal Research Manager at Microsoft Research, and Karolina Pakėnaitė, who also goes by Caroline, a PhD student and member of the citizen design team working with Morrison on the research project Find My Things. An AI phone application designed to help people who are blind or have low vision locate their personal items, Find My Things is an example of a broader research approach known as Teachable AI. Morrison and Pakėnaitė explore the Teachable AI goal of empowering people to make an AI experience work for them. They also discuss how “designing for one” when it comes to inclusive design leads to innovative solutions and what they learned about optimizing these types of systems for real-world use (spoiler: it’s not necessarily more or higher-quality data).

Learn more:
Teachable AI Experiences (Tai X) | Project page
Understanding Personalized Accessibility through Teachable AI: Designing and Evaluating Find My Things for People who are Blind or Low Vision | Publication, October 2023
Microsoft Inclusive Design | Inclusive design resource center
DeafBlind Everest Project | Karolina (Caroline) Pakėnaitė personal website

Transcript
Starting point is 00:00:00 One of the things about teachable AI is that it's not about the AI system. It's about the relationship between the user and the AI system. And the key to that relationship is the mental model of the user. They need to make good judgments about how to give good teaching examples if we want that whole cycle between user and AI system to go well. You're listening to Collaborators, a Microsoft Research Podcast showcasing the range of expertise that goes into transforming mind-blowing ideas into world-changing technologies. I'm Dr. Gretchen Huizinga. Today I'm talking with Cecily Morrison, MBE, a Senior Principal Research Manager at
Starting point is 00:00:56 Microsoft Research, and Carolina Picanete, a PhD student and a participant on the citizen design team for the teachable AI research project, Find My Things. Cecily and Carolina are part of a growing movement to bring accessible technologies to people with different abilities by closely collaborating with those communities during research and development. Cecily, Carolina, welcome to Collaborators. Thank you, Gretchen. Before we hear more about Find My Things, let's get to know the both of you. And Cecily, I'll start with you. Give us a brief overview of your background, including your training and expertise, and
Starting point is 00:01:35 what you're up to in general right now. We'll get specific shortly, but I just want to have sort of the umbrella of your raison d'etre or your reason for research being, as it were. Sure. I'm a researcher in human-computer interaction with a very specific focus on AI and inclusion. Now, this for me brings together an undergraduate degree in anthropology, understanding people, a PhD in computer science, understanding computers and technology, as well as a life role as a parent of a disabled child. And I'm currently leading a team that's really trying to push the boundaries of what's possible in human AI interaction,
Starting point is 00:02:11 and motivated by creating technologies that lead us to a more inclusive world. As a quick follow-up, Cecily, for our non-UK listeners, tell us what MBE stands for, and why you were awarded this honor. Yes, MBE. I also had to look it up when I first received the award. It stands for Member of the British Empire, and it's part of the UK honor system. My MBE was awarded in 2020 for services to inclusive design. Now, much of my career at Microsoft Research has been dedicated to innovating inclusive technology
Starting point is 00:02:47 and then ensuring that it gets into the hands for those whom we made it for. Right. Was there a big ceremony? Things were a little bit different during the COVID times, but I did have the honor of going to Buckingham Palace to receive the award. And it was a wonderful time bringing my mother
Starting point is 00:03:03 and my manager of the important women around me who've made it possible for me to do the award. And it was a wonderful time bringing my mother and my manager of the important women around me who've made it possible for me to do this work. That's wonderful. Well, Carolina, let's talk to you for a minute here. You're one of the most unique guests we've ever had on this podcast. Tell us a bit about yourself. Obviously, we'd like to know where you're studying and what you're studying. But this would be a great opportunity to share a little bit about your life story, including the rare condition that brought you to this collaboration. Thank you so much again for having me. What an amazing opportunity to be here on the podcast.
Starting point is 00:03:35 So I'm a PhD student at the University of Bath, looking into making visual photographs accessible to text. Maybe you can tell from my speech is that I am deafblind. So I got diagnosed with Dyslexia Syndrome Type 2 Ray at the age of 19, which means that I was born hard of hearing, but then started to recite around my early 20s. It has been a journey accepting this condition, but it also brought me some opportunities, like becoming part of this collaboration for Microsoft Research Project. Carolina, a quick follow-up for you. Because of the nature of your condition,
Starting point is 00:04:26 you've encountered some unique challenges, one of which made the news a couple of years ago. Can you talk a little bit about how perceptions about people with varying degrees of disability can cause skepticism both from others and, in fact, as you've pointed out yourself? What can we learn about this here? Yes.
Starting point is 00:04:44 So I have experienced many misunderstandings In fact, as you've pointed out yourself, what can we learn about this here? Yes. So I have experienced many misunderstandings, and I know I'm not alone. So I have a general reason of progressive conditions at a stage where my stressors have registered me as blind instead of party sighted. My frontal sight is still excellent, so that means I can still make eye contact, read books, do photography. Some people even tell me that I don't look blind, but does that even mean? So since my early 20s, I became very, very clumsy. I stepped over children, walked into elderly, stepped on cattails. I've been too lonely in this car accident so my brain no longer processes the world in the same way as before. But for the longest time in my sightless journey, I felt like I had an imposter syndrome,
Starting point is 00:05:54 being completely skeptical about my own diagnosis, despite the scrumly experiences of social factors and genetic confirmation. I think the major reason is because of a lack of representation of the blind community in the media. Blindness is not black and white. Statistically, most of us have some remaining vision. Disability is not about having a certain look, which also applies to people with some form of visual impairment. I love it how I can have so many more new Instagrammers and YouTubers who are just like me, but I still think there is a long way to go before having disability representation becoming a norm for great understanding and proficiency. You know, I have to say, this is a great reminder that there is a kind of a spectrum of ability and that we should be gracious to people as opposed to critical of them. So
Starting point is 00:07:08 thank you so much for that understanding that you bring to this, Carolina. Before we get into specifics of this collaboration, and that's what we're here for on this podcast, I think the idea of teachable AI warrants some explication. So Cecily, what is teachable AI and why is it an important line of research, including its applications in things like Find My Things? Gretchen, that's a great question. Teachable AI enables users to provide examples or higher level constraints to an AI model in order to personalize that AI system to meet their own needs. Now, most people are familiar with personalization. Our favorite shopping site or entertainment service offers us personalized suggestions, but we don't always have a way to
Starting point is 00:07:56 shape those suggestions. So you can imagine it's pretty annoying, for example, if you keep being offered nappies by your favorite shopping service because you've been buying them for a friend, but actually you don't have or even plan to have a baby. So now Teachable AI gives us the user agency in personalizing that AI system to make a choice about what are the things you want to be reflected in yourself, your identity, when you work or interact with that AI system. Now, this is really important for AI systems that enable inclusion. So if we consider disability to be a mismatch between a person's capabilities and their environment, then AI has a really significant role to play in reducing that mismatch. However, as we were working on this, we soon discovered that the number of potential mismatches between a person and their environment
Starting point is 00:08:43 is incredibly large. I mean, it's like the number of potential mismatches between a person and their environment is incredibly large. I mean, it's like the number of stars, right? Right, right. Because disability is a really heterogeneous group. But then we say, oh, well, let's just consider people who are blind. Well, as Carolina has just shown us, even people who are blind are very, very diverse.
Starting point is 00:09:00 So there are people with different amounts of vision or different types of vision. People have different experiences of the world with vision or without people can lose their vision later in life they can be born blind people have different personalities some people are happy to go with whatever some people not so much people are from different cultures maybe they they are used to being in an interdependent context other people might have intersecting disabilities like deafblindness and have, again, its own set of needs.
Starting point is 00:09:28 So as we got into building AI for accessibility and AI for inclusion more generally, we realized that we needed to figure out how can we make AI systems work for individuals, not quote unquote people with disabilities. So we focused on teachable AI so that each user could shape the AI system to work for their own needs as an individual in a way that they choose, not somebody else. So Find My Things is a simple but working example of a teachable AI system. And in this example, people can personalize a object finder or object detector for the personal items that matter to them. And they can do this by taking four videos of that personal item that's important to them and then training on their phone
Starting point is 00:10:12 a little model that will then recognize those items and guide them to those items. So you might say, well, recognizing objects with phone, we can do that now for a couple of years and that's very true. But much of what's been recognizable wasn't necessarily very helpful for people who are blind and low vision. Now, it's great if you can recognize doors, chairs, but carnivores and sombrero hats, you know, perhaps this is less handy on a day-to-day basis. But your own keys, your friend's front door, your guide cane, maybe even the TV remote that somebody's always putting somewhere else. I mean, these are the things that people want to keep track of. And each person has an own set of things that they want. So the Find My Things research prototype allows people to
Starting point is 00:10:54 choose what they want to train or to teach to their phone and then be able to teach it and to find those things. Okay, so just to clarify, I have my phone, I've trained it to find certain objects that I want to find. What's the mechanism that I use to say, you know, do you just say, find my keys and your phone leads you there through beeps or, you know, Marco Polo, closer, warmer? How does it work? Well, that's a great question. So you then have a list of things that you can find. So for most people, there's five or 10 things that are pretty important to them. And then you would find that. Then you would scan your phone around the room.
Starting point is 00:11:34 And you need to be within sort of four to six meters of something that you want to find. So if it's in your back studio in the garden, it's not going to find it. It's not telepathic in that regard. It's a computer vision system using vision. If it's underneath your sofa, you probably won't find it either. But we found that with all things human-AI interaction, we rely on the interaction between the person and the AI to make things work. So most people know where things might be.
Starting point is 00:12:02 So if you're looking for a TV remote, it's probably not in the bathtub, right? It's probably going to be somewhere in the living room. But you know, your daughter or your brother or your housemate might have dropped it on the floor. They might have accidentally taken it into the kitchen, but you'll probably have some good ideas of where that thing might be. So this is then going to help you find it a little bit faster so you don't need to get on your hands and knees and feel around to where it is. Gotcha. The only downside of this is find my phone, which would help me find my things. Anyway, that's all.
Starting point is 00:12:35 I think Apple has solved that one. They do. They have an app, find my phone. I don't know how that works. Well, listen, let's talk about the collaboration a bit and talk about the meetup, as I say, on how you started working together. I like to call this bit How I Met Your Mother because I'm always interested to hear each side of the collaboration story. So, Carolina, why don't you take the lead here and then Cecily can fill in the blanks from her side on how you got together. Yes, I found this opportunity to join this celebration for Microsoft Research Project as a
Starting point is 00:13:12 citizen designer through an email newsletter from my charity, LinkedIn. On the newsletter, it looks like it was organized in a way where you are way more than just a participant for another research project. Like, yeah, it looks like an amazing opportunity to actually get some experiences and skills for gaining just as much as giving. So, yeah, I thought that I shouldn't miss it out. So you responded to the email, yeah, I'm in. Cecily, what was going on from your side? How did you put this out there with this charity and bring this thing together?
Starting point is 00:13:55 So VICTA is a fantastic charity in the UK that works with blind and low vision young people up to the age of 30. And they're constantly trying to bring educational and meaningful experiences to the people that they serve. And we thought this would be a great moment of collaboration where we could bring an educational experience about learning how to do design and they could help us reach out to the people who might want to learn about design and might want to be part of this collaboration. So Carolina was one of many. How many other citizen designers on this project did you end up with? Oh, that's a great question. We had a lot of interest, I do have to say. And from there, we selected eight citizen designers from around the UK who were willing
Starting point is 00:14:36 to make the journey to Cambridge and work with us over a period of almost six months. People came up to us about monthly, although we did some virtual ones as well. Well, Cecily, let's talk about this idea of citizen designers. I like that term very much. Inclusive design isn't new in computer human interaction circles or human computer interaction circles. And you already operate on the principle of nothing about us without us. So tell us how the concept of citizen designer is different and why you think citizen designers take user input to another level. Sure. I think citizen designer is a really interesting concept and one that we need more of. But let me first start with inclusive design and how that brings us to think about citizen designers.
Starting point is 00:15:26 So inclusive design has been a really productive innovation tool because it brings us unusual constraints to the design problem. Within the Microsoft inclusive design toolkit, we refer to this as designing for one. And once you've got this very novel design that emerges, we then optimize it to work for everyone or we extend it to many. So this approach really jogs the mind to radical solutions. So let me give you just one example. In years past, we developed a physical coding language to support blind and sighted children to learn to code together. So we thought, oh, okay, sighted children have blocks on a screen, so we're going to make blocks on a table. Well, our young design team lined up the blocks on the table, put their hands in their lap,
Starting point is 00:16:16 and I looked at them and I thought, we failed. So we started again and we said, okay, show us, and we worked with them to show us what excites the hands. You know, here are kids who live through their hands. You know, what are the shapes? What are the interactions? What are the kinds of things they want to do with their hands? And through this, we developed a completely different base idea and design. And we found that it didn't just excite the hands of children who are blind or low vision, but it excited the hands of all children.
Starting point is 00:16:37 They had brought us their expertise in thinking about the world in a different way. And so now we have this product Code Jumper, which kids just can't put down. So that's great. So we know that inclusive design is going to generate great ideas. We also know that diverse teams generate the best ideas because diverse life experience can prompt us to think out of the box. But how do we get diverse teams when it can be hard for people with disabilities to enter the field of design and technology. So design assumes often good visual skills, it assumes the ability to draw, and that can knock out a lot of people who might be great at designing technology experiences without those
Starting point is 00:17:15 skills. So with our citizen design team, we wanted to open up the opportunity to young people who are blind and low vision to really set the stage for them to think about what would a career in technology design be like? Could I be part of this? Can I be that generation who's going to design the next cohort of accessible, inclusive technologies? So we did this through teaching key design skills, like the design process itself, prototyping, as well as having this team act as full members of our own R&D team, so in an apprenticeship style. So our citizen designers weren't just giving feedback as participants might, but they were creating prototypes, running A-B tests.
Starting point is 00:17:54 And it was our hope, and I think we succeeded in making it a give-give situation. We were giving them a set of skills, and they were giving us their design knowledge that was really valuable to our innovation process. That is so awesome. I'm just thinking of the sense of belonging that you might get instead of being, as Carolina kind of referred to, it's not just another user research study where you'll go and be part of a project that someone else is doing. You're actually integrally connected to the project. And on that note, Carolina, talk a little bit about what it's like to be a citizen designer. What were some of your
Starting point is 00:18:32 aha moments on the project? Maybe the items that you wanted to be able to find and what surprises you encountered in the process of developing a technique to teach a personal item? Yes, so it was incredibly fascinating to play the role of a citizen designer and testing a piece of the layout to use and providing server comments. It took me a bit of time to really understand how the school is different from existing ones, but then I realized it's literally in the name, a teachable AI tool.
Starting point is 00:19:14 So it's a tool designed for teaching you about your very own personal items. Yeah, your items may not look like a typical standard item, maybe personal items, then the engravings or stickers or maybe it's a unique gadget or maybe it's a medical devices. It's not about teaching every single item that you earn, but rather it's a tool that learns to identify what matters most to you. So yeah, I have about five to ten small personal items that I always carry with me and most of them are like very very very important to me like losing a bus pass means I can't get anywhere, losing a key means I can't get home because these items are small and I use them daily. That means they are also being lost most commonly. So now I have a tool that is able to locate my personal items if they happen to be lost.
Starting point is 00:20:40 Right. And as you said earlier, you do have some sight. It's tunnel vision at this point. So the peripheral part is more challenging for you. But having this tool helps you to focus in a broader spectrum of a visual sight. great time to get a bit more specific about your Teachable AI discovery process. Tell us some research stories. How did you go about optimizing this AI system? And what things did you learn from both your successes and your failures? Yes, lots of research stories with this, I'm afraid. But I think the very first thing we did was, okay, a user wants to teach this system. So we need to tell the user what makes a good teaching example. Well, we don't know. Actually, we assumed we did know because in
Starting point is 00:21:30 machine learning, the idea is more data, better, quote unquote, quality data, and the system will work better. So the first thing that really surprised us when we actually ran some experimental analysis was that more data was not better and higher quality data or data that has less blur or is perfectly framed was also not better. So what we realized is that it wasn't our aim to kind of squeeze as much data as we could from the users, but really to get the data that was the right kind of data. So we did need the object in the image. It's really hard to train a system to recognize an object that's not there at all. But what we needed was data that looked exactly like what the user was going to use
Starting point is 00:22:13 when they were finding the object. So if the user moves the camera really fast and the image becomes blurry, then we need those teaching examples to have blur too. So it was in understanding this relationship between the teaching examples and the user that really helped us craft a process that was going to help the user get the best result from the system. One of the things about teachable AI
Starting point is 00:22:35 is that it's not about the AI system. It's about the relationship between the user and the AI system. And the key to that relationship is the mental model of the user. They need to make good judgments about how to give good teaching examples if we want that whole cycle between user and AI system to go well. So I remember watching Carolina taking her teaching frames and she was moving very far away. And I was thinking, hmm, I don't think that data is going to
Starting point is 00:22:59 work very well because there's just not going to be enough pixels of the object to make a good representation for the system. So I asked Carolina about her strategy and she said, well, if I wanted to work from far away, then I should take teaching examples from far away. And I thought, ah, that's a very logical mental model. But unfortunately, we've broken the user's mental model because that's not actually how the system works because we were cropping frames and taking pixels out and doing all kinds of fancy image manipulation to actually to improve the performance under the hood. So I think this was an experience where we thought, ah, we want the user to develop a good mental model. But to do that, we need to actually structure this teaching
Starting point is 00:23:34 process. They don't need to think so hard. And we're guiding them into the kinds of things that make the system work well, as opposed to not. And then they don't need to guess. So the other thing that we found was that teaching should be fast and easy. Otherwise it's just too much work. No matter how personalized something is, if you have to work too hard, it's a no-go. So we thought, oh, we want this to be really fast. We want to take as few frames as possible. And we want the users to be really confident that they've got the object in the frame, because that's the one thing we really need. So we're going to tell them all the time, if the object's in in frame, it's in frame, it's in frame, it's in frame, it's in frame, it's in frame, it's in frame.
Starting point is 00:24:07 Well, there's citizen designers, including Carolina came back to us and said, you know, this is really stressful. I'm constantly worrying, is it in frame? Is it in frame? Is it in frame? And actually the cognitive load of that, even though we're trying to make the process really, really easy, was really overwhelming. And one of them said to us,
Starting point is 00:24:26 well, why don't I just assume that I'm doing a good job unless you tell me otherwise? And that really helps shift our mindset to say, well, okay, we can help these by giving them a gentle nudge back on track, but we don't need to grab all their cognitive attention to make the perfect video. That's so hilarious. Well, Cecily, I want to stay with you for a minute and discuss the broader benefits of what you call designing outside the mean. And despite the challenges of developing technologies, we've seen specialized research deliver the so-called curb cut effect over and over. And you've already alluded to this a bit earlier, but clearly people with blindness and low vision aren't the only ones who can't find their things. So might this research help other people? Could it be
Starting point is 00:25:11 something I could incorporate into my phone? That's a great question. And I think an important question when we do any research is how do we broaden this out to meet the widest need possible? So I'm going to think about, rather than find my thing specifically, I'm going to think about teachable AI. And teachable AI should benefit everybody who needs something specific to themselves. And who of us don't think that we need things to be specific to ourselves in this day and age?
Starting point is 00:25:36 But it's going to be particularly useful for people on the margins of technology design for many reasons. So it doesn't matter, it could be where your home is different or the way you go about your daily lives or perhaps the intersection of your identities. By having teachable AI, we make systems that are going to work for individuals, regardless of the labels that you might have or the life experience you might have. Well, you want an AI system that
Starting point is 00:25:57 works for you. And this is an approach that's moving us in that direction. You know, I love, I remembered what you said earlier, and it was for individuals, not people with disabilities. And I just love that framing anyway, because we're all individuals, and everyone has some kind of a disability, whether you call it that or not. So I just love this work so much. Carolina, back to you for a minute. You have said you're a very tactile person. What role does haptics, which is the touch, feel part of computer science, play for you in this research? And how do physical cues work for you in this technology? Well, yeah. So, because I'm deaf-blind, I think my brain naturally creates the information through senses which I have full access to.
Starting point is 00:26:48 For me it's text. So I find it very stimulating when the tools are but I think it's also a good accessibility cue as well. For example, one big instance happened that a citizen designer was pointing my camera at an object and being hard of hearing, that means I couldn't hear what it was saying so I had to bring it close to my ear and that meant that the object was lost in the camera view. Right. So yeah, I think having tactile cues could be very beneficial for people like me who are deafblind, but also others.
Starting point is 00:27:50 Like, for example, you don't always want your phone to be on sound for the time that maybe in a quiet train, in a quiet tube. You don't want your phone to stop talking. You might be feeling self-conscious. So yeah, I think. Right. a tube. You don't want your phone to stop talking. You might be feeling subconscious. So, yeah, I think always adding those tactile cues will benefit me and everyone else. Yeah, so to clarify, is haptics or touch involved in any of this particular teachable AI technology, Cecily? I know that
Starting point is 00:28:26 Carolina has that as a, you know, a want to have kind of thing. Where does it stand here? Yeah, no, I think Carolina's participation was actually fairly critical in us adding vibration cues to the experience. Yeah, so it does use the haptic. We use auditory, visual, and vibration as a means of interaction. And I think in general, Yeah. So it does people to be as flexible as possible for this context and for their own needs to make an experience work for them. Right? Yeah. And I feel like this is already kind of part of our lives when our phones buzz or, you know, vibrate, or when you wear the watch that gives you a little tip on your wrist, that you've got a notification or you need to turn left or whatever you're using
Starting point is 00:29:26 it for. Cecilia, I always like to know where a project is on the spectrum from lab to life, as we say on this show. What's the status of Teachable AI in general and Find My Things in particular? And how close is it to being able to be used in real life by a broader audience than your citizen designers and your team? So it's really important for us that the technologies we research become available to the communities to whom they are valuable. And in the past, we've had a whole set of partners, including Seeing AI, American Printing House for the Blind, to help us take ideas, research prototypes, and make them into products that people can have. Now, Teachable AI is a grand vision. I think we are showed with this work and Find My Things that the machine learning is there.
Starting point is 00:30:14 We can do this and it's coming. And as we move into this new era of machine learning with these very large models, we're going to need it there too because the larger the model, the more personalized we're probably going to need the experience. In terms of Find My Things, we are also on that journey to finding the right opportunity to bring it out to the blind community. So this has been fascinating. There's so many more questions I want to ask, but we don't have
Starting point is 00:30:38 a lot of time to ask them all. I'm sure that we're going to be watching as this unfolds and probably becomes part of all of our lives at some point, thanks to the wonderful people doing the research. I like to end the podcast with a little future casting from each of my guests. And Carolina, I'd like you to go first. I have a very specific question for you. Aside from your studies and your research work, you've said you're on a mission. What's that mission? And what does Mount Everest have to do with it? So firstly, I'm hoping to complete my PhD this year. That's my big priority for this year.
Starting point is 00:31:17 And then I will be on a mission. And this is one that I feel a little bit nervous to share, but also very excited. As an adventurer at heart, my dream is to summit Mount Everest. So before it always seemed like a fantasy, but I recently came back from a space stamp set just a few months ago and I met some mountaineers who were on their way to the top and I found myself quietly saying what if and then as I was thinking how I'm slowly losing my sight I realized that if I do to be now or never. So when I came back, I decided I'd just make some action. So we started two different organizations,
Starting point is 00:32:40 and surprisingly, it prediction. She is eager to document this journey and yeah, it seems like something might be happening. So this mission isn't just about me potentially becoming the first deaf blind person to submit letters, but also a commitment to raising awareness and providing representation for the blind and deafblind community. I hope to stay in the research world, and I believe this mission has some potential for research. So I think, for example, I'm looking for accessibility tools for me to climb Everest so that I can be the best climber I can be as a deafblind person being independent of independent part of the team, or maybe make a documentary film, a multi-sensory experience accessible to a wider community, including deaf lives.
Starting point is 00:33:56 So, yeah, I'm actively looking for collaborations and would love to be contacted by anyone. I love the fact that you are bringing awareness to the fact, first of all, that the DeafBlind community or even the blind community isn't a one-size-fits-all. So, yeah, I hope you get to summit Everest to be able to see the world from the tallest peak in the world before anything progresses that way. Well, Cecily, I'd like to close with you. Go with me on a little forward thinking, backward thinking journey. You're at the end of your career looking back. What have you accomplished as a researcher and how has your work disrupted the field of accessible technology and made the world a better place? Where would I would like to be? I would say more like where would we like to be?
Starting point is 00:34:47 So in collaboration with colleagues, I hope we have brought a sense of individual's agency in their experience with AI systems, which allow people to shape them for their own unique experience, whoever they might be and wherever they might be in the world. And I think this idea
Starting point is 00:35:05 is no less important, or perhaps it's even more important, as we move into a world of large foundation models that underpin many or perhaps all of our experiences as we go forward. And I think particularly large foundation models will be a really significant change to accessibility, and I hope the approach of teachability will be a significantly positive influence in making those experiences just what we need them to be. And I have to say, in my life role, I'm personally really very hopeful for my own blind child's opportunities in the world of work in 10 years time. At the moment, only 25% of people who are blind or low vision work. I think technology can play a huge role
Starting point is 00:35:45 in getting rid of this mismatch between the environment and a person and allowing many more people with disabilities to enjoy being in the workplace. This is exciting research and really a wonderful collaboration. I'm so grateful, Cecily Morrison and Carolina Pakenete for coming on the show and talking about it with us today. Thank you so much. Thank you, Gretchen. And thank you, Carolina.
