Right About Now with Ryan Alford - You Might Also Like: Smart Talks with IBM

Episode Date: December 10, 2024

Introducing Responsible AI: Why Businesses Need Reliable AI Governance, from Smart Talks with IBM. Follow the show: Smart Talks with IBM. To deploy responsible AI and build trust with customers, businesses need to prioritize AI governance. In this episode of Smart Talks with IBM, Malcolm Gladwell and Laurie Santos discuss AI accountability with Christina Montgomery, Chief Privacy and Trust Officer at IBM. They chat about AI regulation, what compliance means in the AI age, and why transparent AI governance is good for business. Visit us at: ibm.com/smarttalks. Explore watsonx.governance: https://www.ibm.com/products/watsonx-governance. This is a paid advertisement from IBM. See omnystudio.com/listener for privacy information. DISCLAIMER: Please note, this is an independent podcast episode not affiliated with, endorsed by, or produced in conjunction with the host podcast feed or any of its media entities. The views and opinions expressed in this episode are solely those of the creators and guests. For any concerns, please reach out to team@podroll.fm.

Transcript
Starting point is 00:00:00 Hey, Malcolm Gladwell here. I'm back in your feed today because we are re-releasing an episode of Smart Talks with IBM on a very timely topic, AI Governance and Why Regulation is Critical to Building Responsible and Accountable AI. I hope you enjoy it. Hello, hello. Welcome to Smart Talks with IBM, a podcast from Pushkin Industries, iHeart Radio, and IBM. I'm Malcolm Gladwell. This season, we're continuing our conversation
Starting point is 00:00:34 with new creators, visionaries who are creatively applying technology and business to drive change, but with a focus on the transformative power of artificial intelligence and what it means to leverage AI as a game-changing multiplier for your business. Our guest today is Christina Montgomery, IBM's chief privacy and trust officer. She's also chair of IBM's AI ethics board. In addition to overseeing IBM's privacy policy, a core part of Christina's job involves AI governance: making sure the way AI is used complies with international legal regulations, customized for each industry. In today's episode, Christina will explain why businesses need
Starting point is 00:01:22 foundational principles when it comes to using technology, why AI regulation should focus on specific use cases over the technology itself, and share a little bit about her landmark congressional testimony last May. Christina spoke with Dr. Laurie Santos, host of the Pushkin podcast, The Happiness Lab. A cognitive scientist and psychology professor at Yale University, Laurie is an expert on human happiness and cognition. Okay, let's get to the interview. So Christina, I'm so excited to talk to you today. So let's start by talking a little bit about your role at IBM. What does a chief privacy and trust officer actually do? It's a really dynamic profession and it's not a new
Starting point is 00:02:10 profession, but the role has really changed. I mean, my role today is broader than just helping to ensure compliance with data protection laws globally. I'm also responsible for AI governance. I co-chair our AI ethics board here at IBM, and I'm responsible for data clearance and data governance for the company as well. So I have a compliance aspect to my role, really important on a global basis, but I also help the business to competitively differentiate, because trust is really a strategic advantage for IBM and a competitive differentiator, as a company that's been responsibly managing the most sensitive data for our clients for more than a century now
Starting point is 00:02:51 and helping to usher new technologies into the world with trust and transparency. And so that's also a key aspect of my role. And so you joined us here on Smart Talks back in 2021 and you chatted with us about IBM's approach of building trust and transparency with AI. And that was only two years ago, but it almost feels like an eternity has happened
Starting point is 00:03:11 in the field of AI since then. And so I'm curious how much has changed since you were here last time. Were the things you told us before, you know, are they still true? How are things changing? You're absolutely right. It feels like the world has changed, really,
Starting point is 00:03:25 in the last two years. But the same fundamental principles and the same overall governance apply to IBM's program for data protection and responsible AI that we talked about two years ago. And not much has changed there from our perspective. And the good thing is we've put these practices and this governance approach into place, and we have
Starting point is 00:03:49 an established way of looking at these emerging technologies as the technology evolves. The tech is more powerful for sure, foundation models are vastly larger and more capable and are creating in some respects new issues, but that just makes it all the more urgent to do what we've been doing and to put trust and transparency into place
Starting point is 00:04:09 across the business to be accountable to those principles. And so our conversation today is really centered around this need for new AI regulation. And part of that regulation involves the mitigation of bias. And this is something I think about a ton as a psychologist, right? I know like my students and everyone who's interacting with AI is assuming that the kind of knowledge that they're getting from this kind of learning is accurate, right? But of course, AI is only as good
Starting point is 00:04:34 as the knowledge that's going in. And so talk to me a little bit about why bias occurs in AI and the scale of the problem that we're really dealing with. Yeah, well, obviously AI is based on data, right? It's trained with data, and that data could be biased in and of itself, and that's where issues could come up. They come up in the data. They could also come up in the output of the models themselves. So it's really important that you build bias considerations and bias testing into your product development cycle. And here's what we've been thinking about and doing at IBM.

Starting point is 00:05:11 We had some of our research teams deliver some of the very first toolkits to help detect bias years ago now, right, and deployed them to open source. And we have put into place for our developers here at IBM an Ethics by Design playbook, a sort of step-by-step approach, which also addresses bias considerations very fully. And we provide not only a point in the cycle when you should test for bias and consider it in the data; you have to measure it both at the data level and at the model or outcome level, and we provide guidance on which tools can best be used to accomplish that. So it's a really important issue. It's one you can't just talk about. You have to provide the technology, the capabilities, and the guidance to enable people to actually test for it.
Starting point is 00:06:04 Recently, you had this wonderful opportunity to head to Congress to talk about AI. And in your testimony, you mentioned that it's often said that innovation moves too fast for government to keep up. And this is something that I also worry about as a psychologist, right? Do policymakers really understand the issues that they're dealing with? And so I'm curious how you're approaching this challenge of adapting AI policies to keep up with the rapid pace of all the advancements we're seeing in AI technology itself. I think it's really critically important that you have foundational principles that apply not only to how you use technology, but whether you're going to use it in the first place
Starting point is 00:06:38 and where you're going to use it and apply it across your company. And then your program from a governance perspective has to be agile. It has to be able to address emerging capabilities, new training methods, et cetera. And part of that involves helping to educate, instill, and empower a trustworthy culture at a company, so you can spot those issues and ask the right questions at the right time. As we talked about during the Senate hearing, and as IBM has been saying for years, regulate the use, not the technology itself, because if you try to regulate the technology, you're very quickly going to find that regulation will absolutely never keep up with it. And so in your testimony to Congress,
Starting point is 00:07:22 you also talked about this idea of a precision regulation approach for AI. Tell me more about this. What is a precision regulation approach and why could that be so important? It's funny because I was able to share with Congress our precision regulation point of view in 2023, but that precision regulation point of view was published by IBM in 2020. We have not changed our position that you should apply the tightest controls, the strictest regulatory requirements to the technology where the end use and risk of societal harm is the greatest. That's essentially what it is. There's lots of AI technology that's used today that doesn't touch people, that's very low risk in nature.
Starting point is 00:08:07 And even when you think about AI that delivers a movie recommendation versus AI that is used to diagnose cancer, right? There are very different implications associated with those two uses of the technology. And so essentially what precision regulation is, is applying different rules to different risks, right? More stringent regulation for the use cases with the greatest risk.
Starting point is 00:08:31 And then also we build that out calling for things like transparency. You see it today with content, right? Misinformation and the like. We believe that consumers should always know when they're interacting with an AI system. So be transparent, don't hide your AI. Clearly define the risks. So as a country, we need to have some clear guidance, right? And globally as well, in terms of which uses of AI are higher risk, where we'll apply higher and stricter regulation and have sort of a common understanding of
Starting point is 00:09:05 what those high-risk uses are, and then demonstrate the impact in the cases of those higher-risk uses. So companies that are using AI in spaces where they can impact people's legal rights, for example, should have to conduct an impact assessment that demonstrates that the technology isn't biased. So we've been pretty clear about applying the most stringent regulation to the highest-risk uses of AI. And so, so far we've been talking about your congressional testimony
Starting point is 00:09:37 in terms of the specific content that you talked about, but I'm just curious, on a personal level, what was that like? It feels like, at a policy level, there's a kind of fever pitch going on with AI right now. What did it feel like to really have the opportunity to talk to policymakers and influence what they're thinking
Starting point is 00:09:54 about AI technologies in the coming century, perhaps? It was really an honor to be able to do that, and to be one of the first set of invitees to the first hearing. And what I learned from it is really two things. The first is really the value of authenticity. So both as an individual and as a company, I was able to talk about what I do. I didn't need a lot of advance prep.
Starting point is 00:10:23 I talked about what my job is, what IBM has been putting in place for years now. So this wasn't about creating something. This was just about showing up and being authentic. And we were invited for a reason. We were invited because we were one of the earliest companies in the AI technology space. We're the oldest technology company and we are trusted. And that's an honor. And then the second thing I came away with was really how important this issue is to society. I don't think I appreciated that as much until, following that experience, I had outreach from colleagues I hadn't worked with for years. I had outreach from family members
Starting point is 00:11:05 who heard me on the radio. You know, my mother and my mother-in-law and my nieces and nephews and my kids' friends were all like, oh, I get it. I get what you do now. Wow, that's pretty cool. You know, so that was really probably the best and most impactful takeaway that I had.
Starting point is 00:11:22 The mass adoption of generative AI happening at breakneck speed has spurred societies and governments around the world to get serious about regulating AI. For businesses, compliance is complex enough already, but throw an ever-evolving technology like AI into the mix and compliance itself becomes an exercise in adaptability. As regulators seek greater accountability in how AI is used, businesses need help creating governance processes comprehensive enough to comply with the law but agile enough to keep up with the rapid rate of change in AI development. Regulatory scrutiny isn't

Starting point is 00:12:04 the only consideration either. Responsible AI governance, a business's ability to prove its AI models are transparent and explainable, is also key to building trust with customers, regardless of industry. In the next part of their conversation, Laurie asked Christina what businesses should consider when approaching AI governance. Let's listen. So what's the particular role that businesses are
Starting point is 00:12:31 playing in AI governance? Like, why is it so critical for businesses to be part of this? So I think it's really critically important that businesses understand the impacts that technology can have, both in making them better businesses and on the consumers that they're supporting. Businesses need to be deploying AI technology that is in alignment with the goals they set for it, and that can be trusted. I think for us and for our clients, a lot of this comes back to trust in tech. If you deploy something that doesn't work, that hallucinates, that discriminates,

Starting point is 00:13:13 that isn't transparent, where decisions can't be explained, then at best you are going to very rapidly erode the trust of your clients, and at worst you're gonna create legal and regulatory issues for yourself as well. So trusted technology is really important. And I think there's a lot of pressure on businesses today to move very rapidly and adopt technology. But if you do it without having a program of governance in place, you're really risking eroding that trust.
Starting point is 00:13:42 And so this is really where I think strong AI governance comes in. Talk about, from your perspective, how this really contributes to maintaining the trust that customers and stakeholders have in these technologies. Yeah, absolutely. I mean, you need to have a governance program because you need to understand that the technology,
Starting point is 00:13:59 particularly in the AI space, that you are deploying is explainable. You need to understand why it's making the decisions and recommendations that it's making, and you need to be able to explain that to your consumers. I mean, you can't do that if you don't know where your data is coming from, or what data you're using to train those models. And you can't do it if you don't have a program that manages the alignment of your AI models over time, to make sure, as

Starting point is 00:14:24 AI learns and evolves through use, which is in large part what makes it so beneficial, that it stays in alignment with the objectives that you set for the technology. So you can't do that without a robust governance process in place. So we work with clients to share our own story here at IBM in terms of how we put that in place, but also in our consulting practice, to help clients work with these new generative capabilities and foundation models and the like
Starting point is 00:14:58 in order to put them to work for their business in a way that's going to be impactful to that business, but at the same time be trusted. So now I wanted to turn a little bit towards WatsonX governance. And so IBM recently announced their AI platform, WatsonX, which will include a governance component. Could you tell us a little more about WatsonX.Governance? Yeah, I mean, before I do that, I'll just back up and talk about the full platform and then lean into WatsonX, because I think it's important to understand the delivery of a full suite of capabilities
Starting point is 00:15:32 to get data, to train models, and then to govern them over their lifecycle. All of these things are really important. From the outset, you need to make sure that you have each piece. watsonx.ai, for example, is the studio to train new foundation models and generative AI and machine learning capabilities. And we are populating that studio with some IBM-trained
Starting point is 00:16:02 foundation models, which we're curating and tailoring more specifically for enterprises. So that's really important. It comes back to the point I made earlier about business trust and the need, you know, to have enterprise-ready technologies in the AI space. Then watsonx.data is a fit-for-purpose data store, or data lake. And then watsonx.governance, that's a particular component of the platform that my team and
Starting point is 00:16:33 the AI Ethics Board have really worked closely with the product team on developing. We're using it internally here in the Chief Privacy Office as well, to help us govern our own uses of AI technology and our compliance program here. And it essentially helps to notify you if a model becomes biased or gets out of alignment as you're using it over time. So companies are going to need these capabilities. I mean, they need them today to deliver technologies with trust. They'll need them tomorrow to comply with regulation, which is on the horizon.
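The kind of check Christina describes, noticing that a deployed model has drifted out of alignment, can be pictured with a small sketch. To be clear, this is not the watsonx.governance API; it is a hypothetical illustration of the underlying idea: keep a rolling window of live predictions, recompute a fairness metric over it, and raise an alert when the metric crosses a threshold.

    from collections import deque

    class BiasDriftMonitor:
        """Illustrative monitor: tracks (prediction, group) pairs from a live
        model and alerts when the disparate impact ratio drops too low."""

        def __init__(self, window: int = 1000, threshold: float = 0.8):
            self.records = deque(maxlen=window)  # keep only recent observations
            self.threshold = threshold

        def observe(self, prediction: int, group: int) -> None:
            # prediction: 1 = favorable outcome; group: 1 = privileged group
            self.records.append((prediction, group))

        def disparate_impact(self) -> float:
            favorable = {0: 0, 1: 0}
            total = {0: 0, 1: 0}
            for pred, grp in self.records:
                total[grp] += 1
                favorable[grp] += pred
            if total[0] == 0 or total[1] == 0 or favorable[1] == 0:
                return 1.0  # not enough signal to judge either way
            return (favorable[0] / total[0]) / (favorable[1] / total[1])

        def alert(self) -> bool:
            return self.disparate_impact() < self.threshold

    # In production the observations would come from the model's inference logs.
    monitor = BiasDriftMonitor(window=500)
    for pred, grp in [(1, 1), (1, 1), (0, 0), (0, 0), (1, 0)]:
        monitor.observe(pred, grp)
    if monitor.alert():
        print("fairness metric below threshold - flag the model for review")

A real governance platform layers much more on top of this (versioned metrics, explainability reports, audit trails), but the core loop is the same: monitor, measure, notify.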
Starting point is 00:17:17 And I think compliance becomes even more complex when you consider international data protection laws and regulations. Honestly, I don't know how anyone on any company's legal team is keeping up with this these days. But my question for you is really, how can businesses develop a strategy to maintain compliance in this ever-changing landscape? It's increasingly challenging. In fact, I saw a statistic just this morning that the regulatory obligations on companies have increased something like 700 times in the last 20 years.
Starting point is 00:17:43 So it really is a huge focus area for companies. You have to have a process in place in order to do that. And it's not easy, particularly for a company like IBM that has a presence in over 170 countries around the world. There are more than 150 comprehensive privacy regulations. There are regulations of non-personal data. There are AI regulations emerging. So you really need an operational approach to it
Starting point is 00:18:14 in order to stay compliant. But one of the things we do is we set a baseline, and a lot of companies do this as well. So we define a privacy baseline. We define an AI baseline. And we ensure, as a result of that, that there are very few deviations, because the requirements are incorporated in that baseline. So that's one of the ways we do it. Other companies, I think, are similarly situated in terms of doing that. But again, it is a real challenge for global companies. It's one of the reasons why we advocate for as much alignment as possible
Starting point is 00:18:47 in the international realm, as well as nationally here in the U.S., to make compliance easier. And not just because companies want an easy way to comply, but because the harder it is, the less likely there will be compliance. And it's not the objective of anybody,

Starting point is 00:19:11 governments, companies, consumers, to set legal obligations that companies simply can't meet. And so what advice would you give to other companies who are looking to rethink or strengthen their approach to AI governance? I think you need to start with, as we did, foundational principles. And you need to start making decisions about what technology you're going to deploy and
Starting point is 00:19:32 what technology you're not. What are you going to use it for and what aren't you going to use it for? And then when you do use it, align to those principles. That's really important. Formalize a program. Have someone within the organization, whether it's the chief privacy officer, whether it's some other role, a chief AI ethics officer, but have an accountable individual
Starting point is 00:19:54 and an accountable organization. Do a maturity assessment, figure out where you are and where you need to be, and really start, you know, putting it into place today. Don't wait for regulation to apply directly to your business, because then it'll be too late. So Smart Talks features new creators, visionaries like yourself who
Starting point is 00:20:14 are creatively applying technology and business to drive change. I'm curious if you see yourself as creative. I definitely do. I mean, you need to be creative when you're working in an industry that evolves so very quickly. I started with IBM when we were primarily a hardware company, and we've changed our business so significantly over the years,
Starting point is 00:20:40 and the issues that are raised with respect to each new technology, whether it be cloud, whether it be AI now, where we're seeing a ton of issues, or emerging issues in the space of things like neurotechnologies and quantum computing. You have to be strategic and you have to be creative in thinking about how you can adapt a company, agilely and quickly, to an environment that is changing so fast. And with this transformation happening at such a rapid pace, do you think creativity plays a
Starting point is 00:21:16 role in how you think about and implement, specifically, a trustworthy AI strategy? Yeah, I absolutely think it does. Again, it comes back to these capabilities, and I guess how you define creativity could be different, right? But I'm thinking of creativity in the sense of agility and strategic vision and creative problem-solving. I think that's really important in the world that we're in right now, being able to creatively problem-solve around new issues that are arising sort of every day.
Starting point is 00:21:52 And so how do you see the role of chief privacy officer evolving in the future as AI technology continues to advance? Like what steps should CPOs take to stay ahead of all these changes that are coming their way? So the role is evolving. In most companies, I would say pretty rapidly. Many companies are looking to chief privacy officers who already understand the data that's being used in the organization and have programs to ensure compliance with laws that require you
Starting point is 00:22:22 to manage that data in accordance with data protection laws and the like. It's a natural place and position for AI responsibility. And so I think what's happening to a lot of chief privacy officers is they're being asked to take on this AI governance responsibility for companies, and if not take it on, at least play a very key role working with other parts of the business in AI governance. So that really is changing. And if chief privacy officers are in companies that maybe haven't started thinking about AI yet, they should. So I would encourage them to look at the different resources that are already available in the AI governance
Starting point is 00:23:03 space. For example, the International Association of Privacy Professionals, the 75,000-member professional body for chief privacy officers, just recently launched an AI governance initiative and an AI governance certification program. I sit on their advisory board, but that's just emblematic of the fact that the field is changing so rapidly. And so, you know, speaking of rapid change, when you were back here on Smart Talks in 2021,
Starting point is 00:23:35 you said that the future of AI will be more transparent and more trustworthy. What do you see the next five to 10 years holding? When you're back on Smart Talks in, you know, 2026 or 2030, what are we going to be talking about when it comes to AI technology and governance? So I try to be an optimist, right? And I said that two years ago, and I think we're seeing it now come to fruition. And there will be requirements, whether they're coming from

Starting point is 00:24:03 the US, whether they're coming from Europe, whether they're just coming from voluntary adoption by clients of things like the NIST risk management framework, really important voluntary frameworks. You're going to have to adopt transparent and explainable practices in your uses of AI. So I do see that happening. And in the next five to 10 years, boy, I think we'll see more research into trust techniques, because we don't really know, for example, how to watermark. We're calling for things like watermarking. There'll be more research into how to do that. I think you'll see regulation that's specifically going to require those types of things.
Starting point is 00:24:45 So again, I think the regulation is going to drive research. It's going to drive research into these areas that will help ensure that we can deliver new capabilities, generative capabilities and the like, with trust and explainability. Thank you so much, Christina, for joining me on Smart Talks to talk about AI and governance. Well, thank you very much for having me. To unlock the transformative growth possible with artificial intelligence, businesses need to know what they wish to grow into first. Like Christina said, the best way forward in the AI future is for businesses to figure out their
Starting point is 00:25:22 own foundational principles around using the technology, then draw on those principles to apply AI in a way that's ethically consistent with their mission and compliant with the legal frameworks built to hold the technology accountable. As AI adoption grows more and more widespread, so too will the expectation from consumers and regulators that businesses use it responsibly. Investing in dependable AI governance is a way for businesses to lay the foundations for technology that their customers can trust, while rising to the challenge of increasing regulatory complexity.
Starting point is 00:26:02 Though the emergence of AI does complicate an already tough compliance landscape, businesses now face a creative opportunity to set a precedent for what accountability in AI looks like and rethink what it means to deploy trustworthy artificial intelligence. I'm Malcolm Gladwell. This is a paid advertisement from IBM. Smart Talks with IBM will be taking a short hiatus, but look for new episodes in the coming weeks. Smart Talks with IBM is produced by Matt Romano, David Jha, Nisha Venkat, and Royston Buzerve
Starting point is 00:26:39 with Jacob Goldstein. We're edited by Lidia Jean Kott. Our engineer is Jason Gambrell. Theme song by Gramoscope. Special thanks to Carly Migliori, Andy Kelly, Kathy Callahan, and the 8Bar and IBM teams, as well as the Pushkin marketing team. Smart Talks with IBM is a production of Pushkin Industries and Ruby Studio at iHeartMedia. To find more Pushkin podcasts, listen on the iHeartRadio app, Apple Podcasts, or wherever you listen to podcasts.
