HealthTech AI Governance: How Leaders Navigate Privacy & Structural Challenges

Episode Date: February 24, 2026

Welcome back, everyone. Today we are tackling something that sits on a lot of HealthTech CEOs’ agendas, AI governance. In particular, how leaders in Canada are managing privacy constraints and structural challenges while trying to integrate AI at scale. With me is a guest who has spent a lot of time inside these decisions with executive teams. To start us off, when Canadian HealthTech organizations try to scale AI beyond pilots, what are the biggest challenges you are seeing?

Augmentr Studio
City: Toronto
Address: 339 1/2 Main Street
Website: https://www.augmentrstudio.com/

Transcript
Starting point is 00:00:05 Welcome back, everyone. Today, we are tackling something that sits on a lot of health tech CEOs' agendas: AI governance. In particular, how leaders in Canada are managing privacy constraints and structural challenges while trying to integrate AI at scale. With me is a guest who has spent a lot of time inside these decisions with executive teams. To start us off, when Canadian health tech organizations try to scale AI beyond pilots, what are the biggest challenges you are seeing? The first pattern is uncertainty about expectations. Many health care organizations are working in a rapidly evolving AI environment without a single clear governing authority spelling out what good looks like across all settings.
Starting point is 00:00:49 Leaders are making calls on AI deployment while knowing that privacy guidance, accreditation expectations, and public scrutiny are still moving targets. The second pattern is that AI is often treated as a series of point tools instead of part of a structured decision system. Different departments adopt analytics or automation independently, and executives only see fragments. That makes it hard to answer basic questions such as which AI tools exist in the organization, who approved them, and how they are being monitored over time. Right, so they're making decisions in real time, but without a stable frame for how all of this fits together? Exactly, and underneath that, there are practical gaps.
Starting point is 00:01:33 In a lot of places, AI tools still go live with limited local validation or structured bias assessment. You see strong vendor claims, maybe some external evidence, but less rigorous testing on the organization's own data and patient mix before full deployment. That is where concerns about patient safety and equity start to show up. Let's stay with that for a moment. When an AI tool is not validated properly in context, what kinds of risks are executives actually taking on? AI systems are not static.
Starting point is 00:02:05 They are sensitive to the data they see and to the way practice evolves over time. If a model was developed on one population and then used on a different one, performance can degrade, which people often refer to as model drift. Without ongoing monitoring, you might not notice changes in accuracy until there are near misses or clear safety events. There is also the equity angle. If training or validation data underrepresents certain groups, the tool can perform systematically worse for those patients.
Starting point is 00:02:37 That can reinforce existing disparities, even when everyone involved has good intentions. The risk is not only clinical, it is reputational and operational as well. Mm-hmm, interesting. And then we add the Canadian privacy context on top of that. The mix of federal and provincial laws makes things more complicated. Absolutely, yes. The federal structure means that provincial health privacy laws such as Ontario's
Starting point is 00:03:02 Personal Health Information Protection Act sit alongside broader federal frameworks, and each province has its own specifics. Health tech organizations that operate across jurisdictions or that work with multiple provider partners have to reconcile different expectations about data residency, consent models, and disclosure. On top of that, health systems and vendors are dealing with fragmented data infrastructure. Interoperability is still uneven, so leaders are not only asking, what are we allowed to do with this data for AI,
Starting point is 00:03:35 but also, can we even assemble the data we need in a way that respects all the constraints? It feels less like a single rulebook and more like several overlapping scorecards that have to be reconciled. Huh, which puts a lot of pressure on how they think about protecting patient information. When executives ask you what they need to get right on privacy, where do you start? usually starts with principles rather than legal detail. Specific interpretations belong with privacy and legal counsel, not with advisors. At a high level, organizations need clarity on what data is being used for which AI purpose, how that data is protected in storage and in transit, for example, through encryption and access controls,
Starting point is 00:04:19 who can see what, under which conditions and how that access is logged and reviewed, how incidents are detected, escalated, and communicated. Contracts with vendors then need to reflect that structure. That often means being explicit about permitted uses of data, whether de-identification is required, limits on downstream sharing, and the right to audit or review practices over time. Transparency with patients and partners is also critical,
Starting point is 00:04:47 not just as a compliance item, but as a trust practice. People want to know if and how AI is involved in their care or in decisions that affect them. That focus on structure and transparency sets up the next part of the conversation, which is how governance actually turns these principles into day-to-day decisions. Before we go there, a brief word from our sponsor. This episode is brought to you by Augmentor Studio. Augmentor Studio works with health tech and biotech CEOs to design leadership architectures
Starting point is 00:05:18 and operating systems that can handle AI, clinical, and commercial decisions in the same frame. Instead of adding more meetings, the work focuses on decision forums, escalation paths, and cadences that reduce stall and make complex calls more repeatable. Augmenter does not replace regulatory, legal, or clinical counsel. It helps leadership teams integrate those inputs into a coherent execution system, so innovation, risk, and commercialization move together instead of pulling in different directions. You can learn more at the link in the description. Let's talk about that governance layer. If we build on what you just described about privacy and validation, how do leaders implement governance strategies that actually change how AI is selected, deployed, and monitored?
Starting point is 00:06:07 Smart governance starts with making AI decisions visible. That usually means establishing a cross-functional AI governance group, rather than leaving everything to a single champion or department. A typical structure brings together clinical leaders, IT and security, data and analytics, privacy, and someone who understands procurement and finance. In some organizations, quality and risk leaders are also central. The point is to get buyers and users in the same conversation, the people who sign contracts and carry institutional accountability,
Starting point is 00:06:41 and the people who live with the tools and daily workflows. Okay, so the group becomes a place where both the commercial logic and the clinical reality show up at the same time? Exactly. Once the group exists, the next step is to define what it owns. For example, which categories of AI use cases must come through this forum? What approval criteria apply, such as clarity of intended use, local validation, privacy posture, and resource impact? How decisions are documented and how progress is tracked after go-live. Then you need cadence. Governance that only meets when there is a crisis is not governance.
Starting point is 00:07:22 Monthly or regular sessions with a pipeline view of AI initiatives, from early ideas through pilots to scale deployments, give leadership a way to see where things are stalling or accumulating risk. And that connects to continuous monitoring, right? You mentioned earlier that you cannot just deploy and forget. Oh, absolutely. Monitoring is often where systems are weakest. Effective practice includes defining upfront
Starting point is 00:07:49 what performance indicators matter for a given tool. tool, for example, accuracy, turnaround time, or specific safety markers, setting review intervals that match the level of risk, making it easy for clinicians and staff to flag issues or unexpected behavior and ensuring those signals get back to the governance group, not just to a local project lead. It helps to think of this as an extension of existing quality and safety systems rather than a separate AI track. When AI-related issues flow through familiar channels, people are more likely to use them.
Starting point is 00:08:24 I see. Makes sense. Let's bring in the human element directly. If clinicians and other staff do not understand how AI fits into their work, governance can look good on paper but still fail. How are organizations addressing that? Good question. A well-informed workforce is essential. The organizations that are making progress tend to treat AI literacy as role-specific. For clinicians, education focuses on where AI is used, what it is and is not intended to do, and how to interpret outputs in the context of clinical judgment. For operational and administrative staff, the emphasis is on data handling, privacy expectations, and when to escalate questions.
Starting point is 00:09:08 For executives and board members, it is about portfolio-level risk trade-offs and how AI connects to strategy and capacity. On top of that, many teams adopt a human-in-the-loop approach for high-risk use cases, where AI supports decisions but does not replace professional accountability. That combination, clear roles plus oversight, tends to be more robust than simply telling people to use AI carefully. Bias and equity have come up a few times already. When leaders want to take that seriously, beyond generic statements, what does good practice
Starting point is 00:09:41 look like? It starts with questions. When evaluating a tool, whether it is built internally or provided by a very, vendor, leaders can ask, what data was used to train and validate this model? How performance was measured across different demographic or clinical subgroups, whether there are known limitations for certain patient groups. Then there is local testing. Running the tool on your own historical data with attention to populations that matter in your setting
Starting point is 00:10:12 can surface performance gaps early. Governance teams can also request that bias-related metrics be part of regular performance reviews, not a one-time check. Finally, feedback channels matter. If clinicians notice patterns, for example, a tool consistently underperforming for a particular group, they need a clear route to bring that into governance discussions so the issue is visible and can be acted on. For the CEOs listening, who feel that AI is important but also overwhelming, what are the
Starting point is 00:10:43 first concrete actions they can take? I would highlight four. First, map your current AI footprint. Even a simple inventory of where AI is used, who approved it, and who owns it will usually reveal surprises. Second, designate or formalize an AI governance forum. It might start as an extension of an existing digital or clinical governance group, but it needs explicit membership, scope, and cadence. Third, align with privacy and legal leaders on a small set of principles for AI-related data use and vendor
Starting point is 00:11:16 contracts, so you are not reinventing the wheel with every new tool. Fourth, embed AI into your leadership operating system. That means AI is on the agenda of existing executive reviews, portfolio discussions, and risk conversations not treated as a side project. When AI decisions live in the same system as other strategic decisions, it becomes easier to balance ambition with safety and trust. That is a very grounded list. One underlying theme I am hearing is that responsible AI use is not optional if you care about patient safety and institutional credibility.
Starting point is 00:11:54 I agree with that. The organizations that take structured steps now, clarifying ownership, governance, and feedback loops, tend to be better prepared as expectations from regulators, partners, and the public continue to evolve. They still face uncertainty, but they have a way to process it instead of reacting case by case. That is a great note to end on. To everyone tuning in, remember, AI in healthcare holds tremendous promise, but only when paired with smart governance, transparency, and a commitment to equitable care. Until next time, stay curious and stay informed.
