The Bridge with Peter Mansbridge - Should Artificial Intelligence Be Regulated?

Episode Date: March 6, 2023

This is an important discussion about artificial intelligence -- what you should know and why legislators seem so slow-footed on dealing with the issues AI brings. Conservative MP Michelle Rempel Garner is trying to raise awareness on something that is happening very quickly in our world, perhaps too quickly.

Transcript
Starting point is 00:00:00 And hello there, Peter Mansbridge here. You are just moments away from the latest episode of The Bridge. Artificial intelligence, what do you know? Is it not very much? Well, if so, don't feel bad. Just look at government. A primer, coming up. Peter Mansbridge here in Stratford, Ontario. And yeah, we're going to deal with something a little heavy today. AI, something we haven't talked about. Well, we have kind of talked about it, referencing it every once in a while. But today we're going to dig a little deeper, and you should care about this. And I think you will enjoy this conversation I'm about to have. But first, I got to tell you a little story about what happened to me on the weekend. You know, on,
Starting point is 00:01:00 as the weekend started, at least here in southwestern Ontario and around the Stratford area, it was pretty ugly. A lot of snow. You know, it melted pretty fast, or it is in the process of melting pretty fast. But there was a lot of snow on Saturday morning. It was the kind of day where you figure, you know what, I'm not going out. I'm going to do the odd jobs. I'm going to look around for something to either watch or read. And as it turned out, I started flipping through some documentaries on one of the streaming
Starting point is 00:01:34 services to see whether there was something I might find interesting. Now, you know, as I bored you with before, I'm kind of a child of the 60s and the early 70s, and one of my big music obsessions was The Band. I loved The Band. I just loved the sound of The Band. Still do. And so I suddenly saw this documentary called Once Were Brothers listed on one of the doc pages. And you know what?
Starting point is 00:02:11 I hadn't heard of it before, which is my fault. I should have heard of it, especially as somebody who follows The Band. You know, The Last Waltz is one of my favorite movies, documentaries, call it what you want, ever. And Once Were Brothers is really the story of Robbie Robertson. It's his story and his version of what happened to The Band, how it started, how it made its name, how it became so popular, how it broke up.
Starting point is 00:02:49 And so I watched this. It's a film directed by somebody I've known for more than 40 years, Peter Raymont, White Pine Productions. Good guy. I wouldn't say a friend. We're not that close, but we're certainly acquaintances. And, you know, brilliant in his own right at producing and directing films.
Starting point is 00:03:17 So I thought, well, I can't believe I hadn't seen this, but I'm going to watch it. Well, I watched it and thoroughly enjoyed it. Highly recommended. Really good film with some great parts in it that I just never knew existed. Anyway, you know, we know all the members of The Band, but the one who was of particular interest for somebody who lives in Stratford was Richard Manuel.
Starting point is 00:03:51 Richard was a pianist, keyboard player, and a great singer. Richard had, well, not an uncomplicated life. He grew up here in Stratford. You know, he sang in the choir, was involved in music things at school, and then made his way off into music history. But the complicated part of his life involved drugs and liquor and just, there were some ugly times. And it ended very tragically in the mid to late 80s when he committed suicide.
Starting point is 00:04:44 Richard is buried here in Stratford at the Avondale Cemetery. And I've been, you know, I remember one weekend a few years ago when Willie and I had watched The Last Waltz, me for, like, the 50th time. And I said, you know, one of the guys was from Stratford, and he's buried here at the cemetery. And we went out the next day. It was a nice, warm day, summery day. And we went out, and we found the marker,
Starting point is 00:05:19 a very plain marker, not a stone. Well, it's stone, but not an erect stone. It was just one that's placed in the ground. And we stood there for a moment or so and remembered The Band and Richard Manuel. So given that as the background, given the fact that I never intended to watch this film on that day, it was an accident. I just sort of bumped into it.
Starting point is 00:05:52 And, you know, I hadn't been thinking of Richard Manuel, but I was on that day. It just happened, right? So here's why I'm telling that story. Saturday was March 4th. If you look up Richard Manuel, when did he die? He died on March 4th. Now, is that coincidence?
Starting point is 00:06:25 Is that just like coincidence that led me to watching that film and thinking of him and thinking of his marker in the ground here in Stratford? I guess so. Just coincidence. Anyway, I tell that story for whatever it's worth. Now, on to artificial intelligence. So I want to have this discussion. I've thought about it a number of times over the past couple of months.
Starting point is 00:06:52 How do I have this? Do I bring some tech wizard in to talk about it? Or is there another way? Is there another hook to this story? Well, I found it. Thanks to Michelle Rempel-Garner, the Conservative Member of Parliament from Calgary. Now, I've talked to Michelle Rempel-Garner a number of times over the years,
Starting point is 00:07:20 and she's always been extremely accommodating for me and for my time on a variety of different issues. But a couple of weeks ago, she wrote in her Substack piece, along with Gary Marcus, who's a tech wizard, a story about AI and why we should care and why we should really try to understand what it is instead of just using the term AI and assuming everybody knows what we're talking about. So let's have that conversation now. I think you'll find it interesting. Michelle Rempel Garner, Conservative MP from Calgary Nose Hill. Here we
Starting point is 00:08:05 go. So I'm going to begin with the title of your Substack piece. I guess it's a couple of weeks ago now, but it caused a lot of discussion, which is good. That's what you're trying to provoke. But the title was, Is It Time to Hit the Pause Button on AI? Now, the use of the word pause in that headline clearly indicates that you're not against AI. Your issue is, is it being used the right way? Or are there too many ways it's not being used the right way that we need to consider what we're doing here? Would that be a fair assessment? I think so. You know, I think a good place to start is that when people think of artificial intelligence, certainly, you know, people of my vintage and maybe older, you know, you think about, like, Terminator or science fiction, right? And
Starting point is 00:08:57 there's this sort of stigma around talking about it. Like, ah, there's nothing for government to get involved in here. Everything's fine. But the reality is that over the last few years, we've seen massive advancements in the technology that powers different types of AI. And there's been a lot of good from the advancements, but there are also concerns about the advancements having harm, or potentially having harm, to humans at this point. And this technology is operating and being developed in a situation without really any sets of rules, and we're leaving it to big tech companies to sort that out. I think that there have been some incidents recently, as I outlined in my piece, that have really led people, ethicists, across political stripe to say, wait a second, what's going on here? And maybe we should have a discussion about whether it's ethical to deploy these types of technologies that could have an actual detrimental effect to humans, not just
Starting point is 00:10:05 the economy, but like an actual detrimental effect on human behavior, mental health, other things, without having those guardrails in place. And that's really what the article was supposed to do, is to say, like, as you said, spur discussion on where should government be? And maybe we do need to hit the pause button given how quickly things are unrolling and the lack of guardrails. Well, I want to get to some of the examples you mentioned in a sec, but I want to preface it with, you know, you're a Conservative, right? And to hear you say, should government be more involved in this, or involved at all in this? Because they don't seem to be involved at all right now. Is that a hard leap for you to make? Or is this like a special case?
Starting point is 00:10:53 You know, in my time in office, I've seen disruptive technological advances occur where Parliament and political parties don't take policy positions on an issue until after consequences have happened. And the piece was not necessarily to take a prescriptive policy position, but to say, whoa, you know, what is the role of government in this position? And that shouldn't be hard for any legislator to do. I think just asking questions, you know, really pointing out that there are some risks so that political parties can start looking at the issues and saying, well, what are our policy positions on these issues? I think that's something every legislator should do, regardless of political stripe.
Starting point is 00:11:44 So, no, that wasn't hard. But at the end of the day, I think this is something that, in very short order, you are going to see political parties have to take positions on because of the massive implications that it has both for human health and, frankly, the economy. Is it true that there hasn't been any discussion on AI that you're aware of in the House? The government has put, I think, a very broad and sort of early stage piece of legislation forward that kind of touches on some things,
Starting point is 00:12:22 but it doesn't really address this new paradigm of AI, like the foundational AI technology behind things like ChatGPT, large language models, for example. And I do think it's worthy of parliamentary study. I want to give a shout out to my colleague, Colin Deacon in the Senate, who sort of is of the same opinion that there is a role for parliamentary committees to study these types of issues. I know they're on top of that. But really, no, I would say, you know, compared to other issues, this has gotten not a lot of attention at all. And it's probably because it's just not front of mind for the electorate right now. And also because there's a big lobby behind this type of technology too, right? Um, so certainly something. Is that apparent on the
Starting point is 00:13:13 Hill, the lobby? Um, or is it more subtle? I think you see it more on the American side right now. Um, and certainly in the European Union; the European Union is considering a bill, or they're crafting a major piece of legislation around AI use and development. And Canada, you know, sometimes is behind those markets because we are a smaller market by virtue of size of population. But I expect that to certainly ramp up, particularly as parliamentarians start turning their attention to this issue, for sure. Would it be a lack of will or a lack of knowledge? I think a lack of knowledge. I don't think there's malice on this. And also, like, frankly,
Starting point is 00:13:57 there's a lot of, you know, as you discuss on your podcast a lot, there are a lot of other front burner issues right now that are seizing Parliament's attention. And, you know, in Parliament, oftentimes you're seized with issues that are short-term issues. I think that this is one of those issues that might not be on the front burner right now, but it has a longer term impact. And again, like, there are sort of two baskets of issues here. There is one where I see a lot of positivity on the economy. There's a lot of positive economic impacts that this can and will have. But then there's this whole other aspect of how does this impact humans? That's why I gave the analogy of regulating pharmaceuticals in my piece, which I don't think that Parliament
Starting point is 00:14:45 has paid attention to yet, but will have longer term impacts for sure. Okay, take me down that path, the concern path, where you see the potential for real problems. Sure, so examples that have happened in the last couple of weeks.
Starting point is 00:15:02 So Microsoft, as I outlined in my piece, they released a new version of their search engine, Bing, that had a chatbot that was powered by this new generation of technology, these large language models attached to it. And immediately there were incidents where people were interacting with this chatbot. And I have to just underscore, this is not like what most people think about with regard to chatbots. This is a technology that has the capacity to parse language, parse huge, you know,
Starting point is 00:15:37 sort of unfathomable size data sets and come up with intelligent responses. So there were people interacting with this chatbot, and, you know, a New York Times journalist talked about how it told him to divorce his wife. There's a lot of misuse of information. There's a lot of concern that this could be used to produce, you know, really believable deep fakes, that it could be used
Starting point is 00:16:07 to manipulate humans into behavior that, you know, could have a lot of bad impacts. And there's examples of this. And Microsoft saw this. There were reports that Microsoft knew about this during the development of this, and they chose to release it anyways. So that's a concern, right? And I know that some people will listen to this and go, like, oh, who cares? It's a chatbot. But when you think about, you know, somebody who, let's say, in one example is perhaps, you know, struggling with a mental health concern,
Starting point is 00:16:41 and you have an untested chatbot that might not give them great advice, that's a concern. I have concerns about, you know, we're in the middle of a lot of concerns about foreign actors influencing our political process in Canada, state actors using artificial intelligence to create really believable profiles on social media and manipulating people into behavior that acts against the national interest. Those are, like, two very small examples of the paths that we should be considering as legislators. And the fact that we're not right now, it just, it keeps me up at night. Maybe I'm wrong. Maybe there's nothing to be concerned about here, but certainly experts from across political stripe and brighter minds than mine have been raising this for some
Starting point is 00:17:32 time now. And I think we have a duty to respond. As you mentioned, there've been a lot of examples just in the last couple of weeks, last month or so of these potential, well, it's more than potential, these negative influences on the way AI is being used. And I think, you know, for some of us, me included, who are kind of new to this game, what we have to keep reminding ourselves is we're not necessarily talking about people that are doing this. It's computers that are doing this. That's such a great point.
Starting point is 00:18:05 Well, drive it home for me. Well, like, and I'm sorry to interrupt you. I just, that is exactly it. Like, this is the technology behind these new systems. Some experts say that they're concerned because they don't actually know how it works, right? And these systems, again, my background's in economics, I'm not a computer scientist, but these
Starting point is 00:18:33 models are trained on data sets that, when the public's interacting with these interfaces, they don't know what it's been trained on. And to your point, this is not a human talking to a human. This is not a human that's subject to rules talking to another human. This is a computer, you know, this is technology that is rapidly approaching. It's not there yet, but we're getting to a point where a lot of experts say sometime in the future, this technology could exceed human cognitive capacity. And then what, right? If there aren't safeguards in place, it's what we call, or the experts call, AI alignment, where this technology is developed with guaranteed safeguards to ensure that it acts in a human's best interest. If that's not there, there's, you know, there's
Starting point is 00:19:27 big concerns. And that's entirely different, to your point, than a human driving a fake account on Twitter. You know, there's accounts on Instagram, Peter, where it's images that are generated by AI of a person that doesn't exist, but it's interacting with the public like it's a person. And it's hard to discern. It's becoming very hard to discern whether or not it's a real person. And it has the capacity of communicating with humanity. That's an entirely different kettle of fish than what we've been dealing with before. And, you know, it's just, I think it's something we should be talking about, for sure. And it's not just a figure that is an unknown or doesn't represent anybody in particular,
Starting point is 00:20:20 it's also figures who are known that have been manipulated by AI to be saying things and doing things and trying to influence people on certain issues. Known figures, and it looks 100 percent real. Absolutely. I mean, I'm sure people listening will be familiar with the term deep fake. So, you know, there can be AI generated videos of public figures that are doing and saying things, and it looks really real. And that technology is advancing every day. You know, to me, even if somebody wants to park the government regulation discussion, how are we teaching the public that this is out there? What tools does humanity need to understand this information that we're being presented with and make good decisions? I think it would be hard to argue that we got it right with the deployment of social media. Now we're adding this entirely different context where you have technology interacting
Starting point is 00:21:40 with humans and with the ability to parse language, having been trained on data sets that use the last 10 years of politics, Peter. So I don't know. I hope that more legislators start talking about this. Well, listen, you know, as you say, it's moving so rapidly and what they're doing already, what we've witnessed just in the last little while, you can only imagine the impact that can have on an election campaign. Absolutely.
Starting point is 00:22:10 They better get their act in gear pretty fast. The idea was an AI that was kind of a step behind human intelligence in terms of how fast it could react, how smart it could be, basically. And then the next step of AI was kind of on a level playing field with human intelligence. And now we're into this kind of super AI, which could be as much as a trillion times smarter than the human mind. And it's happening that quickly, that, you know, pause, geez, pause may not be fast enough, you know, to get hold of this. So you're talking about something that people refer to as strong AI or general intelligence. So that is, you know, when you hear people talking about an artificial intelligence that exceeds human cognitive capacity, those are the terms that are applied.
Starting point is 00:23:37 you'd be hard pressed to find an expert now to say that we're never going to get there. And that's a big change. There are a lot of people now with the release of these large language model-based AIs that are saying that we're getting closer. And the other thing is there's a lot of investment now. Like if you look at how much capital is being poured into AI research and also how this is now being a factor
Starting point is 00:24:02 in sort of geopolitics and trade, I think that's something we have to prepare for, that eventuality. We'd be crazy not to, frankly. And I think it's also really important for us to be precise in what we're talking about. So we have this challenge, in that I wouldn't classify ChatGPT or large language model AI as strong AI yet. But the reality is, it's like, ChatGPT, I think, passed some medical exams. I think it passed the MBA test. So it's doing things that humans do. And, you know, you and I could get into a long rabbit trail about, you know, what is consciousness, what is cognition. But at the end of the day, this is, as I've written about,
Starting point is 00:24:53 you know, we've never been faced with the reality of having to deal with something that is, or interact with something that is, perhaps greater than human capacity in terms of cognition or intelligence. This is on the horizon. So what does that mean? And, you know, Parliament isn't great with dealing, sometimes, with really minor, mundane challenges. That's a big philosophical question that people would even argue around the parameters of, and rightly so. Um, but we better get with it because it's coming. And I just, it worries me that's not even on the table right now for discussion. Well, it's coming just like the proverbial train coming down the
Starting point is 00:25:43 tracks, so they've got to get out at some point. We're going to take a quick break. We'll be back with Michelle Rempel-Garner and the topic of AI right after this. And welcome back. You're listening to The Bridge right here on Sirius XM, channel 167, or on your favorite podcast platform. Our guest, Michelle Rempel-Garner, the Conservative MP from Calgary Nose Hill.
Starting point is 00:26:17 That's right, isn't it? That's right. Okay. Let me ask it this way, because I'm sure that some of our listeners are sitting there saying, well, this is really interesting, but you know what? It doesn't really apply to me. I don't go online looking for this stuff. I just use, you know, my social media base for, you know, talking with my friends and that's kind of it. And then I kind of follow different news formats on, on Twitter. But, you know, this is interesting, but I'm not going to worry about it. Why should they be worried about it? Well, I think that there's, you know, let's,
Starting point is 00:26:55 let's start with the acknowledgement that AI is being used in our economy and in society massively right now. Right. Like, we're seeing a lot of AI advancement, and in good, positive ways. You know, there's a lot of deep learning models that are being deployed in the medical field to help doctors with diagnosis or diagnostics, for example. There's examples. I mean, I was sitting in a committee where we heard that the Department of Citizenship and Immigration was
Starting point is 00:27:30 using AI, or, sorry, to use AI, to screen applicants to come into the country. Um, there's a lot of, like, I mean, I could spend the entire podcast talking about examples of how AI is being deployed right now. This new iteration or this new generation of technology that's sort of powering ChatGPT, the Bing search engine, etc., the concern lies in that it is a big step from what we've seen in the past. It's getting closer to that point of exceeding human cognition. And it is also being deployed in the economy really quickly. It's also being, you know, deployed to hundreds of millions of people. Like the ChatGPT app, if you will, it's been downloaded, or not downloaded, used by,
Starting point is 00:28:27 you know, millions and millions of people in a very short period of time. Whereas other types of platforms, like, let's say, Facebook or Instagram, it took years to get to that level of adoption. This is happening overnight. So when you've got something that could have impacts on, you know, human behavior, particularly health, and potentially, you know, as a Conservative, I'm always like, this could have a lot of positive impacts for the economy. But my gut says, given what we've seen in the last couple of weeks, that people should be, you know, at least thinking about how this technology could impact their life. And you're going to see it, like, Peter, if people are hearing about this for the first time, it's not going to be the last. And they're going to be interacting with it even within the next month, two months, three months. We're early days, and I kind of feel like, you know, that whole Cassandra argument in Greek mythology, where you say something and no one believes you. But my gut sense on this says that everybody should be having a
Starting point is 00:29:46 look at this and asking, what is the role of government? How are we going to adapt to this? Is government the right institution to be making the rules on this? Probably not, and I'll tell you why, and it's not just my political proclivity. It's like, in Canada, our government has had a hard time getting the basics right. I'm not saying this as a partisan, it's just, you know, we've had a very hard time in the last few years, for sure, looking at things like fixing the health care system or addressing, you know, basic infrastructure. You overlay the lack of nimbleness in our government systems, the slowness of our bureaucracy, and you apply that to a technology that is so rapidly changing, it's so
Starting point is 00:30:33 rapidly being deployed, and I think that we're going to see major friction there. And I understand why people who are working in the space are like, well, government's the wrong modality to deal with this, it should be corporate ethics. And a lot of people will cringe when you're saying that, but think about how long it takes for government legislation to wind its way through the House. Even just educating legislators or bureaucrats on the context takes time. And I mean, that's part of my angst on this, is that things are so rapidly changing. And, you know, I don't think that there's a lot of people, both, you know, on the political side or potentially within the bureaucracy, that are
Starting point is 00:31:18 kind of looking at this in a holistic way and a nonpartisan way. I just don't want us to be in a situation like, let's say, the blockchain space, where you have, you know, we had like a major disruption in, you know, technology, around certain types of products that came out of there, and the government was so far behind, it became politicized and people were hurt. So I'm hoping that government, to your point, recognizes that it probably is an outdated modality to be able to deal with something that's this advanced
Starting point is 00:31:55 and is progressing so quickly, but that doesn't absolve us, the collective us, of our responsibility of at least talking about it and making that happen soon. It seemed that it took governments so long, and they're still trying to figure out basic social media issues. This one, like, makes that look like kindergarten. Um, yeah. You know, as you acknowledged, certain businesses, corporations, industries, even the media, through different media organizations, have been working on trying to tackle their own rules and regulations around, you know, things like ChatGPT and the transparency and ways it can be used.
Starting point is 00:32:43 So it does make you wonder whether it's, like, it's too late for government. They missed, you know, they're still standing at the station, the train left. Yeah. You know, what I've seen, like, certainly in the last few years, in terms of government intervention on new technology, it's mostly been proposals to regulate in such a way that outdated forms of technology or systems of business can continue to operate. And, you know, I'll stray into slightly partisan territory.
Starting point is 00:33:27 The Liberal government's bills around news media, so Bill C-18, and regulating online content, Bill C-11: to me, I look at those as responses to lobbies that are like, look, we want to keep our existing modes of business, even though the public has largely moved on. We need to respond to our shareholders and try to squeeze a little bit more value out of it. And that's what the government's managing to do with those bills. If we take that approach with this, like, it's not even in the same universe. Yes, it is a disruptive technology, yes, it is a generative technology. I think it's going to disrupt a wide variety of industries in ways that we don't even know yet, and probably for the good in some, maybe not so good in others. But again, to what you said
Starting point is 00:34:26 earlier, that's just so critical: like, this is not the printing press. Like, I mean, it is a disruptive technology in that sense, but the printing press is a tool that humans were using. The printing press didn't, or, you know, humanity wasn't saying that all of a sudden a printing press in five years could create and print its own books, right? This is technology that experts are saying, at some point in the at least medium term, that's where a lot of experts are saying now, will exceed the capacity for human cognition. That is something that people need to wrap their brains around, that we've never as a society had to wrap our brains around before. And, you know, governments, I mean, yeah. Well, you know, in some ways, isn't this the fear we've always had since computers started to become, you know, a part of our lives? And it was before your time, but not before my time.
Starting point is 00:35:33 But it was the 60s when they started and they were huge, right? They were, the computers were, as big as a building. But you still had this fear that at some point they were going to take over, and there were movies about all that. And, you know, we seem to have come to that point where it's not just a movie about the future. This is the future. We're in it. You know, somebody wrote to me the other day and said, ah, it's no different than, you know, when calculators were allowed in schools. But I guess what you're saying is that the calculator is more like the printing press. It's not like what we're talking about here. Yeah, and I'm sure people are going to be listening to this, Peter,
Starting point is 00:36:15 and be like, oh, Rempel and Mansbridge, they're Luddites, they don't want advancement. That's not what I'm saying here. Like, you know, the digital revolution, the information technology revolution, it was an explosion in growth that, you know, arguably led to a lot of prosperity for our species, right? I am sure people could argue with that as well, too. But there was a lot of good that came from it. But we're not talking about a simple economic modality transition here. To your point, to nail it, to drive it home, we're talking about, we are on the cusp, potentially, of developing something that
Starting point is 00:36:56 thinks for itself, and thinks for itself in ways that greatly exceed our capacity for thinking. That's way oversimplified. I'm sure AI experts are going to be like, oh, quibble with that. But at the end of the day, that's what it boils down to. The timelines are, you know, up for debate, but we're there.
Starting point is 00:37:15 Like, this is something, to your point, that is, this is intelligible. It's within our grasp. And I just, we haven't wrapped our minds around that. And I think what you said, too, is really important. Like, I feel that even talking to you about this and being on a podcast, I'm like, well, you know, people are going to be like, oh, it's Skynet, the robots are going to take over the world. Um, I do, like, this is such an out-there question for what our frame of reference is. But if we develop technology that can think for itself, essentially,
Starting point is 00:37:55 and that exceeds our capacity for cognition, what's to stop it from manipulating us into behavior that allows it to exceed any guardrails that we may have ostensibly put in place, to do things that aren't necessarily in our best interest? Right? We've never had to deal with that as a species before. And I'm listening to myself talk here and going, like, oh, if I'm listening to myself, do I sound crazy? But this is something that's on the table that legislators have to deal with. And, like, I want to be optimistic here. I want to say that if we think about this and do it right, this is something that benefits our society, that we see a lot of economic growth, you know,
Starting point is 00:38:42 dissolution of social inequity, whatever. But if we do it wrong, I don't know. I just, I want to be able to live with myself to say, at least I tried to talk about this. And you have, and it always comes back to that opening word that we had in this discussion, which was pause. I mean, that's what you're suggesting. The issue, I guess, around pause in the world we live in today,
Starting point is 00:39:09 it's awfully hard to pause. It's awfully hard to get the players who are involved in this to pause. It's just everything moves so quickly. Here's your last question, or my last question. If you could suggest or propose one big thing on this issue, one big policy, what would it be? Education,
Starting point is 00:39:33 first and foremost, I would love all of my colleagues, you know, across political stripe, to just become aware of the advancement of technology that we've seen in a really short period of time. Senator Deacon and I are working on an initiative to do that quickly right now. And, you know, we'll have more to say about that in the next couple of weeks, I hope. But then, you know, just to close and to circle back to the last question on pausing, you know,
Starting point is 00:40:03 the piece that I wrote, I wrote it with an expert in the field, Gary Marcus. You know, we were talking about how, you and I were just talking about how government modality, like government might not be able to respond quickly, but we do have precedent for how we've dealt with new technology that we know has an impact on humans. So this is why I gave the example of the pharmaceutical industry, right?
Starting point is 00:40:27 When people do research, writ large, publicly funded research that, you know, could have an impact on humans, it has to go through something like a research ethics board, right? We should be asking if big tech companies that are working on this have the same sort of framework, where, if they're doing research, like, let's let research happen, but maybe deploying the technology before it's shown to be safe is somewhere we could apply an existing paradigm, like a research ethics board or, you know, a clinical trial system, essentially, right?
Starting point is 00:41:05 Where you've got, like, pharmaceutical companies ostensibly having to show that a product is safe for human use for its prescribed purpose, in different stages, before it's deployed. I think that those, you know, are things that government could look at, but maybe I'm totally wrong. But this is why I say, first and foremost, education. No political party, including my own, has, I think, come out and taken positions yet. So I don't want to, you know, tie anybody to that.
Starting point is 00:41:36 But we have to very quickly come up to speed on what the capacity is and what the potential risks are and what experts are talking about. And I know that that's sort of phase zero, but it's something that I don't think exists right now, and that's mostly what I'm calling for. We're going to leave it at that for this day. It's a fascinating discussion, and, you know, I'm really glad you wrote this piece because it does make more people aware of what's going on, including me. And we got to get involved and we got to be educated, as you well say. But the idea I liked most of what you said was the Rempel Garner Mansbridge podcast.
Starting point is 00:42:15 Now that's something we've got to figure out how to do. I am in, a hundred percent. Thanks very much for this. It's been great to talk to you again. Thank you. Well, there you go. Our conversation with the Conservative MP from Calgary Nose Hill, Michelle Rempel Garner. She mentioned a little while ago that she was working with a senator,
Starting point is 00:42:39 and that senator is Senator Colin Deacon. He's from the Independent Senators Group. He was appointed by Prime Minister Trudeau in, I think it was 2018. He's from Nova Scotia, and he's got a background in, well, not in AI, but involving a lot of tech stuff and making product development better. So, an interesting person to be working with Michelle Rempel Garner on all this. So there's a little background for you. And if you're looking for that article, just Google Michelle Rempel Garner and hit the kind of news button, and you'll get a link there eventually
Starting point is 00:43:24 to her Substack piece. Once again, it was titled, Is It Time to Hit the Pause Button on AI? You might just try Googling that, and I'm sure the article will come up. Okay, we've entered that field of AI. I know, for those like myself who are total neophytes on this subject, some of that might have been hard to follow, but we did our best to try to keep it simple and straightforward, and to lay out what the challenge is right now, and the good and the bad, potentially bad, of AI. So it's a fascinating topic and it's one that is going to have an enormous impact on our world
Starting point is 00:44:06 in the years ahead, if not the months ahead. As I said, things move fast. Okay, a quick end bit for you before we wrap up today's edition of The Bridge. Very different. Have you ever, you know, sat on your front porch or on the balcony of your apartment or in your backyard or on the dock of, you know, some lake in the summer, looked up at the sky at night and seen the moon and thought, you know what? I'd really like to know, what time is it on the moon?
Starting point is 00:44:51 Be honest, have you ever thought about that? Well, let me tell you. They're thinking about it right now, those who are involved in various lunar missions. They're trying to determine what time is it there? What time zone should we use for the moon? In the past, the time zone that has been attributed to the moon has usually been or has always been whatever the time zone is in the country
Starting point is 00:45:20 that is launching a mission to the moon. Well, that's not exact enough for the space agencies these days. They want something that's much more precise. And so they're talking with each other, trying to sort these things out. But you know what? This is a little article that I found in, well, it was ABC News that put it out. Here's one of the challenges that all of these people face. Because time is different on the moon than it is here.
Starting point is 00:46:00 Did you know this? Clocks run faster on the moon than on Earth. They gain about 56 microseconds each day. Further complicating matters, ticking occurs differently on the lunar surface than in lunar orbit. Aren't you fascinated by this? Come on, you didn't know that. You didn't know that. You've come to The Bridge to find these exciting things out. And perhaps most importantly, lunar time will have to be practical for astronauts there, noted one of the space agency
Starting point is 00:46:43 officials. NASA is shooting for its first flight to the moon with astronauts in more than half a century in 2024, with lunar landing as early as 2025. So there you go. You look up at the moon and you say, what time is it up there? Now you know there are greater minds than ours working on the answer to that question. That's it for this day on The Bridge,
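For anyone curious how that tiny number adds up, here is a minimal back-of-the-envelope sketch in Python, assuming only the roughly 56-microseconds-per-day figure quoted above; the time spans chosen below are my own illustration, not anything from the episode or from the space agencies.

    # Rough illustration: cumulative drift if a lunar clock gains about
    # 56 microseconds per Earth day relative to a clock on Earth.
    # The 56-microsecond figure is the approximate value quoted above;
    # real lunar timekeeping also has to handle surface-versus-orbit differences.

    MICROSECONDS_GAINED_PER_DAY = 56  # approximate figure from the episode

    def drift_seconds(days: float) -> float:
        """Total accumulated drift, in seconds, after the given number of Earth days."""
        return days * MICROSECONDS_GAINED_PER_DAY / 1_000_000

    for label, days in [("one week", 7), ("one year", 365.25), ("ten years", 3652.5)]:
        print(f"{label}: about {drift_seconds(days) * 1000:.1f} milliseconds")

Run it and you get roughly 0.4 milliseconds after a week, 20 milliseconds after a year, and 200 milliseconds after a decade, which sounds tiny until you remember that navigation and communication systems generally need clocks to agree to within microseconds. That is part of why borrowing an Earth time zone is no longer considered precise enough.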
Starting point is 00:47:12 your Monday episode, Artificial Intelligence. That was our topic. Tomorrow, Brian Stewart comes by. We've got some interesting questions on Ukraine this week. Starting off with, do you care anymore about Ukraine? After a year, are you sort of okay enough on Ukraine? That's a legitimate question. We'll ask Brian that one tomorrow, as well as many others.
Starting point is 00:47:46 Wednesday, Smoke Mirrors and the Truth. Bruce will be by. The latest on the election interference from foreign entities, in other words, China, will be high on the list, I'm sure. I also want to ask him a question about climate change, which I'm told we don't talk about enough on this program. Thursday, your turn. So get your cards and letters in.
Starting point is 00:48:17 Get them in now. The Mansbridge Podcast at gmail.com. The Mansbridge Podcast at gmail.com. And the Random Ranter returns. Quite a few letters on the hydrogen rant he did the other day. On Friday, Chantal's back from Iceland. Hiked across. Thanks again to Susan Delacourt, who was terrific last Friday.
Starting point is 00:48:41 Lots of you wrote in about how much you enjoyed hearing Susan. So she'll be back at some point too. But this week, Chantal returns. So that's it. That's a snapshot of the week ahead. I'm Peter Mansbridge. Thanks so much for listening. It's been a treat talking to you.
Starting point is 00:48:57 Now, are you sure it's me and not some chatbot? I don't know. Talk to you again in 24 hours.
