Orchestrate all the Things - The EU AI Act effect: Background, blind spots, opportunities and roadmap. Featuring Aleksandr Tiulkanov, AI, Data & Digital Policy Counsel

Episode Date: June 19, 2023

The EU Parliament just voted to bring the EU AI Act regulation into effect. If GDPR is anything to go by, that's a big deal. Here's what and how it's likely to affect, its blind spots, what happens next, and how you can prepare for it based on what we know. Article published on Orchestrate all the Things

Transcript
Starting point is 00:00:00 Welcome to Orchestrate All the Things. I'm George Anadiotis, and we'll be connecting the dots together, with stories about technology, data, AI and media, and how they flow into each other, shaping our lives. The EU Parliament just voted to bring the EU AI Act regulation into effect. If GDPR is anything to go by, that's a big deal. Here's what and how it's likely to affect, its blind spots, what happens next, and how you can prepare for it based on what we know. Thank you, George, for inviting me to this podcast. I'm very happy to be here with you.
Starting point is 00:00:34 As a matter of background, I would say that for a huge chunk of my life and career, I've been a commercial contracts lawyer, turned more recently into a tech lawyer, thanks to my second legal education, which I was getting at the University of Edinburgh from 2015 to 2018. I eventually went from working for large transnational corporations on their commercial projects to working on those issues with my business clients, advising them on data protection, and in particular European data protection. At that point, GDPR was already upcoming in draft form, and by 2018, of course, it was necessary to prepare for it for whoever was getting ready to enter the European market, let's say, or was going to continue operating in it. So we had done a lot of projects which were data protection related, and then eventually machine learning related projects as well. So that's how I came to know more about AI,
Starting point is 00:02:17 I mean, thanks to my second legal degree and my practical involvement in business projects at that time. And then a little bit later, I got more into the policy side of things, and I started working initially in Russia, but then more recently at the Council of Europe, on matters relating to digital regulation, regulation of artificial intelligence and all things data. And this is what I'm currently still doing, although not at the Council of Europe anymore, because of the political repercussions which happened to all the Russian nationals, you know, last year. So I'm currently in Strasbourg, working in my private capacity
Starting point is 00:03:17 with clients who are interested in making sure that their operations actually comply with the law. And these are usually operations related to machine learning models development, more generally data protection compliance, all these kinds of stuff, and also some policy-related issues to the extent they usually are concerned with the upcoming EU AI Act and all kinds of digital regulation, which is either already in effect or coming up in Europe primarily. And I am also, of course, post regularly on LinkedIn. I have some following there and I have a newsletter on AI and data and digital policy.
Starting point is 00:04:22 And you might have been following me already as well and I'm sometimes also active on Twitter. Also we're discussing the issues related to digital policy and primarily AI regulation related nowadays. Okay thanks for the intro and I think what you do is sort of on the intersection of AI practice and data practice and laws and regulations. So that's a sort of rare location to occupy and I guess that because of that you must probably have quite demand because this is a topic that's on lots of people's and organizations' agendas these days with the explosion in AI and particularly generative AI as they call it in the last few months. And I also have to say that this is also how I came across your
Starting point is 00:05:27 your work as well so through your LinkedIn presence that you just mentioned and it's what attract my attention and hence I thought that it would be an interesting thing to do to have this conversation but let's let's actually take a step back because well based, based on what you said, I think it may be beneficial to sort of clarify the landscape a little bit and just start by talking about the relationship between those broad two areas that you already started touching upon, so data and AI. And what do we mean when we actually talk about AI? And I think these are also things that, to some extent at least, are also mentioned in some of the regulations that you briefly touched upon.
Starting point is 00:06:17 So GDPR and the EU AI Act. So to start on my part, I think the best way I could possibly describe that relationship would be, and I know it's a bit unorthodox because I'm going to refer to something that I'm not actually able to show in the podcast, so I'll just have to describe it, but I think it's a well-known image and something that people can actually easily find. So today, when most people talk about AI, what they really mean
Starting point is 00:06:52 is machine learning. And actually, even in some cases, a specific subdomain of machine learning called deep learning. And while it's something that I personally don't particularly agree with, I've come to accept it as a fact of life, but I think it's also beneficial that if sometimes we just clarify the boundaries, so to speak. So if you want to visualize it, you can think of the broad area of AI as a big set
Starting point is 00:07:23 that has one subset, which is machine learning. And then within that subset, you also have the smaller subset, which is deep learning. And the reason why people tend to conflate these categories is because most of what's called AI today actually really refers to machine learning and typically this deep learning subset of it. And what machine learning and deep learning have in common, they vary greatly in the particular techniques that they apply, but what all of these techniques have in common is the fact that there is great reliance on data. And so basically,
Starting point is 00:08:00 the common workflow that people adhere to in order to produce machine learning models, which in turn power machine learning systems and applications, is that they need to find good data sets, they need to clean them, and they need to process them through a certain pipeline, apply different types of processing, different types of algorithms you may say, and then in the end comes out a model that they may use to apply new datasets and produce predictions or classifications or whatever it is that the system is designed to do. So in that respect there is a very direct link between the use of data and datasets and the use of AI systems. So I guess my question for you then is, in your experience and the
Starting point is 00:09:00 organizations that you deal with, are those distinctions clear for people? Do they see that relationship? Well, not always, not necessarily, because it depends on from what viewpoint you are trying to approach this issue, let's say of understanding what ai may be may not be and in particular in my domain where i'm most interested in is regulation where it actually makes sense for policy reasons to to apply you apply a more strict technical definition of those
Starting point is 00:09:50 kinds of systems. Or maybe a definition which would not be necessarily strictly technically accurate, but would cover the kind of systems which are concerning or interesting to policymakers in terms of, you know, how and what they are willing to regulate. And I think we are having those kinds of discussions around the definition of what, let's say, the policy makers are interested in regulating all the time since the beginning of those dialogues on what we want, let's say, regulate. And there are different schools of thought, I would say, and the positions on this I think differ
Starting point is 00:10:46 depending on whether you are coming from an industry side let's say or you know policy side which would you know suppose that you might be lobbying for some definition which would be let's say more beneficial to your line of business, right? And from the opposite side, you might be coming where you would be thinking about what might be the elements of defining the definition of the AI system, which we want to regulate. Maybe we just want to consider it as another product, sometimes a consumer product, sometimes, you know, a business product,
Starting point is 00:11:34 but something which is more relevant for, you know, the actual activities which are regulated. And then, of course, there are then different approaches. And some people would be arguing for, let's say, a narrower definition of the AI, and they would be arguing then, they are arguing for, including into the scope of this definition only let's say the machine learning systems perhaps right and some others might be willing to say that actually if we are willing to
Starting point is 00:12:16 set the the requirements for those kinds of systems which are in in their base on the basis basically kind of systems which automating automating some processes which originally used to be done by humans we might be willing to include the rule-based systems like good old-fashioned AI into the scope as well. If we want to regulate AI as a product, as a consumer product, so it may be more sensible in this sense, and some policymakers would say that, to make a definition more broad. But then again, apart from that there is a
Starting point is 00:13:09 concern that we have very different definitions let's say adopted by different policymakers in different countries they might there might not be a sufficient overlap in what each nation considers AI in terms of international commerce, international activities which are now being conducted all over the world, over the internet and other networks, you know, there one might be concerned if the definitions are very different from each other. And so, we have seen this materialize in the fact that, for example, in the draft EU AI Act,
Starting point is 00:13:58 most recently they have come up with the idea to incorporate an OECD definition. So the Organization of Economic Development has come up with this definition, I think, in 2019. And yeah, there are elements which the policymakers in Europe consider, yeah, it might be a good one. And it's more or less internationally aligned to an extent.
Starting point is 00:14:33 And so they have come up with the idea to use this one definition. And it doesn't mean that it's a perfect approach, but it might make sense from a policy perspective. And just to remind those who are not familiar with the OECD definition, it basically says that it's an AI system, which is machine-based. So this is one parameter. And for a given set of human-defined objectives, which is a second criteria, it makes predictions, recommendations or decisions.
Starting point is 00:15:11 This is the third one. Influencing real or virtual environments. That would be a fourth part of the definition. And the fifth one says that AI systems are designed to operate with varying levels of autonomy. So not necessarily fully autonomous systems. It might be partially autonomous systems. But that would still qualify for the purposes of this definition. So this five-pronged definition of OECD now is incorporated in the future European regulation on AI.
Starting point is 00:15:49 And of course, there are others as well. Okay, thank you. Indeed, the fact that there's a growing number of regulatory efforts or frameworks being brought into effect is a concern in terms of alignment and obviously definitions, as you pointed out. And that's one of the next topics that I had lined up for discussion. But before actually going there and doing a little bit of review of current regulatory efforts, I thought a good place to start would be what we call fundamental rights in terms of well, initially data and then also AI. And personally, a way I think about it is how these
Starting point is 00:16:39 rights and potentially obligations as well apply both to organisations, be it companies or public bodies or whatever else, and on individuals. Because I think that we have this kind of two-layered approach. And I think a good example to start examining that would be GDPR, which is a regulation which is probably already known to most people. It's already in effect for a few years, and we already have some experience of how it has played out in practice. So let's start with that. Yeah, sure.
Starting point is 00:17:18 So, of course, there are some provisions in this general data protection regulation, which are directly relevant for the AI systems. And those may relate, of course, let's say, to the legal basis for processing the data, which is obviously needed if you want to be build any kind of you know machine learning model you obviously need data and you need to obtain them this data on some legal basis this is what GDPR says and there are different options and sometimes organizations obtain the personal data, which is often a part of the data set or the whole data set might be personal data, right?
Starting point is 00:18:18 They may obtain this data on the basis of consent, like when they are asking you to to agree or not agree with the fact that they are going to use let's say your data for machine learning right and sometimes this happens sometimes they directly ask you whether you are agreeing to that or not, right? But it's not necessarily the only situation and consent is not the only legal basis would lead to companies, let's say, using contract as a basis. So incorporating some provisions into the terms and conditions which are, let's say, provided for a certain digital product which we are using and then in the contract they will provide this right of theirs to use whatever data is being collected in particular for let's say machine learning purposes, right? And sometimes some organizations may argue that they have, let's say,
Starting point is 00:19:55 a legitimate interest, which is another legal basis under the GDPR, to collect and process data. And sometimes the organizations would argue that, yeah, sometimes they have a legitimate interest in collecting the information, maybe scraping it over the internet, you know, and combining it into datasets and then eventually training their machine learning models like we are currently seeing happening with one of them I mean
Starting point is 00:20:30 many major vendors or developers which operate in the space and of course everybody is now currently pointing in direction of open AI but I mean there are lots of other companies who are doing pretty much the same, so using web scraped data to train their models. Of course, then comes a question of whether and to what extent it is legal to use the data which were web scraped for those purposes? And even if it is legal, so what precautions have to be taken to take into account the interests of those people whose data are being scraped, right? So, because some data might not be accurate, you know, and it's a huge possibility for internet scraped data.
Starting point is 00:21:35 And sometimes they might not be willing to actually, they might not be happy with someone scraping their data and then incorporating them eventually into the machine learning model, which then might spontaneously disclose this information, also in connection with some other information which was not maybe directly disclosed. And, of course, the models which are being built may make inferences. So on combined pieces of information, they may make an inference about a certain personal factor, which was not originally disclosed, but based on the data set, they can say that if A and B is true,
Starting point is 00:22:32 then C is true as well, because we have prior history, and statistically it might be the case. It's a huge issue potentially in terms of, let's say, privacy issues because the people might be really affected, might not be willing to have this information disclosed and may not be happy of someone using it eventually and those inferences in particular. I think this is one of the main issues and very frequently discussed issues with matters relating to GDPR.
Starting point is 00:23:18 But of course, there is another thing there which is often being touched, is also governed by GDPR, is automated decisions. If somebody is being made a subject of an automated decision which is affecting his or her rights, there are certain requirements in place to make sure that those decisions can be, let's say, in certain circumstances, questioned and possibly appealed if there's a legally relevant decision which has been taken and it affects the rights of a certain person, right? There is this element. So these two elements primarily have been already addressed by GDPR.
Starting point is 00:24:16 And the issue is whether the companies are already, you know, applying those rules which are in place correctly and there could always be a discussion around that. So we have had something for some time already which covers that and even before GDPR of course there was a data protection directive in Europe and there was and there is a convention one way to the Council of Europe which covers even more countries
Starting point is 00:24:49 and relates specifically to automated processing of personal data Okay, well I have two two points to comment on what you just heard about GDPR. So first, well, legitimate interest is, you know, it's something which is quite, quite vague and quite broadly defined, if you could even claim that such a thing could be defined. I mean, if, for example, I have a company which is in the business of, well, selling marketing data, for example, I can claim that well, I do have legitimate interest in scraping, for example, people's contact details and creating a registry
Starting point is 00:25:31 and making a business out of that. So there is a sort of conflict there between legitimate interest and well, personal personal data rights, let's say. So that's one thing. And another thing that I wanted to touch upon was, well, you mentioned that GDPR already included the provision about people's rights to be informed about any decisions being made that concern them that are somehow automated. And well, first of all, I have to admit that I didn't know that, but knowing that now, I wonder that how can people actually know
Starting point is 00:26:12 in the first place that they have been subjected to this? Is there some provision that covers this as well? If not, then I think it may be a bit hollow in the sense that, well, yes, maybe you have been subjected to some kind of automated decision, but if you never know, you're never going to appeal. Yeah, sure. Of course, the provisions relating to automated individual decision-making, which also includes profiling, you know, so this
Starting point is 00:26:47 aspect has been also covered since the inception of GDPR. There are ways to make these rules work, and one basis for them to make these rules work is actually that there are other rights for the data subject like every individual whose data are being processed is a data subject. It's how
Starting point is 00:27:17 it's called in the data protection regulation. And actually you and me and everybody else has a right to know whether the company processes his or her information first of all and obviously there are situations where we provide this information to an organization, right? And then we are supposed to know about how our data is being used, right? So when we are providing the information, the regulations require that the data controller, which is the organization which is setting the goals, the purposes for data use,
Starting point is 00:28:06 informs us right from the start how it's going to use it and whether it's going to use, let's say, automated decision-making, right? So you are supposed to know that in advance when you agree to those, let's say, provisions which you enter into if you are entering into a contract with a certain entity, right? Or you may just browse the privacy policy of an organization which you are going to enter into a relationship with. And this way, you may be able to identify
Starting point is 00:28:43 whether they actually use automated decision making or not. This is in the case when they collect information from you directly. And when they do not do that, but they source their information elsewhere they actually have to really at the certain step inform you and this is not related purely to automated decision making but just generally that they have actually a data on file about you and they are going to use this in this in this or other way. And so the GDPR also provides the right of access to the information about that in particular. So there are some circumstances when they have to disclose that upfront. Some situations when having collected the information and if it's practical, this
Starting point is 00:29:48 is a caveat, they will contact you and inform you that they have started data processing and sometimes the law does not require that but requires them to inform you if you're willing to exercise your data access rights. So when you're inquiring an organization, they have to tell you about that. And, of course, the aspects related to automated decision-making is included into the general formula and basically what the GDPR in addition to this general and standard rules which apply in any case what the GDPR says about automated decision making it like the data subject has a right
Starting point is 00:30:40 not to be subject to this automated processing as long as it's fully automated so not to be subject to this automated processing, as long as it's fully automated. So if they are going to use a human and process human in the loop, this is wholly another issue in the regulatory sense. And there are also other exceptions, you know, where they have, let's say, collected information on you on the basis of the contract in which you entered. Then, of course, there are, you know, caveats and exclusions from this general rule, you know. So, yeah, there is a certain general rule to ask for your consent for automated decision
Starting point is 00:31:26 making but this this general rule has exceptions and yeah it's not always that you have you are asked to to consent to this automated decision making and especially yeah when there is a human involved in the loop which we assume normally has to you know have a role of you know of a fail safe in a certain sense you know making sure that if the decision is not fully automated and there is a human in the loop, he or she should really play a role of this fail safe to consider whether the processing is making sense, whether the data processing result is making sense and so on. So this issue is complicated, but yeah,
Starting point is 00:32:26 there are provisions in the act in the GDPR which cover that okay so if I were to summarize I would say that well when you enter into a contractual relationship or even if you do something much much simpler like I don't know sharing your data through a web
Starting point is 00:32:42 form it's good you should actually as an individual read the fine print, read the privacy policy. And then you should probably, if you actually care about what happens to the data, you should probably proactively send a request and inquire about that rather than expect to be notified because as you just laid out, there are lots of caveats and exceptions and you may not actually get notified i just have one quick question before we move on to the next part which is a review of the ai specific regulations around the world so you mentioned the human in the loop so i was wondering is that something that can actually be inspected
Starting point is 00:33:26 and verified by some third party? So if as an organization I claim that my processes include a human in the loop and therefore I'm not obliged to follow certain rules, is there any way for that to be audited? So there sure could be ways for this to be audited. But the thing is with auditing, if we are calling something auditing, I think the first example which springs to my mind at least is the audits which we have for, let's say, financial documentation of public entities. We have those rules which require actually the involvement of an independent third party, right? So some auditor is obliged by law really to come and see how the organization is doing financially, in fact, according to the documents, right? And they have documentary audits and so on. And then on the basis of this, they provide assurance to others other market players and
Starting point is 00:34:45 whoever is interested about the financial information of this company which is, it says it should be true that there is this independent trusted third party which can and
Starting point is 00:35:01 actually has to do that step for the benefit of society basically and the market more generally to come and actually verify what's going on there. And to provide assurance. For the AI systems, we don't currently have this general obligation to invite somebody, a really independent third party, to do a similar process, but in respect of the IT systems and AI systems, which are involved, right? There is no such obligations.
Starting point is 00:35:46 Maybe with the exception of some sectoral specific rules, which may already require that kind or similar kind of verification. Yeah, there might be processes in place, but they are not universal and they are not standard to the extent the financial third requirements at least to cover some subset of AI systems. for example, a regulation to that effect actually which will soon enter into effect in New York. The New York City has adopted their local law 144 which covers exactly that. They require to conduct third-party independent audits of AI systems but only a subset of those which are used in basically employment recruiting operations and even a subset of those where the policymakers thought that there might be a high
Starting point is 00:37:29 risk of let's say undesired bias and so on in those systems which are used to pre-screen or pre-select candidates for employment or for promotion and so on and so they have devised this requirement to have those systems, AI systems, audited by an independent third party. So I would say that, yeah, we already have at least one jurisdiction which has enacted those requirements for independent third party AI audits. And there are, of course, discussions to consider this in other jurisdictions. But if we, let's say, consider what is being provided in the current, at least, version of the draft EU AI Act,
Starting point is 00:38:20 which is going to be in effect hopefully soon in Europe, we don't see those third-party audits, independent audits yet there, although we have conformity assessments. We have conformity assessments there which are supposed to perform the same role, which usually are similar role which the audits do but there is no requirement for independence so this is already a big difference and it's not often even there is a requirement to conduct it with the use of third party because oftentimes, according to at least the current wording of the text of the act, the companies, it would be sufficient for the companies to do this internal conformity assessment
Starting point is 00:39:17 and to verify the system internally. So there is no general requirement. Although we hear, even if you heard the statement by Sam Waltman of OpenAI recently in some of his public statements, he was arguing for this kind of independent third partyparty audits, at least, again, for some really high-stakes systems which might be potentially influential. So, again, not all systems, but there are discussions to have this in place. And so far, I think the potential to verify independently
Starting point is 00:40:09 that the requirements have been met is limited but in some cases it already exists and maybe we will see a development of that eventually because I think you know even sometimes it may be in the best interest of the vendor, sorry, the buyers to buy it from me, how am I going to prove that it's really trustworthy, right? Because if it's only my words, why
Starting point is 00:40:55 should they trust me, really? Because I mean, it's a huge question, especially with the systems which are operating in high stakes use cases, right? So it might be, for me, as a potential, let's say, if I'm an AI vendor, it might make actually sense for me to proactively inquire and attract, invite some independent third party to do an audit for me.
Starting point is 00:41:33 So I would be able then to provide this independently conducted assessment report to show you that I'm really compliant. So this, I think, has also potential, this kind of voluntary involvement of auditors. Indeed. potential, this kind of voluntary involvement of auditors. Indeed and not just for vendors as just mentioned but it could also actually create like another like a new market, a market for third-party independent third-party auditors in the same way like you described such actors already exist and do their work in certain domains that are heavily regulated like finance or healthcare, for example, you could have some similar, in principle at least, markets spring up for AI. I'm guessing that even in the case of this one particular legal framework by the city of New York that you
Starting point is 00:42:27 mentioned, that because it's still in its infancy, I guess that probably there are still no provisions for the type of entity that could actually fill in that role. It's probably just generic at this point. Yeah, it is generic at this this point but I think it will develop the regulations will develop based on the actual practices and when the authorities will be able let's say to to survey the market to see what are the best practices really which are emerging because nowadays i already see those uh you know entities which offer offer those kinds of services based on sometimes you know their perception of how an audit independent audit should look like but of course this is not ideal of course, this is not ideal. Of course, this is not how things are done in, let's say, in the financial markets,
Starting point is 00:43:29 because there are standards which are set not by the auditor himself or herself, but by an independent organization. And then all auditors have to follow them so it it's it makes of course much more sense to have an independent organization setting the standards and then all the market players who are willing to say that they are conducting audits you know to comply with a unified set of rules, which would relate to the audit criteria, the audit procedures and so on. And just from my experience, I see that there is a movement in this regard to come up with potentially rules which may be later on adopted by someone. I have been recently interested in what is going on in that field and I saw actually that, yeah, there are some communities which are already working to develop those kinds of frameworks and criteria. which I've recently become a contributor to, which is trying to do exactly this, to come up with independently drafted criteria,
Starting point is 00:45:14 which if then later on adopted by some governmental organization or rulemaking body, I mean, they could serve as a basis for this future rules, which would then create a framework similar to one which you already have for, let's say, financial audits. So there is some future I can see here. Okay, so I think that we already kind of have a picture of how maybe different legal frameworks around the work sort of influence each other. So you mentioned for example how this new provision in the legislation of the city of New York may actually influence, for example,
Starting point is 00:46:08 eventually the EU AI Act. And so in order to have a better idea of what are the players, so to speak, in terms of regulation in AI worldwide, would you be so kind as to quickly give us like just a checklist without much analysis of what are the the regulations that are either in effect already or being discussed around the world yeah i will not be maybe very comprehensive because you know i have primarily an interest in what is going on in the European Union, but to the extent I do know, of course, there is, apart from what I've already described in Europe, this EU AI Act, and of course there are certain provisions which apply to the AI systems which are embedded in the GDPR and the Digital Services Act, which has been also receiving much attention recently. of course, thinking or enacting or thinking about enacting some rules.
Starting point is 00:47:27 And we can mention, of course, these examples like in China, they have their mostly, I would say, sectoral rules around AI and some recommendations on trustworthy AI implementations. So, and of course, they are very specific and particular to the political situation in that jurisdiction. And I'm not sure that they may not be, they may serve as a global example, but there are some regulations in China and in particular around algorithmic recommender systems. So they have set up certain requirements in that regard and as I said, some trustworthy recommendations.
Starting point is 00:48:26 And of course, speaking of, I would say, a lighter touch, in a regulatory sense, we might say that actually, the United States, although they have in some states already again regulations on data protection and there are also some federal acts which also relate to sectoral rules on on data use yeah in terms of using using personal data or personally identifiable information, how they call it, let's say in healthcare, there is still no comprehensive federal regulation of neither AI nor personal data use in the United States. And more generally, I think the approach so far, maybe it will change, but the approach so far and with some exceptions, as I said,
Starting point is 00:49:35 like for the audits of certain AI systems in one particular sub-part of the United States. Generally, so far, they have refrained from thoroughly regulating the use of AI and data collection. And, of course, you may see that a lot of huge companies which are building AI models and offering services based on AI systems are coming from the United States. So there is sometimes this argument of course that is being made that maybe the reason why everything in terms of scaling and coming up with new ideas around AI system is to a large extent is coming right now from the United States is because of you know, Alexa regulations, lower amount of regulatory burden that the companies are facing there. So they are able to scale their products more easily
Starting point is 00:50:57 without you know, sufficient, I mean, without maybe a substantial burdens in terms of compliance and regulatory requirements, which might be, you know, a necessary thing, an essential thing for any, let's say, European business, which is subject to data protection and data retention requirements and so on yeah uh there might be uh and some some would argue that and so some others of course uh might be arguing in the contrary direction that uh actually yeah if you have this kind of regulation in place you have some kind of legal certainty which allows you you know to plan uh uh very uh like long term in a long-term way, your business activities, and you will be sure that there is a certain continuity
Starting point is 00:51:50 and you are at least compliant with the set of regulations which is in place, and it's not going to change that much in the future. So arguments can be made both ways, and I think some other jurisdictions might be giving also the examples which would be leaning towards the European or the American side of the regulatory spectrum. So, for example, there are certain draft rules on AI and actually data in Canada, which has not been so far maybe successful in terms of being comprehensive and well-received in Canada. But, I mean, there is some work and there is the draft law artificial intelligence and data act and of course there are you know other jurisdictions
Starting point is 00:52:55 and to give you one example which is particularly my own experience I've recently been involved in drafting the air regulations for Kyrgyzstan. It's a country in Central Asia. And of course it's a much smaller market as compared, let's say, to the European Union and European Economic Area. And they're actually using the EU-AI Act as a model in terms of values and principles.
Starting point is 00:53:34 I had to come up with proposals to the Parliament and government with some ideas about how there could be certain regulatory act put in place, but which would not be as maybe comprehensive and as extensive and would not have as much regulatory burden, let's say it frankly, as could be possible, even imaginable in a larger jurisdiction, so like the European Union. Because if you don't have a lot of subject matter experts, if you don't have expertise in the regulatory bodies, of course, if you want to regulate such a complex technology, or rather not the technology itself, but the use of technology and some uses of technology like AI,
Starting point is 00:54:37 well, you have to consider what you can actually do, what is feasible from the regulatory standpoint, also from the business standpoint, because obviously you would not want to suffocate all those startups which are trying to come up with new innovative products and innovative AI systems. And it was an interesting exercise in trying to actually, you know, try to come up with a mini EU Act of a sort for a smaller jurisdiction. And yeah, it was an interesting experience for me.
Starting point is 00:55:21 I'm sure that the people who are behind these initiatives also have some pragmatic way of thinking. So, for example, if a jurisdiction as large with as many subjects and as big a market as the EU is trying to lay out some rules, it has a different kind of weight than a country in isolation, such as Kyrgyzstan, for example, that you just mentioned. So in the case of Kyrgyzstan, if companies find it hard to comply to the regulation, they may as well say, well, okay, we don't care about this market so much, we're just going to pull out. In the case of EU, even though there have been some reported incidents of companies at least threatening to do that lately in view of the upcoming EU AI Act, it's going to
Starting point is 00:56:13 be harder to actually put that in practice. They're going to have to write off a very significant part of the global market and so that makes it much harder and the precedent of GDPR shows us that eventually instead of just having let's say two set of operational rules one for the EU and one for the rest of the world some at least some companies find it easier to just apply the GDPR rules wholesale to their entire global products and so this is sort of the domino effect that regulation can have. But speaking actually about the EU AI Act and knowing that it's probably the most potentially at least influential and also that we're close to wrapping up, let's focus on that to cover as the last topic, let's say.
Starting point is 00:57:05 And let's try to briefly go through the main principles of what the initial draft of the EU-AI Act included, which according to the best of my knowledge are laid out on a sort of layered approach, so a distinction of the potential impact of what its AI system may have. And what has changed between that initial draft and the latest draft that was recently approved? Yeah, of course, we will not be able to cover all aspects. I would say that generally there is an interesting change you may trace, which corresponds in particular to, especially if you take into consideration the two most recent versions, which are the one which the European Parliament has come up with in terms of compromise amendments most recently, and the European Council a little bit earlier, back in December last year.
Starting point is 00:58:14 So if you consider what the main changes were there compared to the European Commission proposal in 2021, is that actually they have started to consider those newer systems, which have come actually on the market so far, and they were not considered back then let's say in 2021 nobody except for the specialists knew about this transformer technology in terms of machine learning models and large language models like there were some already but they were not as prominent as now as right now and so you may you may see actually that even already back in december uh we already saw the the the act provisions would already include the the idea about some assistance which might be general purpose, right? Which might serve different
Starting point is 00:59:26 purposes and the original idea of like this AI developer knowing very well what the users of the systems of their systems might be, would not necessarily always hold. And this idea found its way into the text. And even more recently, if you take a look at this compromise amendments of the European Parliament we already see this again the new regulatory new notion of foundation models right so something which they now at least in the current version of text describe us as a you know entity which is different compared to an AI system. So there are different definitions for the foundation models,
Starting point is 01:00:29 which are, you know, a little bit tautologically defined as AI models and not necessarily maybe, you know, which may not be necessarily very clear to everyone, but still they differentiate those foundation models and the general purpose AI systems, which could be, you know, using, utilizing those foundation models, right? And AI systems more generally. And then, of course, there are different uses
Starting point is 01:01:02 to which those systems may be put, and some of them still are considered high risk. But we already see the tendency to consider that there will not be necessarily always high risk AI systems, but sometimes there will be high-risk users of certain systems. And now the vocabulary is changing, is evolving a little bit, and you see those kinds of developments, of course. And, you know, there are other interesting points which evolved. We see, for example, that if a certain AI system is being used in a manner where not the system itself takes an ultimate decision or something happens eventually in the real world as a result but let's say if
Starting point is 01:02:08 people or organizations are using those systems in a complementary way like the way you are using let's say writing aid or you know grammar correction function in your word processor right so you may know or you may not know that it contains some kind of you know ai technology which helps you but in the end i mean you are let's say you are drafting something and you are using it for your assistance. And then, of course, the text, the current text says that it's a different scenario as opposed to the situation where the system itself technically, so to say, decides what to do and acts upon certain objects, you know, even in the real world. And, of course, this needs to be taken into account from the risk management perspective because those systems which create real consequences and especially which are affecting people
Starting point is 01:03:13 significantly and their rights, I mean, and they need to be more stringently regulated as opposed to those which we are using just for our convenience and in the end we are taking the decisions of what to do with the actual output of those systems. So these I think the most interesting evolutionary changes which have taken place so far, both with the December text, the Council's proposal, and the European Parliament's proposal more recently. Yeah, it's an interesting development because it shows, if nothing else, it shows that the people who are in charge of these drafts regulation are at least
Starting point is 01:04:07 aware of technological developments and trying to incorporate them as much as possible to these evolving drafts. So it could be interesting to see what's the road forward for this particular framework. So what's the time frame and what are the steps of the process until it actually comes into effect? And I'm asking that again in the light of what could potentially change between now and the time that it actually becomes effective and if there is potentially a path to incorporate any additional amendments? Yeah, so now I think a lot of people already know that there will be a vote, a plenary vote in the parliament
Starting point is 01:04:53 on the text of the compromise amendments. And there is a draft decision to that effect, which has been made public so which is basically a consolidated position of the Parliament and there might be some you know minor changes to it some minor not very minor but some discussions around the prohibited uses of AI systems let's say there might be discussions about biometrics, meaning not all biometrics, but biometric identification in publicly open places.
Starting point is 01:05:36 This has been a controversial topic which has divided the parliamentarians so far. So different fractions argue differently in this kind of what actually could be the prohibited uses of certain AI systems. Other than that, I think the parliamentarian part of the text, or the version of the text has been more or less aligned in between the political fractions. So except for this list of prohibited users, it's not very likely there will be significant changes in June until this discussion and this adoption of this parliamentarian text. But then we will have this trilogue, which requires the parliament, the council, and the commission to come and discuss their approaches,
Starting point is 01:06:35 because there are three different approaches in the end, and they need to be aligned. And if all goes well, as far as I understand there is high hope on the this act being finalized and aligned all positions between these three political entities aligned by the end of this year and assuming this is the case entities aligned by the end of this year.
Starting point is 01:07:10 And assuming this is the case, which is not 100%, but so far it is likely, we will have a situation where, assuming, let's say, if it's enacted early next year, there will be a two-year period, 24-month period for the organizations, all AI providers, AI deployers, to make sure that their activities are in compliance with the future requirements. So there will be this two-year grace period to prepare. Unfortunately, this is complicated by the fact that there has to be also certain standards which has not yet been enacted, which would be useful for the industry to actually operationalize those requirements which are embedded in the Act. So there are certain standards, technical standards set on the European level. It is assumed that if you comply with those AI standards, you comply with those AI regulations, with this AI Act.
Starting point is 01:08:39 It's a presumption of compliance. This is a rule right now. I mean, it's not the new rule. It's the usual rule around regulations and technical standards in Europe. And the commission has asked the San Senelec committee to come up with those standards. There has been a requirement to develop those. My only worry is that there is little time to come up with all those standards in the meantime because it's a really huge chunk of work,
Starting point is 01:09:15 different horizontal and sectoral standards which need to be enacted and that will show organizations how to actually what what to do in practice to to to assume compliance to ensure compliance with the Act and yeah they're supposed to be in place and by 2025 or 26 and we will see if it happens as predicted, as hoped. So this will be two or three years where we will have to see how it all goes
Starting point is 01:09:56 and to prepare ourselves. Okay. Just to clarify, what kind of standards are we talking about exactly? So again, to draw the parallel, to try and make some comparison to what we know. So GDPR. In GDPR, to my understanding at least, it basically comes down to a few checkboxes, let's say.
Starting point is 01:10:17 Like, okay, do you have a CDO role in your organization? Do you have specific consent compliance forms? Do you have certain processes and certain ways of processing data that you know people about and so on? So in the case of the EU AI Act, what could those standards actually be? Are we talking about, I don't know, technical standards? Like, I don't know, I'm just thinking out loud here. Maybe you have to store your data in a certain way and process it in a certain way. Are we talking about roles in the organization, processes?
Starting point is 01:10:53 What are we talking about exactly? So there are certain standards which have to be developed in terms of complying with the Act, and they are likely to include the requirements that would enable you to set up, let's say, a data governance structure, which is envisaged by the EU Act, or to set up a risk management system, which is also envisaged by the EU Act. Of course, it applies to high-risk systems or high-risk users of AI systems. It's not necessary and not foreseen for lower risk systems. But to the extent that it is a requirement for certain systems, which are high risk, I mean, there has to be some technical standards to align the approaches.
Starting point is 01:11:55 And I don't think there will be overly prescriptive in terms of methods of achieving compliance, but there still has to be some alignment in terms of what we actually need to do if we are a business and we want to set up those structures and make sure that, let's say, there is some quality management procedure in place. I think there are some quality management procedure in place. And I think, yeah, there are some standards already which are available from other, you know, standard setting bodies
Starting point is 01:12:36 which could be considered and taken into account when SENSENELEC, this Joint Technical Committee on Artificial Intelligence, proposes or considers its standards, but they have a lot of work to do themselves as well. Okay. Well, I'm sure that you will be keeping an eye on that. And it's probably, just judging from this conversation, it's probably something for everyone to be keeping an eye on that and it's probably, just judging from this conversation, it's probably something for everyone to be keeping an eye out because the actual implementation
Starting point is 01:13:12 of the EU AI Act depends on that and it's not exactly developed at this point as I gather. Yeah, yeah. So we really need to pay close attention to what Senelec does and how they succeed and how they progress with those standards. Because, I mean, at least to everyone who is involved from the industry side in developing or deploying AI systems, I think this is very important. Thanks for sticking around. For more stories like this, check the link in bio and follow link data orchestration.
