Storage Unpacked Podcast - Storage Unpacked 262 – The Ethics and Regulation of AI
Episode Date: October 18, 2024

In this podcast episode, Chris is in conversation with Jeffries Briginshaw (Head of EMEA Government Relations at NetApp) and Adam Gale (CTO for AI & Cyber Security, NetApp) discussing the EU AI Act and the regulation of artificial intelligence across the world. The EU AI Act is an early introduction to the regulation of the use of AI by businesses in their engagements and interactions with customers. As explained in this conversation, there are classifications of AI types and, within those, restrictions on what businesses are permitted to implement based on those categorisations. Some AI usage will be banned, while other uses will require human intervention and close monitoring. How should your business engage with AI and ensure compliance with the Act? Listen to the discussion for more details. As mentioned in the recording, for details on what NetApp can offer, point your favourite browser to https://www.netapp.com/artificial-intelligence/ to learn more.

Elapsed Time: 00:52:17

Timeline
00:00:00 - Intros
00:01:19 - Why should we be regulating AI?
00:02:30 - What will the impacts of AI be on personal and work life?
00:03:55 - What if we get regulation wrong?
00:05:30 - What happens if AI goes wrong, such as data poisoning?
00:09:04 - Existing EU/UK law has been successful at regulation (GDPR)
00:10:25 - What is the EU AI Act?
00:11:46 - "Prohibited Practices" will be banned from 2025
00:14:00 - How will the use of AI in business be regulated?
00:18:05 - The EU AI Act appears to focus on protection for individuals
00:20:56 - EU citizens are broadly positive about AI - if it is successfully regulated
00:21:52 - Compliance has an overhead - in terms of hard costs (developers)
00:25:20 - What are the penalties for not complying with the EU AI Act?
00:29:50 - What about the rest of the world - the US and elsewhere?
00:35:10 - Could we see "cross-border" complexity?
00:37:40 - What are the technology implications for AI regulation?
00:40:07 - Should businesses be demonstrating their AI compliance?
00:44:03 - What does NetApp offer customers to help AI compliance?
00:47:38 - AI will require a "big red stop button"
00:50:00 - Wrap Up

Copyright (c) 2016-2024 Unpacked Network. No reproduction or re-use without permission.
Transcript
This is Chris Evans, and today I'm joined by two guests from NetApp.
I have Jeffries Briginshaw and Adam Gale.
Gentlemen, how are you doing today?
Great, thanks very much.
Jeffries Briginshaw, and I run government relations for NetApp in the EMEA region.
Pleased to be with you.
Hi, I'm Adam Gale. I'm field CTO for AI and cybersecurity.
Thank you for having me today.
Now, Adam, great lead in there to mention the word AI,
because that's what we're going to talk about today.
We're going to talk about the ethics of AI and the regulation of AI,
especially in Europe, because as an example, by the way,
just literally last week, I think we saw the California governor decided
he didn't want to put through some particular bill.
He vetoed a bill relating to AI.
So I think currently that the whole topic of ethics and regulation of AI is really very
sort of critical to understand, specifically because obviously the EU and the US and various
other jurisdictions are very different.
So today we're going to talk about, rather than technology specifically, we're going
to talk about the regulation of AI, what that means, how it should be done, and the implications of it.
And thank goodness I have two wonderful guests who know exactly this topic inside out.
So in order to get started, Jeffries, why don't you kick us off and explain to us why we should be regulating AI in the first place?
Totally. And just let me say it's great to be on the podcast.
Thanks for the opportunity to share some of our thoughts that we're thinking as we navigate our own AI journey. It's an exciting
one for us. And we're looking forward to sharing some of that with you. Our aim is definitely to
stimulate thinking, not trying to be a compliance guide or anything like that. We can't be that. But
certainly, it's a super exciting topic. And our starting place is that AI regulation is useful, important and necessary. Listen, nobody wants a world where AI is exploited without regard to social responsibility. We absolutely support sensible controls that provide the space to sustain and support innovation, but without leaving us open to excessive risk taking.
So it's an exciting space,
ripe for invention, modernization,
but legislation can be a really important tool
in helping us get the balance right
and leaving us to ensure that we can do what we do best,
but without causing great worry and consternation in society.
Great. So Adam, just, you know, let's dig into what AI, you know, is going to really mean for us,
because we see it everywhere. We hear people talking about AI. We hear, you know, it's going
to impact our lives, but exactly what are the implications of AI, especially on, you know,
work on personal life and everything else? That's a really good question. And it's a very big question, I think.
With the framing of the EU AI Act,
which I'll try and refer to as the Act from here
to save myself some words,
it does cover personal and it does cover work as well.
And the implications are quite far and wide.
And I think we are already seeing those implications
playing out in real time now.
You may have seen recently in the news, I think some of the large AI providers have
delayed their entry into the market in the European Union because of regulation.
Whether that be the EU AI Act or the Act itself, or whether it be a GDPR issue or a data governance
issue is yet to be seen, but this is already affecting the workplace.
It's already affecting products, and things are moving so fast.
It's almost hard to keep up.
But as you can see, it affects the workplace.
We use AI at NetApp.
We have our own AI here and I can often put my emails in it and have it make them
sound like I know what I'm talking about, which is fantastic. But the Act also
covers things like our personal life, which is great too.
Hopefully we'll cover a little bit more about that, because I often ask: what do these things do for us?
What do they do for me? What do they do for the average person walking down the street, and why
should I care?
So what happens if, you know, we get this regulation wrong, or we don't do this
regulation? I'd sort of point to social media as a good example of where I think we were very late to the
party in terms of any sort of regulation at all. And now it all seems like governments, or areas
like the EU, for example, all they seem to do now is just bash the social media companies and tell
them they've done something wrong. And you do look at it and think, had they regulated
at the beginning and put the framework in place, we would have been in a better place
than where we are. So what are the implications of us not regulating AI?
I think that's a fantastic question. And I think you're
right, to be fair. We were slow to regulate in the social media space, in my opinion, and maybe
that is why we are seeing more regulation coming thicker and faster now, particularly things like
this Act. And if we don't regulate, things do go wrong. There was an example of a company
at turnstiles in the UK, in railway stations, taking pictures of people going through those
turnstiles. And they were using AI and emotion recognition software. So they'd be trying to
find whether you were hungry, sad, angry. And again, they were using AI for this.
The answer was that what they were trying to do
was do predictive crime analysis.
If someone's angry, it may lead to an incident and stuff.
But I believe if you Google this,
it's quite well out there.
There was also talk of using it for marketing purposes.
I assume such as marketing you a sandwich if you're hungry.
So we have to ask ourselves,
are these the things that we want?
Do I want a picture of myself
taken? Do I want someone to be marketing specifically to me using AI? So I believe the Act goes
some way to cover those sorts of things.
I think that's really interesting as an example, and
that sort of makes me think about a couple of things there. I think, first of all, if it went wrong,
what does that lead to? You know, if you think about somebody who's
flagged incorrectly, that's going to sort of affect how you walk through a railway station. You know, maybe you get
alerted to the police, and the police could walk over to you and decide that they're going to have
a word with you about something, and you've literally done nothing wrong. So what
could go wrong with AI? And, you know, an area I've thought is potentially a real problem is, you
know, that whole thing of data poisoning, whether people could just inject junk into some sort of stream of data. Maybe when it takes our picture we
hold up a fake picture or something like that, I don't know, we'd see what we can
get away with. But, you know, what could go wrong with this sort of thing?
Yeah, I think that's a
fantastic one. You just pointed to data poisoning, and I believe that's actually something
that is covered within the Act; I think it is Article 15. Now, the Act itself is
broken down into these articles, and they're all self-titled quite clearly,
so if you pick this up and you decide to read it, it's quite accessible. You could
read it as the average person or IT user and understand what you're reading. And I believe
Article 15 covers exactly that, data poisoning. And the worry here is that we're training our models, we're training our AI, on data.
And it's like a sausage machine.
Bad data in, bad AI out.
And there is a great example of this.
Although this is not specifically data poisoning, it is a good example of bad data in,
is that you could Google a cheese pizza recipe and Gemini would respond that you put glue in cheese pizza.
Now I know you don't put glue in cheese pizza. I hope everyone else knows we don't do that. My wife
might say I wouldn't because I can't cook. But the point being here is that it was trained on data
on Reddit in fact and there was a thread talking about how people who take pictures of pizza put
glue in the cheese to make it stringy to get that really long string and make
it look cool, they put glue in it, PVC glue, and they take a picture. Now, the AI didn't
know the difference and it just spat that out as a recipe. Now, if you take that example, you could
logically project how I could nefariously inject bad data into an AI and get it to do something I
don't want it to do, or get it to do something which could harm other people. Another example could be injecting bad stop sign data into autonomous driving.
If the vehicle sees dirt on a stop sign,
and we've taught it to say that means accelerate, we could cause problems.
Now, we know there are safety mechanisms already in place to stop this from happening.
That's just an example, but it's a great example of what dirty data is,
or poisoned data training sets, and we need to protect against these. Ironically, I think the answer is
protecting AI with more AI, such as user behavior analytics and those sorts of things. But that's a
great example you mentioned.
Yeah, the food industry could have a real problem there, couldn't
it? Because, you know, I heard something the other day that said they used to do something related to, um, cigarette smoke into mashed potato to get the color right, or something
like that, because it would look a bit weird on screen. So, you know, there's another example where,
if the AI had picked that up, that would have been an awful recipe, an absolutely awful recipe.
There's also hairspray, you put hairspray on food. Yeah, true. That doesn't sound very nice.
No, definitely not.
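To make the "bad data in, bad AI out" point concrete, here is a minimal sketch of the kind of screening being described: check incoming training records against data you already trust and quarantine anything that deviates sharply. The field name, threshold and statistical test are illustrative assumptions only; real poisoning defences are considerably more sophisticated.

```python
# Minimal sketch: quarantine suspicious training records before they reach the model.
# The "value" field and the z-score limit are illustrative assumptions.
import statistics

def screen_records(trusted_values, incoming_records, field="value", z_limit=4.0):
    """Split incoming records into (accepted, quarantined) using a simple z-score check."""
    mean = statistics.mean(trusted_values)
    stdev = statistics.stdev(trusted_values)
    accepted, quarantined = [], []
    for record in incoming_records:
        z = abs(record[field] - mean) / stdev if stdev else 0.0
        (quarantined if z > z_limit else accepted).append(record)
    return accepted, quarantined

# Example: one poisoned record hidden among otherwise normal readings.
trusted = [10.1, 9.8, 10.3, 10.0, 9.9, 10.2]
incoming = [{"value": 10.1}, {"value": 9.7}, {"value": 950.0}]
ok, suspect = screen_records(trusted, incoming)
print(len(ok), len(suspect))  # 2 1
```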
So, okay, so that's a good bit of background there on certainly what could go wrong. But I think, Jeff, it's worth just highlighting that, especially in the EU, and it's been sort of folded out into the UK to a certain degree, we actually have done pretty well with existing frameworks like GDPR.
They seem to have actually done some good.
Yeah, no, I think that's a good example.
Data privacy, it's been a key building block, in particular, because it's created a platform
of trust and confidence at the citizen level.
No doubt, it's a compliance burden.
We moan and moan and moan about that compliance that is complicated,
no doubt about that. But if you can, I mean, it would be difficult to imagine a path to digital
transformation, the path that's now intensifying with lots of kind of regulatory aspects, but it'd
be difficult to imagine it without a GDPR, a data privacy pillar to it. And it'd be difficult to
imagine it because that confidence would be lacking, I've got to say. And with AI, I
think it's moving more quickly than we moved with privacy. So
I'm not sure whether that Brussels effect will play out in
the same way that the GDPR leadership that Brussels had on
that. That's something we might we might come back to. But I
think bottom line,
that these pillars,
trust and confidence are needed.
GDPR did set a precedent
for how you can do that.
And in the AI Act,
which we can talk about more,
I think that will do that as well.
And that's why I think
we can be welcoming of it.
Okay, so why don't we dig down
into the detail, a bit more detail into the EU AI Act, or the Act as we should just call it, I guess, as you said, Adam.
Jeffries, you know, can you take us into a bit more detail about what it actually is and what it looks like?
Yeah, and I think there's a starting place distinction that we should clear up.
And that's the UK EU thing. Lots of your listeners
will be UK joiners. And of course, the EU AI Act applies in the EU and the UK left the EU. So
on paper, and not just on paper but in law, the EU AI Act does not apply to the UK. That said, if you're in an EU supply chain, developing, supplying, that puts technology
in the EU, the EU AI Act applies. So you can be in the UK and still subject to the EU AI Act
pretty easily. So let's keep that distinction in mind. But let's imagine that really we're
talking about everybody,
one way or another, having an interest in what is in the EU AI Act. And the Act, as you say,
Chris, came into force from August this year, went through a legislative path quite quickly in the
end. And the timelines really start to kick in from the beginning of next year. And the first timeline is a ban on prohibited practices.
And we can come back to talk about what's in each, let's say, bucket of activity.
Remember that this act has got to be rolled out in every member state.
And that has a kind of set of implementing realities that will hit every member state and they will do it in their own way as well.
But the kind of the starting place act is already in force with the timelines kicking in.
I'm not going to go on too long, but I will just say one more thing that the act takes a risk based approach,
which means that it classifies according to four categories of risk, unacceptable, high, minimal, and limited,
and different conditions apply, with different timelines stretching broadly into 2026 and 2027,
mainly. Final thing before we get into talking more beyond the definitional part,
the AI Act talks about machine-based systems and has a definition for AI.
And that definition really requires there to be a machine-based system with compute in any form.
Four elements then are needed.
Autonomy, adaptiveness, inference, and outputs that can have effect and influence things.
So what else?
Last, last point.
The Act is pretty voluminous.
Adam talked about it.
It's readable.
It's substantive.
It's got lots of detail.
But it's only a part of the story.
There's plenty more to do.
There's plenty of work in progress.
Guidance is needed.
Best practices are being developed,
standards that will specify are also needed and will be developed. There's a guideline,
for example, for GPAI (general-purpose AI) models that the Commission is working on; it is supposed to be ready for next April, and they've just closed a consultation on it. So there's a lot out there,
but that's the frame. And, you know, I don't want to go on at length and bore too much,
but I think that opens up the conversation around that structure if you want.
Okay. So let's dive in and talk a bit about business then, Adam. How about that? Let's start
with that initially, because from what I'm hearing so far, it sounds to me that if I want to use AI in my business, I'm going to have to follow a certain set of rules.
I'm going to have to do something.
But I'm not really clear what that is at this point.
I'm hearing lots of stuff around regulation, but I'd really like to try and understand what that means in a bit more detail.
For example, if I was to decide I want to add chatbots to my website or I'm going to use something to do
analysis of customer data, what does it mean for me as a business?
Sure.
So as this is a big subject, it will probably mean something subtly different
for each business.
But taking your example there, a chatbot, that might potentially fall into the limited
or minimal risk of the risk categories that Jeffries mentioned earlier.
And I'll dig into these little risk categories because they really are the crux of it. And I
think they're a great way to do this too. I think this is the right approach, in my opinion,
of placing things in a risk category. But going back to your original question,
what are the implications for businesses? Things are moving so quickly. There is a lot of
implications. The first thing I tend to do when discussing this
with my customers is say: sit down and read the Act, at least read the abridged notes, and figure out
what the risk profiles are and which one you would fit into. It's probably the best way to get into it. And
then start aligning people within your organization. So look for your data experts; if you don't have
one, maybe set up a center of excellence.
That's probably, you know, for the larger organizations among us that would
be looking to do those sort of things, but at least align some key individuals
along the way and have them read the act or dig out the bits that are interesting
for them, me, I like cyber resiliency.
I like that sort of thing.
So the cyber resiliency articles are my favorite.
I would say I did realize I just said I had a favorite article in a piece of legislation there, and I realized that's probably quite embarrassing,
but it is a pretty good one. Now, let's just, if you don't mind, just quickly dive into some of
those risk categories, because if you fall into the high risk category, then yes, as a business,
you definitely need to pay attention. You've got things to do. But unacceptable risk, as Jeffries
mentioned, these are things such as cognitive
behavioral manipulation. So we can't do that. That's banned. You're not allowed, for example,
to make a child's toy, use AI in that child's toy to encourage dangerous behavior in children.
We won't allow it. That's obvious. We don't want to do those things. But they're also banning
things like social scoring. Now, some countries are looking at social scoring and to do it properly,
you need to use AI, but we're going to ban that. And real-time remote biometric identification.
So think again, I use that example of trains because I'm a train guy, cameras in train stations,
picking out people's names or picking out people's faces and then logging them in the database and saying, that's Adam, he's heading west on the Jubilee line. We're not allowed to do that,
unless we have good reason to for law enforcement, such as maybe a terrorist threat or a missing person. There are exceptions to that rule, but I think it's good that we've laid out you can't do
these things because as a business, I'm not going to put my resources into that. Secondly, high risk,
and this is probably the bit that you're interested in. This is the area where there are lots of rules.
Now, I can think of a long list of areas this would fit into: things like critical infrastructure, education, employment, law enforcement, migration. All these areas
are covered by this high risk. Now, if we are going to use AI, for example, in the education
system, we already have students using it trying to cheat. So logically, we're going
to use it to try and detect cheating or even mark exams, for example. There needs to be some guidance
around this. I don't want to spend two years studying for an exam and then have an AI mark
my exam when it's only got a 70% accuracy rate. That would be bad. So there are some rules around
that, and you will find those in the articles. You'll find them in Article 12,
for example, which covers record keeping and logging requirements.
Or in article 14, which is human oversight.
Human oversight is a natural person
that needs to authenticate the decision of the AI
when it is a high risk AI.
And this is critical
because if we're going to roll these things out
and we will be into our medical industry and such,
we need oversight.
Yeah.
So, Jeffries, listening to this, if I was an individual, the sort of logic I'm hearing
about this is it does feel very much GDPR-ish, if that's a good way to describe it, in the sense
that I'm hearing about lots of protections for individuals,
I should say. And it sounds to
me like this Act is definitely structured a bit like GDPR in its protection of the
individual. And it's very interesting to hear that description about the different levels of risk,
because it sounds like if I wanted to develop something that's fairly low key, doesn't
necessarily identify somebody individually, that sort of thing, that sort of level of AI would be
pretty easy for me to implement and, you know, get agreement for. But anything that's higher risk,
that involves social engineering or changing, you know, how we measure people based on maybe their
characteristics, that sounds like it's really the high risk side. So it sounds like this is
very focused towards the individual in terms of its protection.
I mean, I think that's quite straightforward.
And the answer is yes.
I think that the Act, just like, as you say, GDPR, the Act certainly points itself at technology
developers, technology users, technology experts.
And it gives citizens rights, it gives protections to citizens.
It's all about making sure that people don't overstep the mark,
but that we can take advantage of the technology where we can. So I think it is a straightforward
proposition that there is a balance between making sure that the citizen is protected,
giving routes for the citizen to make complaints and follow through on things. So it's not just sort of a static canvas,
but that the detail is available to citizens to take advantage of.
So I think it is a straightforward proposition.
You're right, it does follow the precedent,
but it fits in this kind of general balancing act
of creating space for innovation,
but making sure we can maintain trust and confidence in society
and give citizens ways of exercising
their rights to insist upon all of those things.
I think that's pretty important, though, isn't it?
You know, because ultimately there'll be a lot of people who could be quite scared about what's
coming up, and looking at it and thinking, oh, how is this going to affect me when I do anything,
you know, interact with any business? So there is a massive confidence factor.
And that to me seems to be probably the leading reason
why I would personally want some sort of regulation in place.
I wouldn't want to restrict companies and say,
don't use this technology because, you know,
you could look at every technology we've had over the years
around anything from being able to pay for things online
to, you know, using the internet.
We do give a little bit away of ourselves
in order to get the gain of those technologies,
but there does need to be some controls around it.
So I think it is a confidence thing
in the way that it's set up.
I think just to add to that, sorry,
when surveyed 61% of Europeans look favorably
on AI and robots,
but they did say that 88% say that the
technology needs to be carefully managed. So, exactly what you're saying there, people are a bit worried about
this, they know it's coming, but they want it carefully managed. And that's taken from the
Eurobarometer survey on the Europa website, so it's, you know, a publicly available survey. So I think this is a
welcome addition to regulation, in my opinion. But there is always a but, and there is a compliance burden. And we do worry about the
compliance burden. I was just reading in my sort of tech press this week that Antoine Jouteau,
who's the CEO of Adevinta, which is, I think, an online classifieds group, said that between 15 and 20 percent of
his product roadmap work was dedicated to compliance, and that, he said, meant 500 engineers
full-time during the year. That is a lot of hours of engineer time to get regulation working. Now,
we're all saying we're up for this and you need it, and you can't really exploit, or deploy, without that.
But that's a big engineering like cost.
And, you know, he goes on to make the comparison between the EU and the US: that we create this environment in the EU, and it's comprehensive and does what it's supposed to do.
But ultimately, the competitiveness level,
that does play out. And in the US, if that, let's say, number of hours is less, and it is meaningfully
less, then, you know, that's a competitiveness problem when you kind of scale it all up to what
our economy is doing and what they're able to do. And when you scale out of AI into, let's say,
the broader regulatory canvas on tech that is developing, all about trust and confidence,
all about making sure that we get the balance right. But you look at that journey to digital
transformation, it's got so many elements, GDPR, privacy, but it's also got cyber, it's got data, it's got AI now as an accelerator part of the equation.
But all of that is producing regulation. All of it is producing kind of its own verticals at both a horizontal level and a sectoral level as well.
That's a huge regulatory spaghetti junction, which is really complicated.
And I've got to say that as we go into the next commission, that compliance journey,
we're talking about it in AI, and we've seen it in GDPR, I worry that it is a huge problem that is
there. I'm not going to say how it's going to manifest, but, you know, there's
so many different spaces that are being regulated, for good reason, that it all adds up to more than
500 engineers per year, per whoever. And that's got to be a competitiveness
problem as well, which it's a part of the mix. It's a part of the equation. It's a part of the
reason why the EU has taken an approach, but other people, including the UK, but certainly the US,
have looked differently at the immediate needs of the balance and are slightly watching the space,
not trying to get ahead of the technology, trying to see what,
let's say, they can do by leaning into their existing regulatory structures and existing
regulators. There's the US executive order. And in the UK, until we've got this new government,
we can talk about what they may or may not do. The practice has been to lean in to existing regulators, the ICO
for data privacy, crossovers into AI, etc, etc. So, I mean, I'm going on a bit. But the point is
that there's another side to the coin of trust and confidence, which is in compliance burden.
Compliance burden is about competitiveness. And we're all interested in competitiveness,
and competitiveness plays out in different parts of the world
and is related to the amount of regulation you put into the mix.
Right, we're going to come back to that in a second
because we're going to talk about the rest of the world in a moment, Geoffrey.
So I'll come back to you on that one in a second.
I'd just like to finish off the discussion of the EU Act, first of all, Adam,
and understand about penalties
because it's great to have an act that potentially has some teeth
and does what it needs to do. I know with GDPR there were penalties. Is there an equivalent
thing here? Are we going to see sort of turnover penalties for companies who don't comply? And how
are we even going to measure that? And how are we going to say that, you know, an AI isn't compliant?
It's going to be, it seems quite difficult really. You're exactly right. There are going to be penalties.
There is a fine of 7% of your turnover or up to, I believe, 35 million euros for prohibited practices.
So if you fall into that first category and you're developing something which is banned,
and I believe that's the first thing that we will be regulating because this is going
to be a phased approach into the market.
There is 7% or up to 35 million euros, that is.
And I personally think that'll be quite easy to implement
because if something's banned, it's banned.
It's relatively black and white.
The trick will come in or the more complexity will be the following area,
which will be the highly regulated areas.
And if you fall non-compliant in that area, it'll be 3% or up to 15 million euros.
And as I'm sure you're very well aware and the listeners are, these things are enforced.
Absolutely.
You know, you can Google GDPR tracker and you'll find lots of people have fallen
foul of GDPR and have been fined.
The mechanism or the sort of monitoring of it will be something that I guess we'll see play out.
And there'll be a bit of a wait-and-see approach to that.
But, yep, there are definitely fines and they will be implemented.
And how could this be expanded in the future?
Do you think the Act, is it going to be something that will be reviewed annually, biannually?
Will it be extended?
Because potentially, I guess as we learn more in this
environment and we realize the Act doesn't necessarily cover areas that people hadn't
thought of yet. You know, there's always things that come along that people haven't really sort
of thought through, and it'll need to be amended. That's going to be, I guess, going back to Jeffries'
point a minute ago, additional burden to go back and re-evaluate whether you're still
protecting, or you're still applying the rules equivalently, as you should do.
So how do we think that will be extended in the future?
So I think that's a great question.
This is such a fast moving thing that I think it would be naive to assume we'd get it right
in the first take.
So there is wiggle room within the Act.
I believe this was brought out with speed and a deliberate little bit of wiggle room in it there.
And there will be further definitions, for example, coming next year in the risk categories.
Because there were studies, you know, released by the Initiative for Applied Artificial Intelligence,
which I think was supported by one of the German MPs, I forget her name, I think it was Judith Gerlach,
a member of the German Parliament, and they looked at the risk categories.
And they put real live AI scenarios through them and found some grey areas.
So there is undoubtedly going to be some things that need to be tweaked or finalised or changed in the coming years.
And there is room, and there are mechanisms, for that too.
And I would add that it was always envisaged that because the EU were going out ahead of the
technology to some extent, they left themselves space. And in every regulation, there is plenty
of space for the development of technical norms. And that's going to come through guidelines,
that's going to come through best practices, that's going to come from calls for standards
to be developed in specific spaces, to specify and to explain those areas that Adam is talking about. And that's
normal and natural. There are other kind of adjacent regulatory projects, which will fill
out some of the spaces as well. There's an AI Liability Act, which is being developed as well,
just to make sure that in terms of manufacturer liability, there's sort of alignment between what's there
in existing products, liability type rules,
separate to the fines that Adam talks about
for AI specific stuff.
But in terms of kind of the legal liability regime,
that's going to be filled up.
So the space is going to be filled
and it will be reviewed.
It will be kept under review.
The political process will always be looking at this. There's talk of the area of AI and cloud being regulated. And that's another opportunity to look at, let's say, AI in the cloud and whether that has specific marketplace characteristics or even safety related characteristics that will need to be looked at.
There's going to be more regulatory activity in the space, that is no doubt, and it is a moving canvas.
Let's broaden it out and go back to the discussion we had a little bit earlier.
Let's talk about how this fits into the rest of the world, because it's fairly obvious that a lot of the companies developing this sort of technology aren't EU-based. They're definitely US-based. There's
definitely, from my perspective, a clear differentiation between the way that, say,
the US would regulate things and the EU does. But there are more areas around the world than just
that. You look at the way that X has been regulated in Brazil, where they've not complied
with what they've been told to do. Jeffries, how's this going to work? Because if every country
decides how it wants to do its own regulation, it could be incredibly complex for businesses to work out how they apply the
rules in different jurisdictions all the time. And again, burden becomes significant because
you've now got so many different rules to apply. Yeah, and we talked a little bit about the
competitiveness aspects of this, but looking at and answering your question about, well, what is
happening and what are the developments. As you say, there are developments afoot pretty much everywhere in
the world and also at a kind of global multilateral level, which we can talk about as well. But you
mentioned the US and Brazil and other places. And I think an interesting point about that is that a lot is happening at the same time.
That's different to the GDPR model. Brussels, with GDPR, delivered a Brussels effect,
created a template ahead of the game, and it was useful and taken up quite significantly around
the world. In this case, with AI, everybody is looking at the same stuff at the same time.
They're not necessarily regulating at the same time. Some are making the judgment that actually
they will wait for the technology before they regulate it at depth and some aren't. That's fine,
but it's all happening at the same time. So it begs the question of whether actually there'll
be an aligned path. And I think we cannot hope for too much in that space. I think everybody's trying to
answer the same questions and roughly speaking, wanting to do the same things, get that same
balance between safety and innovation. The way they do that will change in different systems.
That said, some of the processes are interoperable and international. So there is international
standard setting underway. ISO does work in this
field. So those are available as the underpinnings for when the regulatory system looks for
technical specification, some of that standards work will be happening at an international level.
Some of the global initiatives: the OECD is working on implementing the outcomes of the G7 Hiroshima process, which was all about kind of top-line safety principles. So they're coming down, top down, if you want, principles top down, technical detail bottom up. And then in the middle, there are these differences between regulatory systems that will do things differently. I think what we can hope for is that there's sufficient,
let's say, commonality that you can roughly be thinking the same thoughts in technology
development, even if at the margin, you'll be able to do more or less of something in a given
place. And we're seeing that already, that in terms of data scraping online for model training,
the EU AI Act is already having an effect
on what is possible and what isn't.
Whereas, for example, in the UK,
the ICO are taking an approach under their existing powers
to what they will or won't see.
And they may not be exactly the same; they may not come out in the same place. And in that difference, there is margin
for competitive advantage. So there's a balancing exercise underway. I think industry, technology
industry, would like to see less rather than more fragmentation. At the end of the day,
the opportunities of, let's say, arbitrage or
marginal benefits from one place to another are outweighed by the complexity of fragmentation. So
we would like to see alignment. I don't think we expect to see global alignment, but maybe we
shouldn't get too excited about how different it will all end up as being.
I can see it getting really complex. So for instance, you know, the iPhone's a great example
of a device where even now I'm already seeing certain features are being touted as not being
available in certain jurisdictions. So I think due to the EU's regulations, Apple are not releasing
certain features. But how do those rules apply if I take my phone and go and I'm in America?
So could somebody interpret those rules and say,
oh, well, you're in our jurisdiction now.
It might be a device you bought in the UK or the EU,
but you're now using it in the US.
So all of those things that we said we weren't going to do,
we're now going to do with your data or collect stuff from you.
And then when you leave the country again, they stop doing it.
Could it get as complicated as that?
You could just use a VPN, couldn't you?
You could, yeah.
Use a VPN.
I think this is a good point possibly to mention, though, that the Act does address innovation.
There is a requirement or a paragraph about sandboxes for regulatory compliance.
They recognize there is a burden to this and they want to encourage innovation.
So they are providing things
like an AI center of excellence.
They're doing the digital innovation hubs.
But most importantly,
they're providing these sandboxes
or mandating that we should set up these sandboxes
where you can test your AI for regulatory compliance.
I don't believe one's been set up yet,
but I do hope to see one soon.
And that for me would be great.
An area where I could develop an AI, whether it be high risk or lower down the food chain,
and then putting it in and seeing if I meet compliance.
So I don't think it's lost on the regulators that you could hamper innovation by regulating.
So they have addressed it.
You know, that's a great point, Chris.
Ultimately, we already have that kind of cross-border complexity. We live in a world where products do cross borders, either because we citizens take them with us or because they get
exported under one regulatory set of conditions and really ought not to be in that other environment.
And then it comes down to market surveillance, law enforcement, and that is, I wouldn't call it patchy, but it is not a given that something would automatically be enforced
just because it in principle doesn't comply.
So imagine the complexity of checking everybody's phone
when you cross a border.
I mean, it's bad enough getting your passport sorted,
let alone having your technology examined
for regulatory compliance at point of entry.
Can you imagine how long the queue at JFK would be? Do you know what, funny enough, I was actually traveling, as you both know, I was
traveling last week for Insight. And when I landed at LAX, funny enough, I went through that route,
I walked through global entry and didn't take my passport out. I didn't, I literally looked at the
camera and it went, you're authorized. And as
I walked down past all the different booths, there's a guy at the end, a security guy, who called
out my name and said, hi, nice to see you again, and I just walked through. So somehow, somewhere in the
past, I agreed, who knows where I signed, you know, to say that I agreed that they could use my
image, to the degree that they're now able to profile me and know it's me coming into the country and just let me straight through.
Risk return, Chris, risk return.
Do you know what? I looked at it and thought, I'm perfectly happy with it. I don't mind. But,
you know, where else are they using that that I don't know about? But, you know,
it does show you that different places do have a different sort of quality and standard as to the
way they're applying things.
But imagine, Chris, if you kind of streamlined your passport process to that degree of satisfaction, then found you had to do a technology
check instead.
Yes, that would be really worrying, because I carry a lot of
technology with me. Okay, all right, so let's move on and let's talk about the technology side,
Adam. And, you know, because most of the people who are listening to this podcast will probably not be
burdened with the idea of having to think about the regulation from a higher level perspective, but certainly might be charged with being in charge of, say, the technology like storage systems or, you know, doing data management or even some of the compliance and data cleaning processes that people need to do.
So how should the IT professional be looking at this
and what should they be thinking about?
Yeah, I think it's a great question
because a lot of the processes and tools we have already apply to this.
So it's not like we need to really go out and invent lots of new things.
And I think the value really here is the data
and everyone it touches along that chain of data creation to data ingestion to data being used in a model or such.
So we need the usual suspects, if you will.
But we also need to include people who we haven't traditionally included too, such as our C-level.
And now we're seeing a lot of ethics people being involved too. I always think of Jeff Goldblum in Jurassic Park where he says
we were so preoccupied with whether or not we could, we never stopped to think we should.
So when we're developing AI, maybe we should ask ourselves, should we do this? But then a more
operational level, practical level, I need to involve my data management team because AI is
built on data. I need to make sure, as you said, it's cleansed properly and we're using our data center tools at NetApp there
to clean my data because as we've proven,
bad data in equals bad answers out.
And compliance teams, I'll be involving my compliance teams.
And more importantly, I'd be looking at my resilience
or my operational resilience teams.
Because if I'm developing an AI
and it's in the high-risk category
and it's going into an area which affects people's lives,
I want a backup.
Whether that be something like a pen you can fall back onto
or our traditional IT systems
where we run things in dual parallel
and we have another system completely to fall over to.
But all the usual places I need to be talking to
and involving them with a few extras,
such as my C-level and maybe the new role,
which I've seen a lot recently,
which is the Chief Artificial Intelligence Officer.
I think that's a new title I saw sort of kicking around on LinkedIn about a year or so ago.
But all those areas, we have tools already, particularly at NetApp, in the kit bag we
can use to help.
We're going to come and talk about your technology in a second, but let's just finish up on the
act and just ask one final question, Jefferies.
Do you think that businesses should be demonstrating their compliance?
Should we have a badge scheme that says, I've gone through the process and I've been checked out and I actually comply?
By the way, I always thought that might be a great idea for ransomware, then I thought it was a terrible idea, because I thought if you had something that said, yeah, we follow proper ransomware routines,
there's a good chance that the hackers would target you because of that. So should companies
be demonstrating their compliance and talking about it? Is that an important thing to
do, or is that, you know, a red herring?
No, I don't think it's a red herring, and I think your
accreditation in cyberspace example is a good one, in that,
you know, there is certification, there is accreditation, and those are badges of capability,
but they're badges that will then set you up for attack. And we know that world very well at
NetApp on our cyber technologies, and we have the certifications and accreditations, etc., etc.,
but I think you know in answer to your question,
I do think that the AI Act and the regulatory world that is being developed around AI does envisage certification and accreditation as part of the overall matrix. It may not come exactly in
the AI Act itself, but it will come through the specification processes that we're
talking about. You know, at the end of the day, that hierarchy that goes through legislation,
regulation, standardization, then kicks in at a kind of certification accreditation level to badge
people who do actual things in spaces that are very specific, and they could be sectoral, or they could be technology,
resilience related, etc, etc. And there's plenty of accreditation and certification activity
underway in that broader regulatory framework that's being established, whether it's cyber or,
or data, etc, etc. So, you know, that's a matrix, that's a kind of an area that is developing, but always develops.
And so in the AI space, you can be sure that there will be related accreditations and certifications
that are developed as we become clearer on what kind of the differences if you want between skills,
technologies, applications, etc, etc. But there is a role for that in short in answer to your
question. Adam, I just wanted to finish that off, by the way, before we move on and talk about what
NetApp can do. And go back to what you just mentioned about the low risk stuff. I think
perhaps, when we talk about this, it may be a good idea to give people a couple of
good examples, because I always think examples really help sort of qualify what that means.
So what examples could you give in that situation?
So this is, yeah, a great question,
because I think it affects the everyday user.
Now, there are some requirements around here,
around things like generative AI,
which we already have out in the market.
So, for example, if you're creating images,
you have to disclose that that image is AI generated.
And this is where, if you could see,
and we could all see each other, you would know that I'm bald.
And what I tend to do is put a little image up of me with hair and make a joke
saying that if I made that image, I'd have to label it as fake. But we also
need to design these models so they don't generate illegal content. That's quite
an obvious requirement, but we need to put guardrails in to stop it from doing that.
And we also need to publish summaries of our copyright data that we used for training.
So if you are developing something that would use any of those features
or would possibly be susceptible to those,
then you need to put some guardrails in and publish those things.
They seem pretty common sense, really, you'd think.
That's what I thought too.
They seem like common sense.
And I personally would like to know if the image is generated.
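As a concrete illustration of that disclosure point, here is a minimal sketch that embeds an "AI generated" label in a PNG image's metadata using Pillow. The key names and workflow are assumptions for the example only; this is not the marking mechanism the Act prescribes, and real systems would lean on recognised provenance standards rather than a home-grown tag.

```python
# Minimal sketch: attach a machine-readable "AI generated" disclosure to an image.
# Key names ("ai_generated", "generator") are illustrative assumptions.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def label_as_ai_generated(src_path: str, dst_path: str, generator: str) -> None:
    img = Image.open(src_path)
    meta = PngInfo()
    meta.add_text("ai_generated", "true")   # simple disclosure flag
    meta.add_text("generator", generator)   # which model produced the image
    img.save(dst_path, pnginfo=meta)

# Hypothetical usage:
# label_as_ai_generated("portrait.png", "portrait_labelled.png", "example-model-v1")
```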
I specifically think of an example here of my mother, who's 70, and she has access to, you know, Facebook and things
like that. She's a very technology-versed 70-year-old, but sometimes she can be caught out by
images, and she can be caught out by talking to a chatbot. So I want her to know what she's
interacting with. So I agree with it.
Yeah, good. Okay, all right, so let's talk about what NetApp
can do. So I'm just going to wind ourselves back a week. And we saw some really interesting demonstrations
on stage from the senior execs, where they were showing how your AI engine now, as they call it,
could be doing various different things. And that's great from a technology perspective. But
a lot of what we've talked about here is, you know, is regulation, the standards,
is process, you know, what does NetApp offer to help
customers with their AI strategy?
Well, this is my favorite bit, if I'm honest with you. I love talking
about this. So, um, there are some really cool bits in the articles, for example things like
Article 12, and what I like to do is read through them and then look at what technologies we can
align to meet these requirements because they feel very burdensome, but we actually have stuff already. So I'm just going to dip into article 12 here,
which is record keeping. And again, this applies to high risk AIs. So if you think back to those
categories I described earlier, we need to technically allow for the recording of events
of the lifetime of the AI, and we need to ensure a level of traceability. And we also need to record our input data
and identify a natural person
authenticating those results.
Now, that all sounds nebulous.
But what it's calling for there, in my opinion,
is if we are building a high-risk AI,
let's talk about trains,
if it's going to route trains dynamically
for busy areas and such,
and there's an issue, I need traceability.
I need to know why it made that decision. But also, if it's a critical decision which could affect people or
infrastructure, I need somebody, an expert overlaying saying, I agree with that decision.
Here's my fingerprint. Here's Adam. Adam said, yes, let's route that train. So here's some
requirements. Now, what I would suggest here is this is where we use things like tamper-proof snaps or immutable data,
because we need to create these logs.
We know logs are a natural attack vector for cyber criminals.
They want to delete your logs so you don't know what they've been doing,
and it makes it harder to patch your holes.
So I'd make them immutable, and I'd also ship them off into Vaults.
If this is falling into the category of a healthcare provider and it's an AI which is generating results when people are asking some questions like a virtual GP, I need to keep those for a very long time.
So I absolutely want them putting into a vault, not tampered with or anything like that.
So there's some really good technologies we already have to bring to bear there.
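To make the Article 12 record-keeping idea more tangible, here is a minimal sketch of an append-only, tamper-evident decision log: each entry records the input, the model's output and the natural person who signed it off, and entries are hash-chained so any edit or deletion becomes detectable. The class and field names are illustrative assumptions, not a NetApp feature or a compliance recipe; in practice you would pair something like this with the immutable snapshots and vaulting described above.

```python
# Minimal sketch: a hash-chained decision log for a high-risk AI system.
# Field names are illustrative assumptions, not taken from the Act.
import hashlib
import json
import time

class DecisionLog:
    def __init__(self):
        self._entries = []
        self._last_hash = "0" * 64  # genesis value for the chain

    def record(self, input_data: dict, model_output: dict, approved_by: str) -> dict:
        entry = {
            "timestamp": time.time(),
            "input": input_data,
            "output": model_output,
            "approved_by": approved_by,    # the "natural person" sign-off
            "prev_hash": self._last_hash,  # chaining makes tampering detectable
        }
        digest = hashlib.sha256(json.dumps(entry, sort_keys=True).encode()).hexdigest()
        entry["hash"] = digest
        self._entries.append(entry)
        self._last_hash = digest
        return entry

    def verify(self) -> bool:
        """Recompute the chain; an edited or deleted entry breaks it."""
        prev = "0" * 64
        for e in self._entries:
            body = {k: v for k, v in e.items() if k != "hash"}
            if e["prev_hash"] != prev:
                return False
            if hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest() != e["hash"]:
                return False
            prev = e["hash"]
        return True

log = DecisionLog()
log.record({"train": "2B47", "platform": 3}, {"route": "via Stratford"}, approved_by="Adam")
print(log.verify())  # True; edit or remove any entry and this returns False
```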
And it doesn't just stop there. There are requirements for things like monitoring for anomalies and dysfunctions and unexpected
performance of your AI. And again, this needs to be overseen at the top level by a person.
How do you do that? How do you monitor these things for anomalies and dysfunctions? Because
we literally don't have enough people in the industry, we don't have enough cyber professionals and cybersecurity experts. This is where I think we need to use AI to,
ironically, protect AI. And we would use things like our user anomaly detection or Cloud Insights and
such, and we monitor workloads, we monitor what people are doing. And if nefarious actions take
place, if I'm Adam and all I do is create PowerPoints and talk on podcasts,
but all of a sudden I started deleting financial records, that's not Adam's normal pattern of behavior.
I'd look for that.
If someone started poisoning training sets, that's not normal.
We'd flag that up and we would do things automatically.
Because again, human beings don't respond quick enough.
We have to use the technology.
I could automatically lock a user out of my data, or do many things, and that would answer some of the requirements of Article 14.
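Here is a minimal sketch of the kind of user-behavior anomaly check being described: learn a per-user baseline of actions, then flag anything the user rarely or never does. The class, thresholds and action names are illustrative assumptions, not a NetApp product API.

```python
# Minimal sketch: flag actions that fall outside a user's learned baseline.
from collections import Counter, defaultdict

class BehaviorBaseline:
    def __init__(self, min_observations: int = 50):
        self.history = defaultdict(Counter)   # user -> Counter of actions seen
        self.min_observations = min_observations

    def observe(self, user: str, action: str) -> None:
        self.history[user][action] += 1

    def is_anomalous(self, user: str, action: str, threshold: float = 0.01) -> bool:
        seen = self.history[user]
        total = sum(seen.values())
        if total < self.min_observations:
            return False  # not enough history to judge yet
        # Actions making up less than `threshold` of this user's history look suspicious.
        return seen[action] / total < threshold

baseline = BehaviorBaseline()
for _ in range(200):
    baseline.observe("adam", "create_powerpoint")
print(baseline.is_anomalous("adam", "delete_financial_records"))  # True: never seen before
print(baseline.is_anomalous("adam", "create_powerpoint"))         # False: normal behavior
```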
So here's some really cool stuff that we should be doing. And there is one that I often like to call
out as well, which is the big red stop button. There is a requirement for a great big red stop button.
It actually doesn't say red, that's me adding that bit in there. But you can imagine it to be red.
They're usually red, aren't they?
They're usually red, aren't they?
To stop people from pressing them.
Yeah.
But unfortunately, when I see a big red button,
I just want to press it.
And it's to interrupt the system, to stop it.
And now you can obviously think why we'd need these.
But I guess the complexity comes here is
where is that built into the product?
Where is that built into everything?
Now you could logically use that at the NetApp level
and just pull the plug with us
and stop the things from operating.
What's really interesting to me
is talking about the implications of this.
If this was used in high risk infrastructure,
would we want a stop button?
How would we implement it?
I guess that's for another podcast though.
So some real good practical things we can do there.
And they're all particularly around security.
But one of the first lead-ins to the act
is around sustainability.
And we also have a very, very, very strong
sustainability message by being able to do things
like replicate workloads for training and such,
rather than actually make a complete copy of the data,
cutting down your carbon footprint
and many other use cases.
So that would be the second one for me. I think listening to what you've just said there, I'm thinking that there's a degree of doubling down on all of the things that you should have done
or should have been doing in your business in general with your data, you know, around that, making sure it's secure,
making sure that it's traceable and manageable and it's auditable and it's also
sustainable. All of those things that you should have been doing anyway, you're sort of doubling
down with it now with AI because you're going to have so much more data to work with. And
potentially we've got to make sure that data is being used efficiently and effectively.
So it sounds to me like it's taking a lot of the technologies and stuff that's already there. And
like I say, doubling down and putting a bit more effort into actually making sure that those
processes are accurate.
I couldn't agree more. I think we have the processes in place,
we have the methodology we've used before; it's just making sure it's implemented right for the
use case of AI. And we have other areas too. We have our NetApp GenAI toolkit, which lets you take
advantage of infrastructure in the cloud and things like that, and those are the real icing
on the cake, if you will,
bits that let us do some really fancy stuff.
So I would be looking to explore those
if I was a customer playing with AI, definitely.
But yeah, completely agree with your statement.
I think we have the methodology.
It's just getting it right and implementing it.
Brilliant.
Okay, so I think this has been a really good discussion.
It's really given me a bit of an insight
to understand what I need to think about
and get into a bit more
detail of understanding what it really means to comply with an act that really will be front and
center for all of us going forward in business. Where can I learn a bit more about this? So
Jeffries, where can I learn more about the EU AI Act and even download it, whatever it is I can do
to get my head around it? Well, you'll be pleased to hear that the commission is actually very good
at providing guidelines, guidance, et cetera, et cetera.
So very easily searchable under things like AI Act, Google,
the commission's guidance on it is pretty good.
It tries to be as straightforward and non-jargon ridden as possible
and quite often succeeds quite well. So that's one thing. There's lots of information. And I think
Adam would say, read the act, don't have to enjoy it, but it isn't impossible. It is not impenetrable.
You don't have to be a technology geek to get what they're talking about. I'm taking Adam's
words out of his mouth, but he would say that he enjoys it. I don't mind looking at it from time to time either,
but have a look at it and read it, get familiar with it. And that will take you a long way.
And Adam, what about NetApp's offerings? I mean, you talked about what they all are.
Should we just say NetApp.com or just, you know, there are more specific
routes you think people should follow?
Well, we do have NetApp.com backslash artificial hyphen intelligence.
Probably just easier to Google NetApp and artificial intelligence
and then just click on the link, the first one that comes up on Google.
But you can also reach out to me or any of the team
or reach out to your account manager.
And we have a really fascinating Get the AI Handbook on our website.
But again, feel free to reach out to me
or any of the team.
We'd love to talk to you.
But I would start with the netapp.com
artificial intelligence.
Lovely.
Jefferies, Adam, thank you for your time.
This has been a really, really good discussion.
It's really given me that entry level into this,
and I'm sure it will for all of our listeners too.
So thanks very much for your time
and look forward to catching up with you about this in, you know, another 6-12 months, once we know how things
are evolving. Maybe we can do a review again and come back and see how it's turned out. But for now,
thanks for your time.
Thanks, Chris, great. Thank you.