Not Your Father’s Data Center - Ethics of AI with Peter Judge
Episode Date: August 23, 2021

When the world thinks about ethical AI, thoughts wander to movie portrayals of intelligent machines used for nefarious ends. In the real world, that's still a possibility, but many point to AI as the future of understanding data. Sharing his thoughts on the subject, Peter Judge, Global Editor of Data Center Dynamics, joined host Raymond Hawkins on Not Your Father's Data Center. Judge recently authored a feature story in the publication about AI and its problems. Judge explained the concept of ethical AI: "When you have a system that can calculate and think, you have to understand the algorithms behind it and why it's coming to such conclusions. It's as much about the people using it as the AI itself." AI isn't going to produce "wrong" conclusions, but it may not answer the questions intended. "The ethical part is about how it's being used, not the technology," Judge said. He pointed to the example of lung X-rays used to determine which patients had COVID. "A professor looked into this, and the AI couldn't diagnose because the training data wasn't accurate, and there were other issues." The promise of AI was to churn through mountains of data and deliver answers, but AI does what it's programmed to do. It's not sentient, so the opportunity for human error is abundant. "Increasing the size of the haystack doesn't mean you'll find the needle," Judge commented. The other area of ethics is deployment for good or possibly evil. Judge noted the example of AI analyzing online interactions and activities, then using what it learned to target advertising in ways that fall much more on the unethical side. AI is important to the future, but the humans programming and using it are still on ethically shaky ground.
Transcript
So welcome listeners to another edition of Not Your Father's Data Center.
I'm Raymond Hawkins, recording here on, I think, the 11th of August.
Actually, no, it's the 12th. Sometimes I lose track of the days.
Here the 12th of August in our Dallas headquarters, and we are joined today by my friend and now repeat guest, the global editor of Data Center Dynamics, Peter Judge.
Peter, last time we did this, I didn't get to see your face.
So this is a thousand times better.
Peter, thanks for joining me again.
Yeah.
Okay.
So when you ask for the next one to be audio only, I'll understand.
That's right.
I know I qualify as having a face for radio.
You, I think, are a little better off.
But no, we're super happy to get to do this.
And our listeners, I think we definitely see higher engagement now with these video recordings.
So good stuff, and we're glad to do it.
So, Peter, for folks that weren't with us, I think you and I recorded our first session together almost a year ago. I think it was last summer. I should have looked it up, but I think
it's been about a year. Do you remember the date? I don't. I know it's last summer, I think.
Yeah, that's right. We were in the middle of a pandemic and sitting at our desks and
so much has changed since then.
Yes. Yeah. A lot. Yeah. I was just going to say a lot has changed, except that not much.
Yeah.
The world understands the pandemic a lot better.
But man, I never thought the world would look this way.
It's been an incredible change.
But if you don't mind, Peter, before we talk about things that you know really well and areas of your expertise, I'd love for folks to get to hear a little bit about you. I know for some of the folks that didn't hear the first time, do you mind giving us two or three minutes on just you? How you got in the publishing
business, how you got in the writing business, how you got in the business to write about technology,
and then how you got in the data center business, and then a little bit about where you're from.
So I think our listeners always enjoy hearing the who before they hear the what. Do you mind giving us a little Peter Judge background?
Yeah, that's fine. I studied science at university, and I also did a little side degree in art after that, but found myself afterwards just falling into a job as a technical editor and writer, round about the time that PCs and then the internet were appearing. Essentially, from then till now, it's been a continuous process of seeing everything coming, cycle after cycle of change happening, so there's always more to write about and understand: network security, and now the data center and digital infrastructure revolution. It's kept me busy, interested, and engaged so far.
And I don't see that changing anytime soon.
As we do in every episode of Not Your Father's Data Center,
we have trivia questions.
So please feel free to email your answers.
You can email me directly at rhawkins at compassdatacenters.com.
Feel free to answer.
Everyone who gets the answers correct will win Amazon gift cards.
So mail in your answers.
And the first one is, all of them this time are multiple choice.
The first one being in what year was the Turing test proposed?
A, 1948; B, 1949; C, 1950; or D, 1951.
Those are your multiple choices for what year was the Turing test proposed.
Question number two.
What was the name of the first computer system to beat the world chess champion?
Was it A, Big Red; B, Yellow Dog; C, Green Machine; or D, Deep Blue?
Trivia question number two, what was the name of the first computer system to beat the world chess champion?
And for our third question, according to Statista, how big will the AR market be by 2025, just four years from now?
$126 billion, $221 billion, $301 billion, or $350 billion?
According to Statista, how big will the AR market be in 2025?
$126 billion, $221 billion, $301 billion, or $350 billion?
Email your answers to me at rhawkins at compassdatacenters.com.
We look forward to hearing from you.
And as always, we are super grateful for Peter Judge joining us. So Peter, let's take a minute.
You recently wrote an article about AI. I'm sort of jumping subjects here for a second,
if you're willing to talk a little bit about AI with us. You hear this term ethical AI.
I think there are questions about what we get. There are certainly lots of movie portrayals that come to mind. I think of maybe some classic examples. What was it? The precogs. I'm trying to think of the movie with Tom Cruise where they tried to anticipate when people were going to commit crimes. That was all built around some sort of artificial... Minority Report. That's it.
Thank you for helping me remember. You have these movie images and then iRobot with all kinds of
intelligence inside the robots that I think Will Smith battled out. So there's these movie images
of what AI can be. But can you take a few minutes, give us a few minutes around the idea of what ethical AI is and how we should think about it? I think of Deep Blue winning at chess. You're much more expert in this than me.
Well, I think the first thought is that when you have systems that can, to some extent, calculate and think, and they do it fast, you reach a slightly undetermined area where you're not sure what response you're going to get back and where it's coming from. So it's really important to understand the algorithms that are behind it, to understand why they're coming to the conclusions they are.
And I would have thought that ethical AI is as much about the people that are using it as about the AI itself. We're not so much worried that AI systems are going to come to wrong conclusions; what we are worried about is putting them into positions of responsibility that they aren't ready for. It's more about the uses of it rather than the technology itself. I suppose it's a bit like when people say guns don't kill people, it's people using them. It's the same with AI. It's not even important to us to know whether it's thinking or not. If you've got a technology that can do something, you need to be sure that what it's doing is what you want it to do.
So, Peter, could you give us a little bit?
So I'm in complete agreement.
I think you're absolutely right.
Could you give us a minute?
I think there's two ethical questions in AI.
To your point, we program the algorithms, right?
We meaning humans, right?
There's some assumption or some thought that is written into the algorithm, some hypothesis,
right?
And that is an opportunity for an ethical question.
And then how that AI gets deployed.
Are we using it to help us figure out how to cure cancer? Are we using it to find criminals' faces as they go through airports? Or are we using it for some nefarious thing? So I think there are two different ends of that ethical question. Who built the algorithm and why? What's the hypothesis behind it? And then how do they intend to deploy it? Those are the two I hear you describing. Did I describe that accurately, Peter?
Yeah. I mean, what we design the algorithm to do,
the problem there is our understanding, really,
of what we've asked it to do.
So, I mean, a lot of algorithms are just so poorly designed and thought out that they don't actually do what we think they're going to do.
So earlier this year, there was a lot of talk, it was really topical, about how we might use AI to diagnose COVID or to evaluate treatments, because even at quite a simple machine learning level, it can examine all the data you've got and pick out patterns you haven't otherwise thought of, or examine a whole lot of chest X-rays and see which people have got COVID or not. And there was a professor who asked, well, how well is this going? He looked into all the AI projects that had been tried for helping with COVID and found that literally none of them had helped. Some of them were even actively unhelpful. A lot of it's just down to thinking things through before you start.
One AI project thought they were really on to something. They were showing it chest X-rays, some of them from people that they definitely knew had COVID and others from people who definitely didn't. The only thing is that the people that didn't were younger than the people that had it. Within the training data they'd got, their system was really good at picking out the ones that had COVID. But when they took a completely different set of data and looked, it couldn't tell the difference between COVID and not COVID. All it could tell was the difference between young and old. So if you're not sure of the data you're feeding it and the way in which the algorithm's looking at it, you may think you've got an answer, but you're not getting the answer that you think you are.
In the same way, a lot of AI systems dealing with facial recognition and so on are inherently biased by race, because they're trained on data coming from white people and just don't know what they're doing when they're outside of that. And if the researchers building those systems don't spot that, because it's not something that they're aware of, it'll just get baked into the system, and you'll end up with racist AI, not because AI is a bad thing, but because people didn't realize that that was a possibility when they were making it.
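To make that failure mode concrete, here is a minimal sketch, entirely hypothetical and not from the episode, of the shortcut Judge describes: a classifier trained on data where age happens to track the COVID label looks accurate in training, then falls apart on fresh data where that coincidence is gone. The dataset, features, and numbers are all invented for illustration.

```python
# Hypothetical sketch (not from the episode): a classifier trained on
# confounded data learns the confounder, not the condition -- the
# "young vs. old" shortcut described in the chest X-ray story.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_data(n, confounded):
    """Synthetic patients: y = 1 means 'has COVID' (invented labels)."""
    y = rng.integers(0, 2, n)
    if confounded:
        # Training flaw: the COVID patients just happen to be older.
        age = np.where(y == 1, rng.normal(65, 8, n), rng.normal(30, 8, n))
    else:
        # Fresh data: age no longer tracks the label.
        age = rng.normal(50, 15, n)
    # A genuinely informative but noisy feature (e.g. a lung-opacity score).
    signal = y + rng.normal(0, 2.0, n)
    return np.column_stack([age, signal]), y

X_train, y_train = make_data(2000, confounded=True)
X_test, y_test = make_data(2000, confounded=False)

model = LogisticRegression().fit(X_train, y_train)
print(f"train accuracy: {model.score(X_train, y_train):.2f}")  # near-perfect
print(f"test accuracy:  {model.score(X_test, y_test):.2f}")    # barely better than chance
```

The model does answer a question, just not the one its builders thought they asked; only evaluating on genuinely different data exposes the shortcut.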
Yeah, and that's something that applies to all sorts of technology. I mean, it doesn't have to be as complicated as AI. If you've got a pulse oximeter, something that measures how much oxygen is in your blood, that could be a really good early warning sign of whether you've got COVID, because your blood oxygen level goes right down. However, these things have been on sale for 20 years, and they're really relied on. And it's only now that people are realizing that they don't work properly if your skin's dark.
Ah, yeah.
So, you know, it's, again, like any technology, if you don't understand the problem you're trying to solve and understand the setup and the limitations that you've put around your system, you get something that's not going to do the job. It'll answer the question you asked,
but the question you asked may not be the question you thought you asked.
Right. Yeah, I was reading your article in June about AI and you talked about,
hey, there's this promise we can churn through these mountains and mountains of data.
Well, churning through mountains of data doesn't inherently change your hypothesis. It doesn't inherently change your understanding.
Just because you can churn through a bunch, if you start with the wrong assumptions,
you're going to end up with the wrong conclusions is what I think I hear you describing, right?
Or with the same inherent mistakes. Yeah, yeah. So yes, I mean, simply piling more data into something, you know,
increasing the size of the haystack doesn't make it that much more likely that you're going to find
the needle. Or does it change the fact that it's hay? You're right. That's right. Yeah.
Look, it's a big mountain of hay, it's something different now. No, no, it's still a lot of hay, and we're still looking for an answer.
Yeah. So, you know, maybe like other technologies, we're still at a stage where it's been oversold, and we're having to properly evaluate how useful it is without junking it altogether.
Well, I was not aware of the two examples you gave.
Those are great.
The oximeter that we clip on our finger and also the chest x-ray.
Just assuming, hey, we've got mountains and mountains of information.
We're going to come to a better conclusion.
Just not necessarily.
There may be patterns in the information you didn't know about.
Yeah.
And then, I mean, those are the accidental misapplications.
You get applications which are questionable by their nature.
It's possible to use analytics to find patterns in people's social activities online and then use those to interfere with things.
You know, it's documented that that's been used to influence public votes in your country and my country.
You know, that's an example where AI isn't broken.
It's actually doing something quite effective.
It's just that we're not that happy with what it's done necessarily.
Yeah, we're not quite certain that's what we want done with it.
Right, right.
Yeah, I think that's one of those where technology often outstrips what we think we need to do legislatively, right?
We're not sure quite yet how to contemplate the notion that in my country, hey, all speech is protected.
Okay, what qualifies as speech, what qualifies as protected, and how does it get managed? Is all electronic interaction speech? Those are just questions that we're not yet ready to solve legislatively, I think.
Yeah. Interesting. Those are all great examples, for me, of how just applying technology didn't necessarily make things better.
Yeah. I mean, in most cases you can see that, yes, you can pick up a smartphone and realize you would rather have it than not have it.
But there's a lot more nuance to the situation than that.
Yeah, I was having a conversation with a friend of mine this past week and thought, as I look at the younger generation, such a high percentage of their interaction with their friends is digital, through this tiny little screen. And they didn't learn to read body language and the social cues for when I'm saying something and it's making you uncomfortable, or when you're ready to interject and I should pause
so you can speak. All of that subtle stuff that you learned by just physically being around other
human beings. I'm interested to see as this generation
that's had so much of their social interaction be digital,
how they integrate into the professional world.
How do they map back to when they've got to go to an office
and interact around other people
that aren't their close friends
and read those social cues and communicate effectively?
I wonder how, because your point,
there's so much greatness in this phone,
but there's also, I think, things that are changing how we interact.
Yeah, and that moves to another conversation I've had this week, really.
We're all doing so much more through conversations like this, online, through a screen, during COVID. But it's really just an acceleration of what the younger generation is going through and what society as a whole is going through. There are some instances there where I think this is being oversold and over-promised.
People expect to be able to turn the whole justice system into a sort of remote video interaction. And there's a lot of utopian thinking going on there that makes people think, yes, it's going to be great. But actually, check the statistics. If your life's a bit more chaotic and you haven't got a space to do this, you're not going to come across well in a video conference. It's been shown, for instance, that people who are in difficulties and are at an employment tribunal will win their case half the time if they're there in person. They'll win their case 14% of the time if all the judge sees of them is this two-inch face.
It's a lot easier to dismiss people.
It's a lot easier to not really respond to the person you're seeing.
It's a lot easier to convince yourself that you don't need to reach out to them
and understand them if they're tucked away on screen like that.
A two-dimensional tiny view is not the same as you and I sitting across the table from each other.
It's just not the same.
Yeah, and in a more extreme version, when people interact with people over Twitter and social media,
they can be completely inhuman to them because they stop seeing them as human.
Yeah, yeah.
I think of that phrase, the paper tiger, right?
Someone that's willing to write you a nasty letter because you're not here, right?
And that's an old term.
And what we've gotten in Twitter is uber paper tigers, right?
Not only am I mailing a letter to somebody I've met, no, I'm not even doing that. I'm sending a 140-character ugly message to someone
I'll never see. And it brings out, I think in many ways brings out the worst in us. Yeah.
Hear, hear. Peter, thank you for joining us. And thank you for spending a little time recording
here on Not Your Father's Data Center for our folks to listen to you again. You got rave reviews the first time, so we're happy to have you back.
Thanks, Raymond. Pleasure to be here and see you next time.