Utilizing Tech - Season 7: AI Data Infrastructure Presented by Solidigm - 3x23: How Algorithmic Bias in ML Affects Marketing
Episode Date: February 21, 2022
As machine learning is used to market and sell, we must consider how biases in models and data can impact society. Arizona State University Professor Katina Michael joins Frederic Van Haren and Stephen Foskett to discuss the many ways in which algorithms are skewed. Even a perfect model will produce biased answers when fed input data with inherent biases. How can we test and correct this? Awareness is important, but companies and governments should take active interest in detecting bias in models and data.
Links: "Algorithmic bias in machine learning-based marketing models"
Three Questions:
Frederic: When will AI be able to reliably detect when a person is lying?
Stephen: Is it possible to create a truly unbiased AI?
Tom Hollingsworth of Gestalt IT: Can AI ever recognize that it is biased and learn how to overcome it?
Guests and Hosts:
Katina Michael, Professor in the School for the Future of Innovation in Society and School of Computing and Augmented Intelligence at Arizona State University. Read her paper in the Journal of Business Research. You can find more about her at KatinaMichael.com.
Frederic Van Haren, Founder at HighFens Inc., Consultancy & Services. Connect with Frederic on Highfens.com or on Twitter at @FredericVHaren.
Stephen Foskett, Publisher of Gestalt IT and Organizer of Tech Field Day. Find Stephen's writing at GestaltIT.com and on Twitter at @SFoskett.
Date: 2/21/2022 Tags: @SFoskett, @FredericVHaren
Transcript
I'm Stephen Foskett.
I'm Frederic Van Haren.
And this is the Utilizing AI podcast.
Welcome to another episode of Utilizing AI,
the podcast about enterprise applications for machine learning,
deep learning, data science, and other artificial intelligence topics.
We've spoken at length at previous episodes about
bias and the fact that data can lead machine learning models, well, I guess,
off the tracks from what we expected. Frederic, there's so many angles to this.
Right. I agree. I mean, it all starts with bias at the data source and it ends with where the models are being used. And I think it's important to understand what the different behaviors are and what AI can or cannot do for you. I mean, there's no doubt that in the future, AI will be used to influence people, be it from a marketing standpoint, political standpoint, or financial
standpoint. And I think that's a good topic to have a conversation on.
Yeah, absolutely. And that's one reason that we decided to invite in somebody who's,
well, frankly, an expert on a lot of things. And that is our guest today, Katina Michael,
a professor in the School for the Future of Innovation in Society. And it is so
wonderful to speak with you. You've written about so many great things. Welcome to the podcast.
Great to be with you, Stephen and Frederic.
So tell us a little bit about yourself, about your background and your wonderful CV? Well, I'm an academic at Arizona State University. I have a joint appointment in
the School for the Future of Innovation in Society and also the School of Computing and
Augmented Intelligence. So it bridges the social sciences, the science and technology studies field
with hardcore computer science and engineering. And I direct a center there
called the Society Policy Engineering Collective, and I've introduced a new
degree while being at ASU. The degree is a Master of Science in Public Interest
Technology, and so it's a hybrid. We bring people in without backgrounds in
technology and with backgrounds in technology, and then put them through a rigorous program where they learn about what is the public interest and how does technology deal with the public interest.
How can we better design through processes of co-design and how do we engage the public while we're doing things like technology impact assessments?
So it's a very fascinating new field.
It's really fascinating to me, especially.
I didn't mention this to you previously, but I have a very unusual degree.
I studied or majored in society and technology studies,
which actually sounds an awful lot like what you're talking about there as well.
And frankly, we can learn so much about the ways in which technology directs us
inadvertently, even though we think we're directing it. And one of the things that you've
talked about quite a lot is the way that new technologies are used to influence human
behavior. And the topic of one of your areas, one of your recent papers, of course, is the idea that
algorithmic bias in machine learning can affect marketing. Tell us a little bit more about that.
We build our industry today on algorithms, systems and systems of systems. And the way that
we get data today is much more powerful. We have more storage
capability than ever before, more processing power, mobility and social media platforms have
added to this capability. But now we're starting to look at how we can harness this treasure trove
of data that resides in a distributive manner all over the world through the internet. And many organizations
are looking at structured and unstructured data to inform their decision making, to make better
dynamic managerial capabilities possible. But also what we're seeing is possibly the misuse
and sometimes deliberate, sometimes accidental of these large data sets being poured through these
new algorithms and new methods using machine learning.
And at the end, what we have are real social implications, some of which create great benefit
for the individual and then some that create great harms.
Well, I think when we look at AI, as you say, it's all about data, right? So it's the ability to pick the right data sources, and to a certain degree, validate the data sources, because once the data gets into the model, it kind of creates the view of the model.
But in reality, it's all about data.
So when you teach your students about marketing and AI, and you say they don't necessarily have to have an AI background, how do you tie then the data source concepts with the
outcome of an AI model?
Well, we have various types of algorithms
where we inject different marketing models,
some of it unsupervised learning,
some of it supervised,
depending on what we want to achieve.
Are we building recommender systems?
Are we trying to forecast?
Whatever we're trying to do,
but in studies we've conducted,
we see about 10 different dimensions of bias and their sources. So the
three sort of major areas are design bias, contextual bias, and application bias. And we
can unpack that throughout this podcast. But then we look at the dimensions, the sub-dimensions,
and of course the four P's come into play, product, price, place, and promotion. But beyond that,
what are we doing in the model? What are we doing in the training data set? As you just noted, Frederic, the method
we're applying, cultural understanding, social awareness, personal, and beyond. And when we start
to look at the triggers for the sources of bias in AI slash machine learning, we start to see some powerful
patterns here. You just have to look at law enforcement databases of those who have been
incarcerated. And if we're going on historical data, we could find ourselves accidentally
identifying a suspect of a crime who happens to be black. We might be doing other things
based on those historical data sets and unfortunately incarcerating the wrong people,
which has happened in cases involving ML. But beyond that, it all depends on what we're trying to
achieve. And that comes down to the sources of the data that we're collecting, which we can't fully go into here.
We're starting to amass the ability to analyze images and video. And that's fairly unstructured
data. And 80% of our data sets are unstructured. So if we're using training data that we're not
really sure about the source or what the source means, or it's taken out of context, you can
imagine what potentially can happen.
But when you're thinking about what I call invisible misuses of technology,
why should somebody be on a waiting list in a hospital longer
because of their demographics?
Why should somebody pay a higher premium because they want to get a taxi
or transportation to a predominantly African-American geographic neighborhood? Why should women be targeted
and have credit and loan applications denied some 20% more often than men? Why can't women be CEOs
and Facebook get it right with advertising? Why should people who are searching for trips overseas
for a holiday be given different promotional pricing
based on whether they're Mac users or not?
This is ludicrous.
So we have to correct the biases that are inherent in the data sets,
but also that we're building into models either from stupidity
or deliberately.
Yeah, so the question I have is, how do we do this? Is it teaching the humans to include and
exclude the right datasets and to create newer datasets that fit more the ethical profile than,
I mean, I'm kind of fishing here, but I think, you know, from my
knowledge, you know, analyzing data is not easy, right? So, and it has to start at the human level,
because when you talk to an enterprise, there's always the quantity versus the quality, right?
And the quality data is the data that is non-biased and so on. There's still this question of whether quality is more important than quantity, and
that kind of causes this whole problem, because the quantity kind of forces people deliberately
to include data just so that they have enough data for a model, while in reality they're better
off starting out with quality data, putting the model in production, and then just having the AI model
learn. And so you can create more quality data. That's fascinating, Frederic. I mean,
in a recent article we've written, we've just said exactly that, better data than more data.
But you see what happens when we have these high volume data
sets now available to us and we can web scrape, for example, even though web scraping is an act
I consider to be unethical, because you haven't directly received the consent of the person who's
taken the photograph or is in the image or has generated the data, e.g. the number of likes we
press on Facebook posts or Instagram.
But we're tempted.
The volume is always a temptation.
I've got another data set, yet another data set.
I can do this.
I can find out this.
And we've got the processing power now.
We've got things in the cloud.
We've got these special software applications we can run.
And so we are always being tempted. And I think you allude to a very interesting dilemma here.
We start with who we hire in our organizations.
If your group is not diverse, most likely your algorithms are not going to take into consideration diversity. If you're skewing data sets purely from historical information already, you know,
you're on a back foot there. If you're targeting particular sociocultural groups, either for or
against, okay, well, obviously, we know what's going to happen in the algorithm output. But here,
we can actually introduce quality controls. And a lot of it has to do with testing.
Why aren't we testing our AI-based algorithms or machine learning algorithms more? Why are we
saying it's unknown knowns? You know, most likely, we sort of may not know why the answer is what it
is. We can sort of speculate why we're coming up with
these outcomes depending on the type of algorithm we're using. But my question comes back to,
where's the testing? Where's the validation? Where's the actual brains in the design of the
algorithm? And we're finding that people are very sloppy in even very simplistic design. So you've
got this very complex training data set, huge. You've got these complex design
principles, and then your model just basically is crap. So there's a lot of things when we look
at the design aspect, the model, and the data, when these things actually are in unison,
that's a good thing. It seems to me that that really is one of the keys, is to consider that all three of those things work together to bring a result.
And I think that sometimes people have this assumption that a pure model,
if fed proper data, will be good, and if fed improper data, will be biased.
But all of these things matter, and the application is equally important. You have to think about why
you're using this model in this case. But one of the things that strikes me is, you know, in answer
to your rhetorical question of why aren't we doing this? Why aren't we testing and why aren't we
considering it? I don't want to throw stones, but it seems to me, I guess I will, it seems to me
that there's this techno-utopian ideal, especially coming out
of Silicon Valley, that somehow computer systems are immune from bias. If we have perfect data,
and my algorithm is perfect, then it won't be biased. It can't be biased. There's no way it
could be biased. But yet that's actually
impossible because as you mentioned, for example, if you're using a system that looks at historical
loan applications to decide which loan applications to give in the future or which ones to approve in
the future, well, guess what? There's bias built in there. Even a perfect model is
going to result in biased answers. And I think that that's a very,
well, again, I feel like a lot of people in the sphere of developers and so on,
they wish that wasn't the case and they wish it would go away, even though maybe data scientists
and machine learning folks understand that this is the truth.
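To make the historical-loan example concrete, here is a minimal sketch of the kind of check that catches this skew: comparing a model's approval rates across demographic groups. The column names, the toy data, and the 0.8 threshold (the informal "four-fifths" heuristic) are illustrative assumptions, not anything taken from the episode.

```python
# A minimal sketch of a disparate-impact check on model outputs.
# Column names, the toy data, and the 0.8 threshold are illustrative assumptions.
import pandas as pd

def selection_rates(df: pd.DataFrame, group_col: str, pred_col: str) -> pd.Series:
    """Share of positive outcomes (e.g. loan approvals) per group."""
    return df.groupby(group_col)[pred_col].mean()

def disparate_impact_ratio(df: pd.DataFrame, group_col: str, pred_col: str) -> float:
    """Lowest group selection rate divided by the highest (1.0 means parity)."""
    rates = selection_rates(df, group_col, pred_col)
    return rates.min() / rates.max()

# Hypothetical predictions that simply mirror historical lending decisions.
preds = pd.DataFrame({
    "group":    ["A", "A", "A", "A", "B", "B", "B", "B"],
    "approved": [1,   1,   1,   0,   1,   0,   0,   0],
})

ratio = disparate_impact_ratio(preds, "group", "approved")
print(f"Disparate impact ratio: {ratio:.2f}")  # 0.33 for this toy data
if ratio < 0.8:  # the informal "four-fifths rule" heuristic
    print("Warning: approvals are heavily skewed toward one group.")
```

In a real pipeline this would run over the model's predictions on a held-out evaluation set, but the point stands: if the training data is skewed, even an otherwise "perfect" model reproduces that skew.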
I think, Stephen, I can point to some examples just to extend what you're saying. How dare we
confuse African-American people, in tags, with gorillas on search engines? How dare we?
How dare we, when we type in black girl, get pornography coming up?
How dare we have this unfairness in our algorithms to come out with things that are just
absolutely not on in society? This has continued to occur as well with Latino communities and other demographic and cultural groups, and there's racial bias now coming out in our algorithms, and someone's got to say why.
Why is this occurring?
Why do biometrics, for example, tell Asian people that they've blinked when they're taking a photo?
Why do some biometrics only work with white people and not black people?
Why are these controversies continuously coming up and causing harm to different cultural settings?
And when we talk about Silicon Valley and we talk about different places that have banned, for example, biometrics and AI, surprise, surprise,
it's the very places that develop the technologies that are banning the technologies.
That's got to tell us something. So in Boston, you know, a lot of biometrics are curtailed,
you know, in the public setting, in places in San Francisco. But these are the places where we generate the technology.
So what's going on here?
Is it that certain demographics are going to work,
building these applications, and then at home the rest
of the family is saying, but whoa, this is going
to cause a problem in our household.
This is not right for our privacy and our security.
But this is the anomaly, right?
So we're aware of the problems.
I'm not saying that large players, big tech,
are not trying to address this, but it's not fast enough.
You just have to do a simple search on a search engine
to realize what I'm saying is correct.
Yeah, I've heard it more than once that a technology company
says that the data is the single source of truth,
meaning that they believe the data more than they believe the human being.
And from that perspective, they will always go with more data.
I mean, the question I have is kind of twofold, right?
So there are a lot of models, AI models out there
that might not be biased, might be biased.
We really don't know.
And the people that created it probably don't know either.
So, I mean, the first question there is, what do we do with what has been deployed?
How do we correct it?
And then moving forward, how can we maybe not control, but validate, verify, and figure out if something is biased or not?
Because I think a lot of the AI innovation today
happens by startups.
And those startups,
they don't have a data person on board, right?
That validates the data.
They typically have somebody who knows
where they can get some data, use the data.
And it's only when the model is out
where it's really too late
and they have paying customers and then it
becomes more of an economic problem.
It's not an excuse, right?
I'm just saying, you know, how do we address what's deployed and how do we address the
foreseeable future?
Well, speaking of startups, Frederic, there are a number now coming up in the design justice
space.
They call themselves algorithmic auditors. They describe themselves as performing algorithmic
justice, ensuring explainability, and a whole host of other things to keep people accountable.
But increasingly now, you do have the defense forces, you do have national government agencies,
for example, in the US, who are seeking collaborations, because they know this is
going to be increasingly a major issue, because we're
talking not just about, you know, what you did on Instagram yesterday. We're talking about images
that are captured of your face on public CCTV. We're talking about the intonation in your voice,
right, your accent, your speed, that can denote whether you're healthy or not healthy. You know, there are certain
diseases, syndromes, for example, that you may not be aware you're living with, but it's all
on your face. And so there is nothing to stop us in the future from detecting someone with
Turner syndrome, Down syndrome, Noonan syndrome, and a whole host of other syndromes, which are all
musculoskeletal to an extent and visual, although you may not even know you have the syndrome.
A camera could tell you that. So I think the design justice space is ripe for new startups.
I think we have to go down this path. As you say, not all startups have the capability to hire someone in
the data space or the testing space. But I would say don't launch your products if you can't do
that part of the process. When, for example, I see Facebook washing their hands after publishing,
you know, I'm not saying directly, but publishing on their platform real-time murders,
and then allowing the redistribution of those murders as people, you know, download the videos
and repost, well, you shouldn't be able to do that. And if you can't stop it, then you've got
to think about what you're doing with your services. And it's a big call to make here, but
I can't say, even if something has been accidental, that the provider or the platform provider is not to be held accountable.
This is wrong.
And I think you're asking me how do we correct all this.
It's through collective action.
We've got to care as consumers.
Safiya Noble constantly talks about this in her talks on these kinds of technologies of oppression
for particular socio-cultural demographics. How are we going to root out racial bias and gender
bias and so many other demographic biases and isms if we are not actively, as a society, caring
enough? So to that point, for a search engine to synonymize pornography with a particular
demographic,
it means certain people have been searching this in the history of the searching capability,
but also that the algorithms are not being effective. And if they're not being effective,
they've got to be remodeled. Sorry, but that's just how it goes.
And it seems that one of the things I'm hearing from you, and I firmly believe, is that companies have to be proactive in this.
They can't just assume that they're going to have a clean data set.
They can't just assume that there's no bias in it. You have to look at it, and you have to consider it, and you have to think about where did this data come from? As you mentioned, a startup may get a data set, maybe a public data set, but they may not be aware of
where that data exactly came from and what they're working with as their input. If your public data
set came from Finland, then it's going to have a lot of different faces than if it came from
Nigeria. And that can very much affect the quality of the application that you're creating. And similarly, you also
have to then take the next step, which is to, as I think I hear you saying, is to proactively
investigate and search for these things. You can't just say, well, it's clean,
let's put it into production. You have to say, well, what happens? What happens if we look for
this? What happens if we look for that? Are we going to find biases? What if we get a diverse
set of people looking into the camera or speaking or typing? Is it going to detect things about them?
And I think that maybe that's one of those things, again, that people just aren't thinking about.
They think that we've got a clean data set, everything is looking good,
let's just put it into production. And again, I guess that's the part of the Silicon Valley
mentality is to deploy quickly and break things quickly and then learn from it and iterate from it.
But in many cases, especially when it comes, as you're saying, to things that involve marketing
and sales, it can deeply affect people's lives. And that is a problem. It's a real problem when
your new application is making decisions about people's lives based on incorrect assumptions.
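One concrete way to proactively investigate a data set before it goes anywhere near production is a simple representation audit. This is only a sketch under assumed names and numbers; the group labels, the counts, and the reference shares that stand in for the Finland-versus-Nigeria style mismatch are hypothetical.

```python
# A minimal sketch of a training-set representation audit.
# Group labels, counts, and the reference shares are hypothetical.
from collections import Counter

def representation_report(labels, reference=None):
    """labels: group label per training example.
    reference: optional expected share per group in the deployment population."""
    counts = Counter(labels)
    total = sum(counts.values())
    report = {}
    for group, count in counts.items():
        row = {"count": count, "share": round(count / total, 3)}
        if reference is not None:
            row["expected_share"] = reference.get(group)
        report[group] = row
    return report

# Hypothetical face dataset that is 90/10 while the deployment population is 50/50.
dataset_groups = ["group_1"] * 900 + ["group_2"] * 100
deployment_population = {"group_1": 0.5, "group_2": 0.5}

for group, row in representation_report(dataset_groups, deployment_population).items():
    print(group, row)
```

A report like this does not prove a model will be fair, but it makes the provenance question visible before training rather than after deployment.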
Yes, Stephen. And it's usually the vulnerable communities that continue to be held in a more vulnerable state because of lack of access to health, lack of access to financial credit, lack of access to transportation, lack of access to education and housing.
These are critical infrastructures in people's lives. And if our algorithms are biased solely towards those who have the propensity to pay, you know, if you have the money, then you get the health care, you get the transportation, you get the loan approvals.
What we're doing is we're discriminating against those who are less fortunate, who are in vulnerable situations.
And that just perpetuates an age old problem.
It doesn't allow people to thrive or
flourish. Yeah, I think you're onto something there. I'm not surprised with your science and
technology studies background. We've been talking about the need to inject humanities scholars,
social scientists in the engineering process, in the technological development process.
That's exactly what the Public Interest Technology Programme is about.
How can we be more human-centred and humane by embedding the humanities scholar
and the social scientists in this process?
But usually we just have technologists building technologies.
That's wrong.
We have to think more creatively about the place of science
and technology studies scholars, and that's why we exist as a school,
in the School for the Future of Innovation in Society,
to inject these kinds of brains and backgrounds
so that we can not have social biases,
so we can have better algorithms coming up,
that we can detect and pause and say, you know,
should that be the answer?
Or there's something wrong here. We can't use ML for this process. And sometimes you just can't.
Okay, when we're discriminating on skin tone, and gender, and claiming afterwards, oh, we didn't
know. Well, that's wrong. People know. So if it doesn't work, if the algorithm doesn't work for
everyone, then it shouldn't be in operation.
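A rough sketch of what "if the algorithm doesn't work for everyone, it shouldn't be in operation" could look like as a release gate. The evaluation records and the 5-point accuracy-gap threshold below are assumptions for illustration, not a standard or anything described in the episode.

```python
# A minimal sketch of a per-group performance gate before deployment.
# Evaluation records and the 5-point accuracy-gap threshold are assumptions.
from collections import defaultdict

def accuracy_by_group(records):
    """records: iterable of (group, y_true, y_pred) tuples."""
    hits, totals = defaultdict(int), defaultdict(int)
    for group, y_true, y_pred in records:
        totals[group] += 1
        hits[group] += int(y_true == y_pred)
    return {g: hits[g] / totals[g] for g in totals}

def works_for_everyone(records, max_gap=0.05):
    """True only if the best and worst group accuracies are within max_gap."""
    accs = accuracy_by_group(records)
    return (max(accs.values()) - min(accs.values())) <= max_gap, accs

# Hypothetical evaluation results for a recognition model.
eval_records = [
    ("group_1", 1, 1), ("group_1", 0, 0), ("group_1", 1, 1), ("group_1", 0, 0),
    ("group_2", 1, 0), ("group_2", 0, 0), ("group_2", 1, 1), ("group_2", 0, 1),
]

ok, accs = works_for_everyone(eval_records)
print(accs)  # {'group_1': 1.0, 'group_2': 0.5}
print("OK to deploy" if ok else "Do not deploy: the model does not work for everyone.")
```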
Yeah, I think it's a very interesting topic.
I mean, for us, the communities and the users, we are the most concerned about the bias.
And we are also the ones that want to make sure that there is no bias in an AI model.
So from a community standpoint, there's always only so much we can do. Do you feel that
the governments around the world are interested in bias and in making it work
with the communities? Or do you feel that the governments are kind of waiting for somebody else
to come up with a magic solution? I definitely think the governments are paying attention. We just had the
IRS in the USA say they'll stop using biometrics when you log onto their website. People are
paying attention and governments have to lead the way. I mean, we all have multiple life worlds.
We aren't just technologists at startups, for example. We're not all just professors at a university or
people in the media. We wear multiple hats and we could be the victims of our own creation. So I do
think government is understanding the importance of privacy, especially of the video, the voice,
the facial, the biometrics, those unique characteristics and our traits that cannot be changed. You know, I can't
really change my voice unless I employ various tactics. I can't really change the way I look
unless I get plastic surgery or I break my nose or put on weight very quickly. The systems detect
me otherwise. And we have to be careful on what we're doing on the web. And I think the government is well aware
that laws and regulations will be instituted,
not to stifle innovation,
but to ensure that people are protected,
to ensure fairness and equity in their society,
to ensure better governance of the big data.
So we can't continually claim, oh, we didn't know,
because we've got now the GDPR in Europe,
the General Data Protection Regulation,
and a whole bunch of other things coming through legislation, which is legislating against the
misuse of these kinds of machine learning capabilities. Right. And I think every government
is different, right? For example, if you go to London, you know, CCTV is all over the place, right? So, you know, being
anonymous in London is almost impossible, while in the United States, people would riot
if the same amount of CCTV were available as in London. And that's, I guess, we could call that
a government difference, right? The other thing I want to bring up is that there are, well, I wouldn't say
a lot, but more and more organizations scraping Facebook and LinkedIn for images and all that
stuff. And they build models that they pretty much use against you because they sell their products
to police agencies, right? So AI is not perfect, right? So you could always be picked up from the street because
somehow you resemble somebody that's in their database.
So with the national DNA database in the UK, I guess, you know, there are various forms of
biometrics. We've seen things go to the European Court of Human Rights. I actually spent a whole
year looking at the S and Marper case on the collection of biometrics.
But here, I'm not so sure I'm specifically speaking
about anonymity with respect to privacy.
But what if I was to say to you, Frederic,
by analysing the pitch and tonality and speed of your voice,
I could work out your propensity for impulsive buying.
And in fact, I could even tell you what you would purchase. I could even denote whether in your family history, there were
particular types of situations that you had been exposed to. And this is the type of analysis
that's occurring through machine learning. So when we start to attach people's propensity to pay and buy and divulge information based on physical
characteristics and traits, that's when it starts to get a little bit eerie for me. Certainly,
I wouldn't want to be the person that puts an end to bringing someone to justice, for example,
that has acted in a predatorial way. But I do want to say these large scale databases
that are being used, like we're hearing now 3 billion images are being web scraped by certain
small startups to denote and to help law enforcement. Well, they're claiming 99.9%
accuracy hits. That's all bogus. And we've got to start saying, well, who's in your suspect list?
Was it someone that was innocently walking down the road or that looks like somebody who was a
predator? And you're ending up on a suspect list, but the problem is that this suspect list is
invisible to you. So you might end up on a list. You may not be brought in for questioning, but you
have been placed on a database.
And these are the kinds of things that I want us to start getting a little bit more realistic with.
This business of 99.9% based on a machine learning algorithm is so inappropriate.
What we're asking there is, what kinds of degrees of freedom in the analysis have been used?
What kinds of algorithms were used?
And where did you get your
data set? These are the kinds of things I really want us to put
and, you know, bring to the fore.
And on that note, I'll add that one of the things we've spoken
about a few times on the podcast now is the
actual exciting and positive ways in which machine learning
can make unexpected connections
and unexpected inferences from data. We've talked about that, for example, in computer security.
We recently talked about that in pain management. The idea that if you feed lots of data points
into a machine learning model, it may find connections and predictors that you as a person would find illogical or just you
wouldn't look for.
And it can find those things.
And what I'm hearing right now is that that can also cut in the opposite way.
So again, for example, a machine learning algorithm may examine people's spending habits
and determine all sorts of potentially unsavory and undesirable things about people,
and put that into an
otherwise sound and appropriate application simply because it found these connections.
And again, we're not trying to insult people by saying you're biased or you're not considering the biases.
And I think that sometimes our examples may be a little bit obvious. I mean, if an application
doesn't recognize black faces, anyone can understand that that's bad. But sometimes
it's not as obvious. And sometimes it's actually downright surprising the connections that machine learning can make. Eyeball movements, comments shared on social media, customer product reviews that you
might make, entertainment content that you viewed. These are all the kinds of things that are being
amalgamated, what you purchased in terms of food at a store and
took a picture of. And I know we may not necessarily think of it as being very much,
you know, the type of food that you buy, the kind of restaurant you attend, the eyeball movements,
your level of sleep deprivation, your level of, you know, possibility of being overly anxious, maybe even depressed. I mean,
we just have to look back at the Cambridge Analytica scandal of 2018 to look at how some
5,000 data points were being used. And when you really go into the detail of micro-targeting, it does
become personal. What we learned from that scandal is that not only do algorithms know you better than you know yourself, but they are tying some very bizarre things together to denote certain behaviors and then infer certain things in terms of prediction.
So yes to multi-channel campaigns and if they can be there to improve our health, that's great, or our ability to be better people potentially.
But when we're using these things for discrimination and the customer is on the
back foot just before they even opened an e-commerce website, that's when I start to get
a little bit unnerved by what's going on. And so we need to move from profit and sales maximization
as the main goal of business towards sustainability of people in place.
And we've got to rejig this business model.
I'm not saying to companies they can't earn money,
but how can we get the best out of technology for the best reasons
rather than constantly looking at how we can rip off consumers and also learn so much
about them that we can get into their heads and their pockets and their potential health
spend for the rest of their lives. Well, thank you so much. Honestly,
I think that's a great summary of the entire conversation, and I appreciate that.
So now comes the time in our podcast where we shift gears a
little bit and we ask you three specific questions that you have not been prepared to answer. That's the
fun of it. So our guest has not been prepped for these. And we've got a question from Frederic,
a question from me and a question from a previous podcast guest. So Frederic, as always, I'm going to throw the microphone over to you. What's your question?
So, I think we talked a little bit about it, but when will AI be able to reliably
detect when a person is lying or not? Allegedly, that's already here. In 2015, I was asked about this by the SVC in India, by a high-up police officer. And that question is one of the greatest hits of our three
questions segment. But I think that you are perhaps uniquely positioned to answer this,
and that is, is it even possible to create an unbiased AI? Yes, it is. Do enough testing, have reliable data sets, ensure your model is built correctly,
and make sure you've really focused on design. And yes, you can. It depends on what context we're talking about
and the application focus. But again, I would say there are areas where we can gain a lot of benefits, as we said on the podcast,
particularly in the diagnosis of health conditions with the consent of the individual.
And now, as promised, we're going to bring in a question from a previous guest.
And for this one, I'm actually going to turn it over to my colleague here at Gestalt IT,
Tom Hollingsworth, who has another pertinent question on this subject for you.
Hi, I'm Tom Hollingsworth, the networking nerd of Gestalt IT and Tech Field Day.
And my question is, can AI ever recognize that it's biased and learn how to overcome it?
Yes, of course it can.
And as we're building new design justice algorithms, we can put in new validation. So the testing becomes automated and we can come out with degrees of confidence for certain data. If the results are skewed against particular gender identities, particular races or ethnicities, or those with particular
religious orientations, then we have to start again. Just create another algorithm, keep going
until there is some demonstrated fairness. So we often talk about black boxes. We've got to get
away from saying that ML is a black box and come out and say, well, we've done enough testing, and it's exhaustive enough, that this
is what we've come up with. We're pretty confident that it'll be deployed in the right way.
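As a sketch of what automated testing with "degrees of confidence" might look like (an assumption about the general approach, not a description of any specific auditor's method): bootstrapped confidence intervals on a per-group accuracy, so a headline accuracy claim has to hold up group by group. The test outcomes below are hypothetical.

```python
# A minimal sketch of automated per-group validation with confidence intervals.
# The per-group outcomes below are hypothetical; real runs would use a proper eval set.
import random

def bootstrap_ci(outcomes, n_boot=2000, alpha=0.05, seed=0):
    """Percentile confidence interval for the mean of 0/1 outcomes (e.g. correct matches)."""
    rng = random.Random(seed)
    means = []
    for _ in range(n_boot):
        sample = [rng.choice(outcomes) for _ in outcomes]
        means.append(sum(sample) / len(sample))
    means.sort()
    lo = means[int((alpha / 2) * n_boot)]
    hi = means[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi

# Hypothetical per-group match results from a biometric test set.
results = {
    "group_1": [1] * 95 + [0] * 5,    # 95% correct on this group
    "group_2": [1] * 70 + [0] * 30,   # 70% correct on this group
}

for group, outcomes in results.items():
    lo, hi = bootstrap_ci(outcomes)
    print(f"{group}: accuracy 95% CI = ({lo:.2f}, {hi:.2f})")
```

Reporting intervals per group, rather than one aggregate figure, is one way a "99.9% accurate" claim gets held to account.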
Well, thank you so much for these answers. And it's been a wonderful conversation.
I know that I could have spoken with you all day long. So where can people connect with you
and learn more about your studies? Maybe read the paper that we referenced?
Yes, the paper that we've been speaking about is in the Journal of Business Research and will be openly available for 50 days to come.
You can find me otherwise at katinamichael.com. And Frederic and Stephen, I really enjoyed speaking with you as well.
Well, thank you very much.
We're going to include a link to that in the show notes as well for our listeners.
Frederic, what's new with you?
Well, I'm still mostly helping enterprises with efficient data management and designing and deploying large-scale AI clusters.
As I mentioned in the last episode, I'm also working on a data management startup. I can be found on LinkedIn
and Twitter as Frederic V. Haren. And as for me, you can find me on Twitter at SFoskett and on
most social media networks as well, where I of course host this podcast as well as a number of
other things. And one of the things I'm excited about is that we are going to be hosting an AI Field Day event where we will have companies present their
machine learning, deep learning, data science, storage layers, all sorts of things related to
artificial intelligence. And that event is coming up soon. So go to techfieldday.com to learn more.
If you are somebody who's interested in this, please reach out to me,
and I would love to have you join us either as a delegate or as a speaker at that event.
Thank you for listening to the Utilizing AI podcast. If you enjoyed this show, please subscribe
in your favorite podcast platform, and while you're there, give us a rating or a review.
This podcast is brought to you by gestaltit.com, your home for
IT coverage from across the enterprise. But for show notes and more episodes, go to utilizing-ai.com
or you can find the podcast on Twitter at utilizing underscore AI.
Thanks for joining us and we'll see you next time.