Utilizing Tech - Season 7: AI Data Infrastructure Presented by Solidigm - 2x26: The Global Imbalance of AI Investment with Sofia Trejo
Episode Date: June 29, 2021

Look at a list of the top companies in the world, and most are concentrated in the United States, China, and Europe, and this causes an imbalance of investment in AI. With most companies building AI infrastructure and applications located in Silicon Valley and similar areas, how will the rest of the world catch up? Sofia Trejo joins Chris Grundemann and Stephen Foskett to discuss the implications of this imbalance, which causes an AI divide. Companies like Facebook, Google, and Amazon increasingly centralize global data through their internet access initiatives, and all are also deeply involved in developing cloud and AI applications. This poses issues for developing countries, which are increasingly dependent on these companies and susceptible to disinformation and misinformation campaigns. Most discussions of bias focus on a first-world context and do not take into account the challenges faced by developing countries, and the same is true of AI development. We must stop thinking that the solution is technological and focus instead on education and digital literacy before AI gets out of control.

Three Questions: Is it possible to create a truly unbiased AI? Can you think of any fields that have not yet been touched by AI? Can you think of an application for ML that has not yet been rolled out but will make a major impact in the future?

Guests and Hosts: Sofia Trejo, PhD in Mathematics, Specialist in AI Ethics. Connect with Sofia on LinkedIn. Chris Grundemann, Gigaom Analyst and Managing Director at Grundemann Technology Solutions. Connect with Chris at ChrisGrundemann.com or on Twitter at @ChrisGrundemann. Stephen Foskett, Publisher of Gestalt IT and Organizer of Tech Field Day. Find Stephen's writing at GestaltIT.com and on Twitter at @SFoskett.

Date: 6/29/2021 Tags: @SFoskett, @ChrisGrundemann
Transcript
Welcome to Utilizing AI, the podcast about enterprise applications for machine learning,
deep learning, and other artificial intelligence topics. Each episode brings experts in enterprise
infrastructure together to discuss applications of AI in today's data center. Today, we're
discussing the global imbalance of AI. First, let's meet our guest, Sofia Trejo.
Hi everyone, my name is Sofia.
My background is in theoretical mathematics,
but I've been working on ethics of AI
for about two years now,
mainly looking at it from the perspective of Mexico.
That's it.
And I'm your co-host today, Chris Grundemann. I'm a consultant,
coach, mentor, and content creator. You can learn more at chrisgrundemann.com. And I'm Stephen
Foskett, organizer of Tech Field Day and publisher of Gestalt IT. You can find me on Twitter at
S Foskett and every week here on Utilizing AI. Sofia and I were talking earlier about different topics in AI and one of the things
that came up was something we haven't talked about yet here on the podcast, which is the global
implications of AI. Essentially, we've talked quite a lot about the ethics and bias of using
AI and the inherent problems of AI application development being focused in certain
countries and not in other countries. But one of the things we really didn't focus on is the
implication of that problem, the fact that we're developing applications in some countries and
we're rolling them out in some countries and yet not so much in other countries. And that's going
to cause a real problem going forward for the global population.
So, Sofia, now that I've sort of kicked that off,
I wonder if you can do a better job than me
of talking about the problem of imbalance of AI.
Well, I think there are many issues. Some of them relate to research and academia, some to tech companies, and some to countries. So I thought perhaps we could start by looking at the tech companies first and then see where the conversation goes. I was recently looking at a list of the top 100 companies, and in the top 10, I believe at least eight are tech companies, including Amazon, Microsoft, Alphabet, Facebook, and Tesla. Seven of those top 10 are based in the US, and only one, an energy company, is in Saudi Arabia. Another two are in China and are tech companies. So it actually turns out that 30% of the income of the top 100 companies comes from the tech sector. So I'm thinking, where is that
money going? It turns out that it's mostly centered in
US and some of it is going to China. So I think this already has big implications for the way that AI in particular is being developed in the world. Yeah. And I think that everyone can see that on a daily basis, even some of the people in the United States who may not be aware of the challenges that it causes. We can all understand that if you ask someone,
you know, who's a big computing provider, who's a big application or services provider,
you know, which kind of companies use AI, most of the names that are going to come
up immediately are going to be either in San Jose or Shanghai.
I mean, everything in the world seems to be focused in certain areas. And yet, the population
of the earth is much, much different from that. So absolutely, I think that there's this natural
flow from where tech companies are located, to where development happens, to where investment
happens, to where applications are focused.
And I think that that's the real problem.
Is a San Jose company going to be developing an AI application
that's applicable to a country in South America
or some other part of the world?
Yeah, and I think that one of the big issues we have currently is that not only do they own, for example, the software and create the platforms, but they are also owners of the infrastructure of the internet: the submarine cables and all the other internet infrastructure. For example, Google alone has invested in 14 submarine cables in recent years. And so this means that they are not only owners of the platforms, but of the other resources
that we need in order to produce AI.
Yeah, that's an interesting take.
And it definitely has changed fairly rapidly, I think, right?
So one of the organizations that I work with quite a bit
is the North American Network Operators Group,
which kind of came out of the educational realm
where the internet was kind of first born
and then developed into this thing called Joint Techs
and then has kind of built up over
there into more and more commercial uses as the internet became more and more commercial.
And there's also been a big shift within that community of, you know, this used to be where
basically folks who were running ISPs came to talk together about how to run big networks,
right? And so the big folks were there, right? Like whether it's AT&T or Comcast or somebody
like that, but also a lot of small and regional players.
And that has definitely shifted in the past, you know, decade for sure. I think it's accelerated even faster in the last few years where the content providers themselves, these folks who
run these massive hyperscale data centers and produce applications that are on most everybody's
phones are also now building the infrastructure between them. Right. And there's a bit of a double-edged sword there,
which is kind of interesting, right?
Which is where, you know,
when Facebook is building out internet access
into, in many cases, more rural and less developed areas,
both of the US and the rest of the world,
there's something to be said for that,
that they're actually improving internet access.
But on the other hand, you know, the reason they're improving internet access is to provide an on-ramp into their kind of walled-garden closed platform.
And to your point, the vertical integration there really creates this kind of silo of money and potentially intelligence.
A lot of skilled workers go to those places, whether it's Google or Facebook or Apple or somebody else, because of their reputation for hiring good workers. And so that kind of has compound network effects and really, really builds
up over time. And what that makes me think of is, you know, some of the things that I've worked with
a lot in the internet, which is this just idea of digital divide, right? And the fact that there are
more opportunities in countries and places where, you know, there
is internet access.
And it seems like artificial intelligence access on two different fronts is maybe similar
to that, right?
And what I mean is for internet access, it's not just about, you know, being able to look
at cat videos.
There's a lot of things that happen and come along with the internet, right?
Which is educational opportunities, communication opportunities,
protest and, you know, community involvement opportunities.
And I think the same can be said maybe even to a greater degree in some cases with AI.
So anyway, I see these things as really, really interconnected.
This kind of digital divide, split between the haves and the have-nots around the world.
And it's maybe being compounded again with AI.
When you look at these lists of companies that are amassing these massive fortunes, is that the same way you look at it, Sofia?
Is this something that each layer of technology is perhaps consolidating further and further into fewer and fewer hands, or is that overblowing it?
No, I do think that's happening. Because, for example, going back to not only the infrastructure, but how dependent we are now on cloud computing. They not only own the cables, the physical network, but also cloud computing: data centers, computing power. Even platforms like Zoom or Netflix rely on these bigger companies to operate. So I think they're diversifying as companies to acquire more and more resources. For example, Amazon is not only retail, but it has Amazon Web Services, which also offers Amazon Mechanical Turk, which is used to build a lot of AI. And, for example, many academics use Amazon Mechanical Turk to produce AI systems.
So I think, yes, this power is getting concentrated in fewer hands.
And if there is any competition, it seems like these big tech companies buy them. For example, Facebook purchased Instagram and WhatsApp, so now all of these, which could have been competitors, have been added to the same company. And I think that monopoly is in itself really harmful, for example, for developing tech in other places in the world. Who is ever going to be able to compete, at least in messaging, with WhatsApp, Facebook, or Instagram? Those areas have already been taken.
What you were saying about how Facebook has also been trying to connect the disconnected people in the world, I think that's already a very interesting topic. It turns out
that here the government had talked with Mark Zuckerberg to try to make Facebook an internet
provider. And that's already something that, I will say, is a bit of a concern, taking into consideration, I don't know if you know about Free Basics? So Free Basics was this plan
implemented by Facebook to connect disconnected people around the world. So what they did is they
gave you an app through which you could use the internet for free, but you couldn't use the whole internet. You could only use certain resources of the internet, and one of them was Facebook. So the number of users increased; in countries like Myanmar, half of the population uses Facebook, and the same with the Philippines. They started introducing it in some countries, and it had really, I think, questionable implications in terms of, for example, the Rohingya genocide in Myanmar, or the propaganda in favor of a dictator in the Philippines. So I think
it's been interesting to see how it develops because some of these countries,
they don't have, for example, privacy laws, nor data privacy laws, or they don't have other law enforcement that could make these systems work better.
There is no transparency, there is no accountability.
So in terms of, for example, human rights
or like free speech or other things, it's complicated to see how these
things are interlocked. I think a lot of people like to, in the United States at least, a lot of
people like to complain about China's Belt and Road Initiative and how China is building ports
and railroads and so on in all sorts of developing countries. But yet, at the same time, we don't consider the
sort of intellectual imperialism of Facebook, for example, as you said. And is this a topic? I mean,
Americans, I don't think, consider the implication of that. I think Americans look at this as
Facebook is helping people. And maybe that's
not true. Maybe people listening to this podcast have different opinions. But I think that as a
whole, I think Americans might look at something like that and say, oh, what's the problem? Facebook
is helping people. Is this a big topic in Mexico, for example? Are people really, really worried about Facebook's help? I don't think we're there yet. I
feel like, in terms of interest in digital policy, people are not so aware of privacy and what it means; we're a bit behind on that conversation, which is very concerning. I think the case that is now a really good example of what happens when there is no privacy, and of what happens with data, is Cambridge Analytica, which used Facebook profile information to influence some 200 elections, including the election of Donald Trump and Brexit, amongst elections in other countries, which is where
they started piloting their research and their techniques, which include Kenya, I think Nigeria,
and other countries in Africa, the Czech Republic, I think Italy, so a lot of countries.
And so people should think about how important it is, for example, to have privacy laws, or for private information to be used in ways that we consider it should be used. I don't know if they're ethical or not; I just feel people should know what their information is being used for, especially because it's been used to construct AI systems which either identify you as a criminal, or are "gaydars" that use your images to try to determine your sexual orientation, or face recognition software, which has been used at borders and all sorts of things. And most of those applications are built with data which is available on the internet, which is everyone's data, basically.
Yeah, I think the implications of what humanity
and maybe a lot of this started in the United States,
but what we've done to ourselves here
is pretty interesting, right?
And so you look at, going back to your earlier point,
Facebook and Google are both doing
all kinds of different
things to increase internet access in a lot of places. And obviously with some self-serving
interests, if not a lot of those, they're both involved in providing cloud-based software for
wireless internet service providers and trying to help promote that. And so some of those things
are good. But to your point, the bargain we made there was, you know, giving up, for what has now been a decade or more, our personal data, which is
now being mechanized and then perhaps even weaponized using AI back against us, which is a
really interesting phenomenon. And, you know, to some of Stephen's points in the intro there, it's not just the power and money; college graduates and a lot of things kind of get magnetized to the United States just because of the infrastructure that's been built here.
But then we've kind of collected all this data here and are now running that through AI and then using the results around the world in a lot of ways.
And obviously, that's a little bit of a reductionist view of things, but it's a lot of
what's happened. And I think that has a lot of implications as well. So I don't know. I mean,
I know, Sofia, from your background, you've traveled a little bit. I think you've been to England and Brazil and worked with companies in Canada. Are you seeing those effects of AI being built on kind of a curated, maybe San Jose-centric user base having kind of real-world effects?
Well, I have traveled a lot, and something that I feel very personal about is what's been going on with face recognition in the Xinjiang region in China, where it's been used to target the Uyghur minority. I've been to China many times; it's a country that I love and find fascinating. I did the Silk Road, all across northern China, and I started particularly near the border with Kazakhstan, where the Uyghur people live. Already there were restrictions on movement and control: there are blockades, there are checkpoints. There is already a lot of control of that population, and I haven't been back since, but I can only imagine how much harder it is for Uyghur people now to move freely within a country where face recognition systems are being used particularly to profile them. So I feel like that's already a problem, something really, really bad happening in a place that
I've personally been. And something that I am also curious to learn more about is that we don't really have that much, I'll say, reporting on AI going wrong in Latin America or in other places.
So there is actually not much knowledge
of which kinds of systems are being implemented
and for what purposes.
Like I know for example, that the Mexican government
uses face recognition software from Russia
but I don't know how good it is on Mexican faces
like on the different indigenous communities or not. To my knowledge, there hasn't been an external review of the software, and we don't really know how it works. So considering what's been going on already with face recognition in the US, which, I don't know, may have led to a moratorium, it's very concerning that these discussions haven't really been happening here, and yet we already have face recognition systems implemented by the government.
Yeah, and that certainly is a problem
because we've seen already,
and we've talked about that in previous episodes,
that face recognition systems in particular
are extremely susceptible
to the data that they've been fed. And we've talked about challenges right here in the United
States of face recognition systems, having problems with people with darker complexions,
people who aren't represented in their input data set. And I would absolutely worry about a face recognition system developed in a place like Russia or China or America
being able to function effectively in a place where people generally look different
than the people they might have been exposed to.
Yeah, we saw that actually, Stephen, in some demos actually at AI Field Day
recently, where the algorithm that was making things more masculine made people
more bald and there were some other weird artifacts with hair and things like that
that were obviously, you know, based on the training data it had.
Yeah, like it had problems with curly hair, like Black hair.
And that is something that represents
a massive part of the world population,
but clearly it hadn't been part of the training data
for this application.
And I imagine that that would be the case
in many countries around the world
where there are features, facial features,
hair, complexion that are different from places where these things might have been trained.
And that's a real problem for these applications, right?
I will say as well that this is only a reflection of what's been going on in academia.
So again, like where is AI research coming from?
And most of it is coming from either the US, the European Union, or China. Those are the three main regions, and they produce the vast majority of AI publications.
So other places, like for example, when people talk about bias or talk about most of the ethical
issues, they're always from the perspective of the US or the European Union, to my knowledge. Like I haven't seen work coming from different perspectives.
And this is a huge issue as well, because a lot of the problems with AI have to do with the local context. AI not working properly under different circumstances is being seen over and over again, from AI in medicine to face recognition systems. I think all AI has to be context-based. But we haven't been developing enough tools or enough understanding of how AI works, or could work better, in different contexts, even in different regulatory frameworks,
because most of AI ethics is always working
under the assumption that either you're in the US
or you're in Europe.
And we do not have the same regulations.
We do not have the same government.
Like we do not operate in the same ways.
So even to create strategies,
to think of how to solve these problems,
we cannot keep on thinking everyone is Europe, because we're not. So we really have to look into our own regulation, into our own context, to even try to solve them. But I haven't seen the same happening in Latin America, which I hope is a problem that gets solved soon.
But again, most AI research centers are concentrated in the global north.
So we don't have many AI researchers.
In Mexico, there is no public AI research center. People are doing research in maths institutes or physics institutes or other institutes, but we don't even have a dedicated research center. So to even think of doing AI ethics is a bit far removed. We're not even at the point of having, I don't know,
consolidated AI institutions here, for example. Yeah, that obviously poses a problem in developing
these things. I do wonder though, and it makes me think, again, I kind of come from this like
internet-based frame of mind, right? One of the things we saw happen with the big initial conversations around the digital divide, the fact that there were a couple of billion people connected and several billion people who weren't, and how that was really creating a gap, was that, fortunately, many of the places that were developing internet connectivity actually leapfrogged the US and Europe in some ways and went to mobile-only or mobile-first communications. So India and a lot of places in Africa and many other places really went all in on mobile and almost jumped over our landline-centric views, or infrastructure anyway. And I wonder if there's a similar opportunity with AI.
Is there a chance for the, you know, let's call it the rest of the world, right? Everybody who's
not the EU and US and China, maybe, to look at this with fresh eyes, take what's been developed
and make something better.
I think we could argue like, where are these people going to get, for example, the computing
power to create AI systems at research level?
There are very few institutions which have that power.
Actually, a lot of it gets outsourced to one of these big computing clouds. So again, are we really autonomous? Are we really creating technology that is ours?
Or are we just relying on someone else's tools? Of course, you can use a lot of the tools that are given to make AI. But we don't have the computing power; we don't have the infrastructure to store data,
to process data.
So we're still outsourcing a lot of these things.
So for me, the technological dependence is still there, not only for individuals, but for tech companies, and I'll even go further and say for countries. And I think that is a problem that I don't know how we're going to fix. Policies and strategies have to be made. Mexico doesn't currently have a digital strategy or a cybersecurity strategy under this government, but because it's now a constitutional right to provide internet, they're aiming to connect everyone,
but you connect everyone without having any reassurance of,
for example, security or what happens with fake news.
People need data literacy as well.
It's not the same as giving people devices and saying like,
this is going to fix the world because that's not how it works.
Now you could create other problems.
So I think, for example, data literacy is something that I believe all countries have to work on.
And I think we've seen that with fake news
and what is happening with COVID.
So we've seen how important it is not only to have the infrastructure or the resources, but knowledge as well, even basic knowledge, for example being able to tell which content is real or not. And I don't think those things are related particularly to technology; I feel like technology is only showing us that there are bigger gaps in the development of many, many things. And I think literacy is one of them, technical literacy.
And that not only relates to fake news, but, for example, to the future of work. We're all going to create these future jobs, which people will need computers for, and they'll need some particular skills for. But here in Mexico, I don't think we're actually trying to push for people to gain those new skills, so we're really left behind in terms of education. Never mind producing AI; how are these people going to work in this future where everything is going to be automated? I think those are the bigger questions. Absolutely. And I think that
to many in our audience, it's going to kind of sound a little familiar because we, of course,
have massive digital literacy problems here in the United States as well. And I know that people
in China and Russia and Europe also have challenges with digital literacy. But then you kind of think about that and you step outside your bubble and you say,
oh, well, if it's bad in Cleveland, then it's got to be really a big challenge in places
where a lot of people aren't even connected or have never experienced the internet and
never experienced fake news.
That's got to be even worse in those situations.
And I think that this is really the root of a lot of these problems
is because there's this mindset of,
we're gonna develop this thing in the Silicon Valley bubble
or in the China bubble or the, you mentioned Russia,
and we're gonna develop this thing that works well
for what we're developing it for.
And then we're gonna release it on the world
and it's gonna be used wherever and however it can be used. And that's a big problem. But I wonder, Sofia, as we come to the end of our discussion, do you have any suggestions? Is there anything we can do to help? Is there any prospect for the future? So for me,
like one of the things
that I think is most important
and I think this podcast
makes a good example of that
is just making people
aware of technology.
What is at stake?
What are actually the implications, not only of AI, but, for example, now with the pandemic, of the digital divide between who can work remotely and who can't. These digital divides are only getting bigger. And the implications are not only for education and work; there's no access to a lot of things.
For example, here in Mexico, you had to go online to register for the vaccine. So if you don't have access to the internet, then you don't have access to a vaccine.
So we really have to work. I feel like we have to stop thinking that technology itself will be the solution, because that's what a lot of people keep on thinking, and realize that there are a lot of things that we have to work on first, like education and digital literacy, to make this work for everybody. So I would say public understanding of technology is, for me, one of the things that we could do to really help make things better.
Yeah, that resonates a lot with me. The idea of this digital literacy, I read a really,
really short little pocketbook a long time ago that talked about this and kind of laid it out
in kind of stages of human development and talked about, you know, first there was kind of orators
who kind of told stories and most people just knew how to listen, but not necessarily tell
the stories. Right. And then we started writing things down and people learned how to read,
but they didn't really know how to write.
And now we've gotten to a point where, you know, a lot of the world is literate in their
language, both written and spoken.
But we've again shifted the landscape a little bit.
And now, you know, understanding a little bit about programming and how software works
and how defaults work, right?
The first time you log into Facebook and your privacy settings are already set for you,
if you don't know to go look for that and change those,
that kind of literacy of understanding
a little bit of how programming works,
it has now become this thing that's the rare piece
that only the potentially privileged elite know
and are actually controlling vast swaths of the population, whether intentionally or not, through that knowledge.
So the idea of just general, wide-scale,
populist understanding digital literacy
and really understanding how this stuff works
and how they can affect it makes a lot of sense to me.
Yeah, it is.
And I agree with you, not to give ourselves too much credit,
but I think it's important for those of us
who are here in the United States,
who are involved in AI and enterprise technology to stop for a moment and ask ourselves,
what is the implication of what we're doing for the rest of the world? Because the rest of the
world is going to be impacted by what we do and the choices that we make. And this, of course, is even more important when you consider
just how ubiquitous AI is in the future applications in the cloud. And so from my
little bit of editorialization, I would say, and I would challenge our listeners, just think about
these factors, think about these things, and think about what you can do to help address
this problem that you may not have even considered that you have and that we have.
So now that we've kind of wrapped up the discussion, I'd love to move on to the fun part of the show where we talk through our three questions. So here we go. Don't worry. I have to
warn our audience. She has not been prepared at all for our questions, but we will go from here.
So Sofia, here we go. Number one, and this one I think is going to be an easy one for you to answer.
Is it possible to create a truly unbiased AI?
No, I don't think so.
Considering bias is like an embedded thing in human perception itself, like cognitive biases are everywhere.
So the way that we structure data is biased. The way that we create labels is
biased. The way that we produce knowledge is biased. And so, if we want to emulate human structures, then no, I don't think so. All right. Next one. Can you think of any fields, and maybe this
is appropriate for you because you come from a different perspective in a different world than many of the folks in our audience.
Can you think of any fields of work or study or life, any fields that have not yet been touched by artificial intelligence at all?
Human rights. No.
I'm kidding. No, no, I think it's just like it's everywhere.
And as well, I will say it applies to so many things: to medicine, to biology, to social sciences. Because it's applied in so many different areas, a lot of knowledge that relates to those areas now has to be related to what's happening with AI.
So I would say no, I think everything now is related to AI, it seems.
Yep. All right. Well, then one more thing, and maybe this is an opportunity for ideas. Can you think of an application for machine learning
that has not yet been rolled out,
that nobody's done yet,
but you think would have a big impact?
Is there some maybe positive application
for machine learning?
Yeah, I tried to think about this one day. Of course, there are some good things, related to, I don't know, protecting forests and nature reserves with drones. I feel like that's nice, and that has already been done. But something that I would really like, though God knows you have to get the data, so let's imagine you have everything, is to design an AI system which will help communities be self-sustainable. You tell it how many people, which resources, where you are, and then it tells you the first thing that you need to be self-sustainable. I don't know, it might be how to store water. And then you start thinking about how to build your own infrastructure, like power, so maybe you can start using solar panels. So I would like to go more for small communities being self-sustainable, and I think AI perhaps could really help improve that, based on everyone's data, if we had that. Wow. Yeah, that's something that we haven't had suggested
before. And I love that idea. That's great. Maybe somebody listening will take that up,
a way to actually benefit the world. Well, thank you so much, Sofia, for joining us today.
Where can people connect with you and follow your thoughts on enterprise AI and, of course,
other topics?
Well, you can find me on LinkedIn.
And my name is Sofia Trejo.
I think I must say PhD in maths and AI ethics specialist or something, in case there are more people with my name.
Probably there are more people with my name.
We'll link to it in the show notes, too.
Great.
And Chris, what have you been working on lately?
Yeah, I also love having conversations on LinkedIn. You can also see everything that I'm putting out, both written and spoken content and other case studies, at chrisgrundemann.com, or follow me on Twitter at ChrisGrundemann. And of course, I do send all my stuff over to LinkedIn as well. And you can
find me on Twitter at S Foskett. And of course, you can find me here every week on the Utilizing
AI podcast on Tuesdays. And on Wednesdays, if you go to gestaltit.com, you'll see our weekly
rundown of the tech news of the week. And that's another great way to stay in touch with me.
So thanks everyone for listening to the Utilizing AI podcast. If you enjoyed this discussion,
please share it with your friends and let them know about us and have them subscribe in the
future. Also, ratings and reviews on iTunes really do help our visibility.
This podcast is brought to you by gestaltit.com, your home for IT coverage from across the
enterprise. For show notes and more episodes, go to utilizing-ai.com or connect with us on
Twitter at utilizing underscore AI. Thanks for listening and we'll see you next week.