Utilizing Tech - Season 7: AI Data Infrastructure Presented by Solidigm - 06x06: AI Is My Co Pilot with Chris Grundemann
Episode Date: March 25, 2024

Everyone uses AI today, whether they know it or not, and it's critical for users of this technology to understand its capabilities and limitations. This episode of Utilizing Tech features Chris Grundemann, a fellow podcast host and Tech Field Day delegate, talking about the many ways we use AI every day. Like Frederic Van Haren and Stephen Foskett, Chris uses AI to assist with content creation in many ways, from summarization and organization of data to image generation. Content creators are using AI tools like ChatGPT, Stable Diffusion, and more to process data, but we all agree that it's best to spend time and effort to refine the output, especially when it comes to tone and voice. We are also using AI-based tools for coding and structuring data, especially interactively. Although some have suggested that AI will replace content creators or coders, the technology is instead democratizing access and making it easier to use on a daily basis. It also makes computing more available to those who were previously locked out, expanding the impact of the technology we have spent decades creating.

Hosts:
Stephen Foskett, Organizer of Tech Field Day: https://www.linkedin.com/in/sfoskett/
Frederic Van Haren, CTO and Founder of HighFens, Inc.: https://www.linkedin.com/in/fredericvharen/

Guest:
Chris Grundemann, Managing Director at Grundemann Solutions: https://www.linkedin.com/in/cgrundemann/

Follow Gestalt IT and Utilizing Tech
Website: https://www.GestaltIT.com/
Utilizing Tech: https://www.UtilizingTech.com/
X/Twitter: https://www.twitter.com/GestaltIT
X/Twitter: https://www.twitter.com/UtilizingTech
LinkedIn: https://www.linkedin.com/company/Gestalt-IT

#UtilizingAI #Storage #AI #Datacenter #Hyperscale @UtilizingTech
Transcript
Everyone uses AI today, whether they know it or not,
and it's critical for users of this technology
to understand both the capability and the limitations.
This episode of Utilizing Tech features Chris Grundemann,
a fellow podcast host and Tech Field Day delegate,
talking about the many ways we use AI every day.
Welcome to Utilizing Tech,
the podcast about emerging technology from Tech Field Day, part of the Futurum Group.
This season of Utilizing Tech is returning to the topic of artificial intelligence, where we will explore practical applications and the impact of AI on technological innovation in enterprise IT.
I'm your host, Stephen Foskett, organizer of Tech Field Day, and joining me today as my co-host is Mr. Frederic Van Haren.
Welcome to the show. Thanks for having me again. So we've talked to a lot of companies on utilizing
AI, utilizing tech this season, most of which are developing products that support AI. Obviously,
this is a big market. Obviously, it's been a big focus. It was a big focus at AI
Field Day. It's everywhere. But there's more to it than that. I mean, AI is really affecting
each of us in our daily lives, right? Yeah, I guess we're jumping the fence with
this conversation: instead of talking to the people who produce the hardware, develop
the software, and bring it to the people, we're asking how AI is actually being used. In some cases, people don't really understand how AI is being
used. So it's probably a good idea to have a conversation and talk about some practical use
cases. Exactly. And now the truth is that almost everyone is using AI, whether they know it or not. And many people are using it explicitly
to help them do their jobs.
But in order to have a productive conversation on this,
we're gonna invite on one of our good friends,
a field day delegate,
somebody that we know from the community,
but also somebody who's really up on not only the technology
but the risks of this technology.
And that is Mr. Chris Grundemann.
Welcome to the show. Hi, thanks for having me. Excited to have this conversation.
So let's lay a foundation here, Chris. Tell the folks a little bit, who are you?
Why are you qualified to represent the entire world in this conversation of AI?
Absolutely. Yeah, no pressure there. So yeah, my name is Chris
Grundemann. I have been an independent analyst and consultant for the last few years. And for
the last 20 years or so, I've been working on, in and around the internet. So a technologist at
heart, a network engineer by trade, and these days more of an entrepreneur, but still in the details
of technical configurations, both on kind of routing and switching hardware, but also doing some coding and quite a bit of content creation and marketing.
Right. As an entrepreneur in some really early stage companies that are bootstrapped, I do kind
of wear all the hats. And so I think from that perspective, I have a little bit of a diverse
view of kind of how and where AI can fit in, where I've seen it be truly helpful today, and where I've seen just smoke
and mirrors so far.
And that's, I think, how Frederic and I find ourselves too.
Full disclosure, I use AI-based applications not just every day, but probably every hour
of every day.
And that's remarkable because this stuff didn't exist just a few years ago. And yet now, I mean, I'm finding myself reaching for it as a tool in my toolbox constantly.
And Frederic, I assume that you are too, right?
Yeah, exactly.
I mean, we use AI when we drive using applications like Waze.
We use speech recognition and automated attendants all the time.
But that's kind of from a consumer perspective,
what can we do to build and augment our own AI application?
I think that's a good conversation to have with Chris,
since Chris has been building and working with AI
for a long time.
I'm interested to hear a little bit about
why you went to AI and why
is AI helping you in your business? Yeah, absolutely. So I think, you know, some of the first
entries to using AI that I had personally is on the assistant on my phone. So I happen to be a
Google user and Google assistant has been kind of using more and more AI.
And that's been somewhat helpful, right?
I mean, I think for a long time, I really wanted to be able to like ask my phone something and have it do more than just execute a search for me.
And I don't know who all has seen this, but I think it's, what is it, Gemini, that Google just released,
that just came out to all the phones on a personal level. And I think it's also available paid at the Workspace level, which I haven't played with quite yet; it just got rolled out. And that looks
really exciting, just because, you know, to Stephen's point, it's that kind of daily
interaction with just normal stuff. It's almost the promise of computers from 30 years ago, right?
this idea that you'd have
kind of this like personal assistant that would help you, right? The idea of, you know, a PDA
may be coming more to the front now that we have kind of AI that actually can, you know,
execute a multi-step process. So now I haven't played with this too, too much, but I know it's
doing a lot better than just, here's what I found on the web kind of answers, which I was getting, you know, before.
So that's super cool, I think.
More directly from kind of business perspective,
there's really two ways that I'm using AI.
So one is on the content creation side.
So I don't know about, you know, anybody else listening,
but a lot of the content I put out
is in like forms of blog posts or LinkedIn posts,
or maybe other social media posts. And the image has always been a bit of a thorn in my side,
I guess I'll say, just because, you know, having a fairly relevant, but maybe, you know, a little
bit off kilter image has always seemed to be kind of, you know, my version of clickbait, I guess. Right. I don't want to do a headline that's totally baloney,
but I'll put an image in there that might, you know, pique some interest and bring folks in.
And that used to mean, you know, scrolling through stock images, deciding whether I
actually wanted to pony up money for a picture, or if I could find one somewhere for free or
something like that. And these days I just go directly to an AI image generation platform
and tell it exactly what I want and get something pretty close pretty quickly, which has been pretty
amazing. So that's one piece. And then the other piece is on like the more technical pieces of
coding and technology bits of actually getting support there as well.
Yeah, so there's no doubt that AI can help a lot. Do you feel there are enough user AI applications out there? Or do you feel that you still need to build your own applications?
Well, enough is a funny way of asking it. I think there are probably enough. Whether or not I still need to build my own is another question, though. You know, so far I've been going kind of quote-unquote to the source
for most of this, especially on the content creation side, right. Whether it's image generation
or ideas for blog posts or kind of hashing out different things. Um, you know, so far I haven't
found any kind of commercial or otherwise tools. I mean, most of these tools, right,
are using, you know, GPT-3 or GPT-4 on the backend
or some other model, right?
I know there's stuff from Amazon or from Google.
I think Meta has their own models.
But the consumer facing tools
that promise to ingest your content
and come back with, you know, 50 million blog ideas
and 32,000 Facebook posts.
I haven't seen those tools work very well yet.
I think there's, it may just be
because I work in technology and
talk about some very specific technical topics and there's just not domain expertise built into
these tools at the level that I want to be producing content. But I find that there's,
I have a better experience going directly to say chat GPT and typing in my query and kind of
iterating with it and working with it directly.
There's some limitations there, of course, because there's a character input limit. So there's only so much information you can give it. There's definitely kind of learning to work with
AI has been a process. But anyway, to your original question, I think there probably are
enough tools, but I don't know that there are the right tools quite yet, at least on the content creation side. Yeah. So it's fair to say that AI today then kind of enables to improve your daily
business, but it's not replacing full-blown components, right? There's still some work to be
done. Definitely. Yeah. And I think it's saving me time. It feels like it is. For example, one of the places I've found AI and text-based content generation really helpful is press releases. I can stream-of-consciousness dump a bunch of bullets and say, turn this into a press release.
And it does a really good job at taking the information I've given it and reformatting it in a way that's readable.
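This "reformat, don't invent" workflow boils down to prompt construction. A minimal sketch, with the model call itself omitted; the function and message shape are illustrative assumptions, not any particular vendor's API:

```python
# Sketch of the "bullets in, press release out" workflow described above.
# build_messages() only shows the shape of the prompt; the actual model
# call is left out. Names and roles here are illustrative assumptions.

def build_messages(bullets, target_format="press release"):
    """Assemble a chat-style prompt that asks the model to reformat,
    not invent, the supplied bullet points."""
    instructions = (
        f"Turn the bullet points below into a {target_format}. "
        "Use only the information I gave you. Do not make anything up. "
        "If a detail is missing, leave it out rather than inventing it."
    )
    bullet_text = "\n".join(f"- {b}" for b in bullets)
    return [
        {"role": "system", "content": "You are a careful technical editor."},
        {"role": "user", "content": f"{instructions}\n\n{bullet_text}"},
    ]

# Example: hypothetical announcement bullets.
messages = build_messages([
    "Acme ships WidgetOS 2.0",
    "Adds AI-assisted config validation",
    "GA on June 1",
])
```

The key design choice is that the anti-hallucination coaching lives in the instructions themselves, which matches the prompt-engineering habit described here.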
I try not to ask it to actually generate the content itself.
And a lot of my prompts are actually coaching it back to be like, no, no, no, don't make stuff up.
Use what I gave you and rewrite this. And so, you know, a lot of the prompt engineering,
so to speak, that I've done is really pushing it to use my content, but then format it
according to whatever I'm trying to put out. Again, a press release is one example;
I don't love writing press releases, so that's when I use it quite a bit.
But also blog posts, right? You can kind of drop some bullets in and have it write them.
I find there are some different hacks there too, like having it write section by section instead of trying to
write the whole blog post, things like that, where you can kind of coach around it and use it.
But it's definitely, I'm not worried about it replacing me as a content creator anytime soon.
I'll say that. Well, but what you just said though, is that you're using it to do image
creation. And I want to zoom in on that for a second. As a photographer, I am extremely sensitive to using,
yeah, basically AI-generated images. I noticed that a lot of people in our community are
absolutely doing that. In fact, I'd say more than half of the people in our community,
I see a lot of AI generated images as sort of
featured images for posts. I get it. You know, you don't want to go through stock photos. You
don't have a good featured image for the post. What are you going to use, right? I'm with you.
I'm worried about that a little bit because like you, I find myself using AI as like a word processor, essentially.
It's a tool.
Absolutely.
Will I use ChatGPT to summarize things as bullets?
Yeah, it's great at that.
Will I use it to flesh those things back out?
Like I find I make ChatGPT eat its own output a lot.
So I'll have like a big, like a series of things that I want it to
summarize. And I'll take the summaries of all those things. And then I'll feed those right back
into ChatGPT and say, give me a summary of these things that are themselves ChatGPT summaries.
And then I'll maybe have it, okay, now give me the bullets from these things. And it's a great way to
basically condense information. And statistically,
the way that it does that is actually pretty useful because literally it is statistically
condensing information. It's finding things that are mentioned a lot and it's kind of summarizing
those and bringing those to me and bringing those to my attention. But I'm never publishing
anything that comes directly out of ChatGPT or out of any of these
tools. And I find myself, you know, drastically editing things, especially, like you
mentioned, in terms of tone and voice. But when it comes to images, well, I'm not an artist and I'm
not really capable of modifying those images. And so my fear, my concern, is that
I would just use whatever it gives me and call it a day, and that it's not going to be satisfactory.
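The summarize-then-resummarize loop described a moment ago is essentially a reduction over layers of summaries. A sketch of that shape, where summarize() is a crude stand-in (first sentence only) for what would really be an LLM call; all names and the budget parameter are illustrative:

```python
# Sketch of the "feed summaries back in" condensation loop. summarize()
# is a stand-in for an LLM call: it just keeps the first sentence, so the
# hierarchical structure of the workflow stays visible.

def summarize(text):
    """Stand-in for an LLM summary: keep only the first sentence."""
    return text.split(". ")[0].rstrip(".") + "."

def condense(documents, max_chars=200):
    """Summarize each document, then summarize the concatenated
    summaries again until the result fits the character budget."""
    layer = [summarize(doc) for doc in documents]
    combined = " ".join(layer)
    while len(combined) > max_chars and len(layer) > 1:
        # Pair up summaries and summarize each pair: the model
        # "eating its own output", as described above.
        layer = [summarize(" ".join(layer[i:i + 2]))
                 for i in range(0, len(layer), 2)]
        combined = " ".join(layer)
    return combined
```

With a real model in place of summarize(), this is the same map-reduce pattern used to get around per-request input limits.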
So how do you answer the images concern?
Yeah. So a couple of things.
I mean, so one, I also I mean, I'm definitely more of a hobbyist photographer, but I've taken a lot of photos and a lot of places around the world and some that I think are pretty cool.
But what that means is I've got a huge drive full of photos. And so it's actually faster right now anyway, for me to go to
chat GPT or stable diffusion or mid journey or whatever, and kind of tell it what I want and get
an output than to find it in my own photos. So I think one, some better like AI based search for
the stock photo sites, but also for my own photo repository,
which, and honestly, Google photos does a pretty good job of this. Um, it depends on what you're
looking for and what kind of photos you have and specifically kind of how you're describing it.
But you know, if we can get better there, I think that'll help uptake of actual real
photographers' stuff, right? If I can go to, um, and I'm spacing on the names of the big,
um, kind of stock photo sites right now.
But if they use AI to do a better kind of indexing and searching, I think that would be great.
And we can use more of like actual artists work.
The other thing is I have found that there are some tools, I mean, like Canva, for example,
which makes it pretty easy for non-graphic designers to go in and do some graphic design.
And so that's kind of been my process is I'll go to one of the AI generation platforms,
get something pretty close to what I want.
Then I'll take that into a tool like Canva, which has its own AI features,
including like removing the background, changing the colors.
It'll actually take, you know, that little square image you get from Stable Diffusion
or Midjourney and turn it into, you know, 16 by 9 to fit your slides or whatever.
It'll, you know, use it with some levels of distortion and things.
So these tools are definitely, I mean, this is an area where
I'm creating graphical
images that I never would have imagined creating months ago,
definitely years ago, just based on these AI tools being
available for sure.
Yeah, and I'm also definitely not a not an artist, but my
daughter works a lot with Adobe Photoshop, where you kind of have
to know what you're doing. But Adobe is getting the message
there, they're coming out with an application called Adobe Firefly,
where instead of drawing everything, you provide a prompt.
You say, I want a blue sky, I want a sea,
and then you're kind of prompt sculpting, right?
And then the outcome is a picture,
and you can keep on changing your prompt until you're happy with the outcome.
Yeah, I like that a lot. That makes a lot of sense. And then, you know, kind of outside of
the content creation, I think one other way that I've found that's really boosting productivity,
not just for me, but for some of the teams I work with is, I guess, almost replacing search,
call it, or maybe we're replacing Stack Overflow to some degree, at least in coding examples. And so
what we've done, one of the companies I work with, called Full Control, has a team of developers
there. And one of the first tasks we did was build our own kind of AI bot, Elmer, which is E-L-M-E-R.
It works pretty well. Elmer lives inside of Slack and can be asked questions on all kinds of
things. And because we're actually running this application internally, we feel comfortable kind
of dropping code snippets and ideas and things we're working on into it. That's stuff that I
wouldn't do with, like, the public model itself, right? I wouldn't go to ChatGPT and start dropping in
code of some program that I'm writing for a company, right? For, you know, if it's work for
hire, but I can do that internally. And it's really pretty fantastic, actually.
It does some really good work. Again, it's not something where you can just say, hey,
code me an application, and then you take the code and go paste it and away you go.
But if you're getting some error messages that you really don't understand, instead of, you know,
leafing through pages and pages of docs, a lot of times you can say, hey, I wrote this code,
I'm getting this error, what's going on? And it'll point you in the right direction. It's like
working with a really experienced developer who just happens to be sitting right next to you and
is ready to answer your questions at all times. It's really, really helpful. And I think it's
boosted productivity a ton for myself; I'm a terrible coder, and it makes me mediocre.
And especially for some of the folks I work with that are really good coders and it just speeds up their process. Cause, um, you know,
I think a lot of the things that slow you down in those kinds of pursuits, whether it's configuring
switches and routers or, or other things or writing code itself is when you hit that block
and you're like, okay, well all this works. I don't quite know why this one thing isn't working.
It's almost always a stupid mistake, right? Some typo that you didn't quite catch or something, which are exactly the things that AI is really, really good at catching,
right? Oh, you missed a hyphen, you know, in this 32,000 lines of code. There it is. Go fix that,
which is really handy. Yeah, I was going to mention that actually, Chris. It's also incredibly handy
for doing things like, I need this list as JSON, or, you know, make my YAML correct or something like that,
because I hate YAML spaces, but it is actually incredibly great at fixing my YAML mistakes.
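The "I need this list as JSON" chore is worth making concrete: it is pure mechanical reshaping, which is exactly why assistants handle it well. A stdlib-only sketch (the host list is made-up example data):

```python
import json

# The "turn this list into JSON" chore: mechanical reshaping of a plain
# whitespace-separated list into structured records. The host names and
# addresses below are invented for illustration.

hosts = """web01 10.0.0.5
web02 10.0.0.6
db01  10.0.0.7"""

# Split each line into (name, ip) and build a list of records.
records = [
    {"name": name, "ip": ip}
    for name, ip in (line.split() for line in hosts.splitlines())
]

# Serialize with indentation for readability.
as_json = json.dumps(records, indent=2)
```

Whether a person, a script, or a model does this, the output is checkable: round-trip it through a parser and the structure either validates or it doesn't, which is also why AI-fixed YAML is easy to verify.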
And that helps, you know. And like you're saying, too, you are worried about
privacy.
You're worried about correctness of the output.
But in the industry, we all have a joke.
Well, at least before there were AI co-pilot tools, there was the old, I'm just copying and pasting things from Stack Overflow.
Is AI, I mean, people worry about hallucination.
People worry about code quality.
Is AI really any worse than copying and pasting random stuff that you found on Stack Overflow? I mean, that definitely depends. But I
think in general, I think, yeah, order of magnitude, we're on the same scale. And that's
another thing too, right? I think, again, one of the things that I've found really, really helpful
in kind of approaching AI over the last several years goes all the way back to when
Tim O'Reilly gave a talk at some conference I was at. And he talked about the fact that,
you know, the way he was looking at this was that basically we were all going to become managers,
managers of teams of bots, right? And that was the first time I had thought about this, like bot,
you know, AI bot as a coworker versus AI bot as a tool. And it's a subtle distinction,
but I think the nuance is really important.
And I think it's a good way to look at these things
is that if you're looking at the AI
as having some kind of agency, right?
And maybe being adaptive and amiable as well, right?
It kind of is more of a coworker.
And in that regard, you know,
what are the chances you hire a new employee
and they screw something up?
You know, I don't know that it's much better or worse than the AI hallucination.
And probably for the same reasons, too, right? That person
either is going to do something that they learned as a bad habit, or they're going to copy it
from Stack Overflow or from some other person they know. But yeah, mistakes get made anyway. I think
the key isn't to think that AI is going to be omniscient, but to treat it as
a flawed tool or coworker like everyone else. Yeah, I do think that the value of AI as far as
coding goes is to help in the areas where developers spend a lot of time, which is the debugging, right? And I think you mentioned that,
is that the debugging of the code
is a significant Achilles heel in the development.
And so I would even say that from a developer perspective,
the ability to debug and find errors
is actually more valuable than having it spit out code,
because when it spits out code, you still have to integrate it.
Just like with content, right?
You can't just copy and paste.
There's still work to be done.
And then when you integrate it and it doesn't work, where do you go from there?
I agree 100%.
And that's also one of the things that humans are fairly bad at in general: finding omissions, if that makes sense, right? So if you've
got a data set and there's a piece of data that sticks out, the human eye can catch that a
lot of times, right? If you've got a sea of zeros and there's a one in there, you'll see that. But if
you've got a sea of ones and one one is missing, it's really hard to see that as a human being. But
machines are great at just being
able to count and understand and see those. So I think that debugging process especially lends
itself well to AI for sure. So a lot of the folks in our community are kind of buzzing that recently
Jensen Huang said that we really should stop saying that every kid should learn to code,
because that's been sort of the go-to, that everybody should learn coding because it's a computer world,
and computers are eating the world, and they should all learn coding. He says no. He recently
argued that AI is going to replace programming languages with human languages. It's going to
kill coding. We don't need kids to learn coding. Instead, he says, kids should focus on domain knowledge like biology, chemistry, and finance.
I want to put that to both of you, especially after what Chris was just talking about here.
Is this the death of coding?
Is AI going to mean that we don't need to code and we just need to learn how to communicate
with AI and that the AI can code for us and we should focus on other things?
So it's a really interesting idea.
I definitely can see his vision, right?
And his long-term vision.
And I think ideally that is where we get, right?
Just like the iPhone and the touchscreen
and like the app ecosystem
really changed the way people work with computers.
And before that, you know, the mouse
and the icons on a screen on a graphical
interface changed things quite a bit. Now, that being said, right, I don't know what year it was
when the mouse and the graphical desktop came out, but we still have a lot of engineers working on
CLIs behind the scenes, doing coding, doing a lot of stuff. So I don't think we can totally
abandon the idea of coding. I don't personally think that that level of, like, deep technology expertise,
or the need for it, is ever going to go away. But I do think we can greatly empower the
rest of the population to use it much more broadly with these kind of human language interfaces,
for sure. Yeah, I think it's true for high-level applications.
You know, the type of applications that are common across the board.
You know, people dealing with finance, HR,
where the problem statement has been seen before
and where AI can do a great job kind of repeating, summarizing,
and specifically providing some
coding. I think if you go lower, closer to the hardware and to more specific, innovative
technologies, I think AI can help, but not create by itself. I don't think that AI can do that. I mean, we have to look at AI at the core. So AI learns from the past,
right? If there is no past for something, then AI is going to hallucinate and provide something
it thinks it has seen before. It might be a lot of work for a developer to have to undo a lot of
the code that was generated based on a hallucination,
as opposed to writing the code from scratch, building up history, and then being able to build
higher-level applications. So, in summary: for high-level applications, I do believe that's true.
For new, innovative stuff closer to the hardware, I think AI can help, but not as much as with high-level applications.
Yeah. And I think from those high-level perspectives, right, that
you're talking about, Frederic, right? These ideas like talking to your spreadsheet, and some of the
things that we're seeing now being embedded into apps, where, you know, instead of learning
Visual Basic and, like, hacking away inside of Excel to build pivot tables, what if I can just ask it
to show me the information I want it to show me? I think maybe those are the kind of tasks I'm thinking of when you talk about the high level,
Frederic. And I agree. I always think about various different kind of science fiction interpretations
of the future where they show kind of technology just disappearing away. And everyone's just using
these really high level technologies, but talking to it, right? Whether it's Star Trek or, you know,
Dune just recently came out with a second part. And there's a lot of technology that nobody really seems to care about or pay attention to.
But I tend to think somewhere there's an engineer who's making sure that stuff works,
even if it's a little more behind the scenes or a little bit smaller portion of the population or whatever.
Yeah, and I don't want to guess what, you know, the CEO of NVIDIA was thinking. But I will say that you look at the co-pilot concept,
if not the co-pilot product, which you could also look at.
I think the idea is that the large language model
is the user interface.
It is the way to communicate.
Because historically, hardware vendors especially,
but software vendors as well have had libraries and example code and APIs and programming examples and so on.
And coders absolutely have gone through and use those examples as, you know, their code. That's
what we're using to build off of. And AI can do that. And there's no reason that a large language
model can't act in that role.
Basically, you know, here's what I'm trying to do.
And it goes back into the library, finds the right code examples, structures it correctly, and integrates it properly with modular, you
know, programming concepts.
And it basically spits out exactly what the vendor wants you to use.
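That retrieval idea can be sketched with a toy matcher: score each snippet in a vendor's example library against the request and return the best hit. The library contents and keyword-overlap scoring below are made up for illustration; a real system would use embeddings and a language model to adapt the snippet rather than return it verbatim:

```python
# Toy sketch of "the LLM as interface to the vendor's example library":
# score each documented snippet against the request by keyword overlap
# and return the best match. Everything here is invented illustration.

EXAMPLES = {
    "configure a bgp neighbor":
        "router bgp 65000\n neighbor 10.0.0.1 remote-as 65001",
    "create a vlan interface":
        "interface vlan 100\n ip address 10.1.0.1/24",
}

def find_example(request):
    """Return the library snippet whose description best overlaps
    the words in the natural-language request."""
    words = set(request.lower().replace("?", "").split())

    def score(description):
        return len(words & set(description.split()))

    best = max(EXAMPLES, key=score)
    return EXAMPLES[best]

snippet = find_example("how do I configure a bgp neighbor?")
```

Because the answer is always drawn from vendor-supplied examples, nothing is generated out of nothing; that is the point being made here about grounding the model in the documented library.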
And so it's not hallucinating. It's not generating code out of nothing. It's generating code out of the examples that were provided by the hardware vendor. That actually makes a lot of sense. And to your point, Chris, in my head canon that's basically what those science fiction computers were doing. Maybe it's not a large language model in 1960-something.
But, you know, you're basically communicating with the computer and the computer goes and searches the library and finds the right stuff and surfaces that data.
And then further to your point, I think you're dead on about things like pivot tables because, oh, my gosh.
OK, set aside coders and substitute in just normal people using computers and doing
stuff. You know, hey, I've got a spreadsheet, I need to do something with it. People are really bad at that,
especially people who don't focus on it. You know, if you've ever worked with
salespeople who are using spreadsheets to organize data and stuff, they're just bad at that. And
I'm not criticizing them. They
shouldn't have to be good at that. That's not coding. That's, like, just bad computer use. And I got to
believe that an AI-based tool is going to really help those folks and that they shouldn't be
learning Visual Basic and pivot tables. They should be getting value out of their stuff. Sorry,
that's my soapbox, right?
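The spreadsheet point is worth grounding: a request like "total revenue by region" is, under the hood, just a group-by aggregation, which is what a natural-language interface would compile it down to. A stdlib sketch with made-up data:

```python
from collections import defaultdict

# What "just ask the spreadsheet" compiles down to: the pivot-table
# aggregation itself. The sales rows below are invented toy data.

rows = [
    {"region": "East", "rep": "Ann", "revenue": 120},
    {"region": "West", "rep": "Bob", "revenue": 80},
    {"region": "East", "rep": "Cid", "revenue": 40},
]

def pivot_sum(rows, group_key, value_key):
    """Sum value_key for each distinct value of group_key,
    i.e. a one-dimensional pivot table."""
    totals = defaultdict(int)
    for row in rows:
        totals[row[group_key]] += row[value_key]
    return dict(totals)

# "Show me total revenue by region."
by_region = pivot_sum(rows, "region", "revenue")
```

The mechanics are trivial; the value of a language-model interface is translating the plain-English question into the group_key and value_key, so the salesperson never has to learn pivot tables or Visual Basic.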
I love it.
And I think you're right.
I think that that large language model, right?
That speech or text based interface
is probably the right one for most folks.
I think we have a long history in technology
of making the user interface more and more intuitive,
not always on purpose,
but definitely like that seems to be
how technology moves forward.
I think this is another example of that, of kind of democratizing access to technology
and the power behind it.
The reason that we were telling everyone to go learn to code for so long was because that
was the way to tap into the actual power of technology.
This may be another avenue that's easier for most folks, because obviously, if the language
that they grow up speaking
will work with a computer, I mean, that's fantastically unleashing the power of people to
kind of dive into the nuts and bolts of technology. Yeah, I think the AI coding generation
will allow people who never learned to program to be creative and bring out applications that are applicable to many users
without having access to a lot of developers or even learning a language, right? I mean,
today, most of the programming languages are what we call high-level languages. What that means is
it's a language that's close to our language, but far away from the hardware, right?
So imagine that code can be generated that is much closer to the hardware.
So all of the craziness that you have to put into a high-level language goes away. You know, again, the generative AI and language-model-based AI can be that user interface and can really empower a lot of people.
The biggest concern I have is that we may be reversing the tide of Internet democratization of information in general.
What I mean is, if you look back at kind of, you know, 20th century media, it was very top down, right?
There were three television channels, three radio channels, whatever it was. And
you had to be Tom Brokaw to be able to have a message on TV. Then we moved to kind of this
more, more internet mediated information flow where people could jump on Twitter or Facebook or
LinkedIn or what have you, and kind of really democratize the sharing of information, right? We started trusting the wisdom
of the crowd. We started having these network economies, and that really opened things up.
You know, now there's also misinformation; there are a lot of downsides too. But one of
the greatest things was anybody could have a voice and jump in and contribute to the
conversation. Anyone could pull their camera out and record something going on, whether it was some individual piece of harassment or a war on any scale.
And then my concern, the one I'm getting to, is that with AI we may be going back to that
top-down, funneled model. If we're trusting the AI systems, and there are only three or four
companies big enough to maintain these AI systems,
then are we running the risk of basically going back to that ABC, CBS, NBC model,
where there are really only two or three power brokers on the planet, or at least in the United States, through whom we get our information?
That's concerning. I don't know what the solution is there.
But I think it's worth thinking about the fact that we may actually be consolidating power in these tech giants
that are able to run these massive data centers and run these models.
Yeah, we assume that when we ask ChatGPT a question, the answer is accurate, right?
Totally agree.
And I think the problem with ChatGPT and others is that you might get the same answer from multiple large language models, and that might be taken
as the truth as opposed to having multiple opinions.
I think it's still important for people to look at options and figure out for themselves
what the answer is.
I mean, it's obvious when we ask ChatGPT what is one plus one and it says three.
It's less obvious when it's a topic or vertical you're not familiar with and you take the answer for granted.
Right. That's a big issue. So I totally agree with you.
We'll see how it goes, but hopefully there will be enough data that we will have multiple options when we ask questions.
Yeah, what you point out is a classic science fiction narrative, right?
That we want to improve the lives of people.
So we put all of the decisions in the hands of the computers,
and then the computers just make us all slaves.
You know, I think Harlan Ellison should probably sue me for even saying that.
But, you know, it really is a classic narrative and it's something that we should watch out for.
On the other hand, I'm actually pretty excited about some of the smaller AI models and open-source AI models that are getting out there, which are allowing people to do things on a broader scale and integrate them into their own workflows, their own devices, and so on. Because I think that's the antithesis of that scenario. If we've all got different models all
running and all trained on different data, then maybe it does help us and maybe it is a tool.
But ultimately, I guess, Chris, as you said at the very beginning, I think we need to look at
this as a tool. It's a tool in our toolkit and we need to, like any tool, we can't just go waving our chainsaw in the air and saying,
I got a chainsaw. We have to think about how we're using it and where we're using it and what
it means to us. So I guess, Chris, I'm going to let you give us the final word on the subject.
AI is my co-pilot. What's your philosophy? It is for sure.
I think the big thing is, while Jensen may be right that the average person doesn't need to
learn how to code anymore, we still need to really focus on those critical thinking skills,
because that's what's going to be needed to parse out all these things we've been talking
about on the risk side.
Absolutely.
And I'm so glad to have you.
That's why I wanted to talk to you today. Thank you so much for joining us. I know that people
are going to want to continue talking to you. And as a content creator, I know you've got some
great things going, including my favorite podcast in the world. So give us a pitch. Where can we connect
with you and continue the conversation? Yeah, as Stephen mentioned, I am a host on the Impostor
Syndrome Network with the great Zoe Rose. So that comes out every week on Tuesday. You can find that at impostorsyndrome.network. For anything else, LinkedIn is where I'm most active these days for social media, or you can find me at chrisgrundemann.com online. And that website has links to the Impostor Syndrome Network as well as everything else I'm working on.
How about you, Frederic?
What's the latest from you?
Yeah, you can find me as Frederic Van Haren on LinkedIn. I'm helping customers understand AI,
providing them with roadmaps,
and making sure that they can introduce AI
into their workflows.
And as for me, you'll find me here on Utilizing Tech,
also on our Tuesday podcast,
which is now the Tech Field Day podcast,
as well as our Wednesday news rundown.
So check out those.
And of course, on LinkedIn and so on,
where I'm writing a whole lot of stuff
about a whole lot of products.
So thanks for listening to Utilizing AI, part of the Utilizing Tech podcast series.
You can find this podcast in your favorite podcast applications as well as on YouTube.
If you enjoyed this discussion, please do leave us a rating or a nice review.
And either way, send us some feedback.
We'd love to hear from you.
This podcast was brought to you by Tech Field Day, home to IT experts from across the technology
spectrum, now part of the Futurum Group.
For show notes and more episodes, head over to our dedicated website, utilizingtech.com,
or find us on X/Twitter and Mastodon at Utilizing Tech.
Thanks for listening, and we will see you next week.