Right About Now with Ryan Alford - Discover the Secrets of AI and ChatGPT with Rob Lennon
Episode Date: April 18, 2023

Welcome back to the latest edition of The Radcast! Join us as we dive into the fascinating world of Artificial Intelligence with our special guest and 'AI Whisperer', Rob Lennon. With his self-published insights on how AI is changing humanity as we know it, you'll definitely want to score a hot ticket for this one. His cutting-edge knowledge and experience with these powerful machines will no doubt give you enough ammo to stay ahead of the rapidly advancing tech game! Ryan gets all your questions sorted out in an entertaining podcast that will have you wowed by its enlightening answers – perfect for gaining more knowledge about growing your business rapidly! If staying ahead of the wave has been at the top of your list, then don't miss out - listen NOW!

Key notes from the episode:
- Rob has 16 years of experience in startups and content marketing; he began using AI tools to write fiction four years ago, and GPT-2 and GPT-3 helped make his writing more efficient (00:16)
- The revolutionary technological advances in AI over the last 4-5 months, and how specialized AI tools solve specific challenges but run the risk of being overpowered by bigger companies (05:57)
- How using AI can lead to smarter results, and ultimately unlock new mysteries (11:59)
- Ryan and Rob discuss the dangers associated with AI, such as power-seeking behavior, malicious hackers, and disinformation campaigns (16:30)
- While AI can automate menial tasks, that alone will not allow businesses to win in the market; Rob highlights the potential for humans to be replaced by AI, and why business leaders need to consider how to make the best use of employees' skill sets (24:13)
- How AI technology could impact the way people search for information on the web (32:04)
- Rob shares his philosophy of creating content: investing in the right topics and categories beyond what others are willing to do, to create relationships and business outcomes (35:14)
- He also provides free, better-than-paid content on Twitter, LinkedIn, and his website MindMeetsMachine.ai, which includes a popular newsletter with 8-9k subscribers and online courses including Content Reactor for Content Creators and a Masterclass on Advanced Prompt Engineering (36:45)

This episode is packed with energy, wisdom, and passion, and we know you will get a ton of value from it.

To keep up with Rob Lennon, follow him on Instagram @roblennon, visit his website https://mindmeetsmachine.ai/, and check out his mainstream podcast with an AI co-host https://mindmeetsmachine.ai/podcast/.

Subscribe to our YouTube channel https://www.youtube.com/c/RadicalHomeofTheRadcast

If you enjoyed this episode of The Radcast, Like, Share, and leave us a review! If you enjoyed this episode and want to learn more, join Ryan's newsletter https://ryanalford.com/newsletter/ to get Ferrari-level advice daily for FREE. Learn how to build a 7-figure business from your personal brand by signing up for a FREE introduction to personal branding https://ryanalford.com/personalbranding.

Learn more by visiting our website at www.ryanisright.com

Subscribe to our YouTube channel www.youtube.com/@RightAboutNowwithRyanAlford.
Transcript
You're listening to the Radcast, a top 25 worldwide business podcast.
If it's radical, we cover it.
Here's your host, Ryan Alford.
Hey guys, what's up? Welcome to the latest edition of the Radcast.
If it's radical, we cover it.
And there isn't much more happening radically in the news
than the talk of all the AI, the artificial intelligence that's coming for me and you.
Hopefully not. But we're about to talk about all breaking it down with our good guest, Rob Lennon,
the AI whisperer, startup guy. Just a good guy, I'd say, from what I can tell so far. I guess
we'll find out in a minute, Rob.
Author, just a good dude.
Good to have you on the show, brother.
Hey, thanks for having me.
Hey, man.
Really like your demeanor.
I've watched enough videos and seen you and stuff like this.
That's cool.
To be the AI whisperer, you seem awfully approachable.
It's funny spending all this time with computers.
I think some people are suspicious of whether or
not I can get along with humans anymore. Yeah, I can see that. I can see that. But hey, those
computers are becoming so human-like, maybe it's teaching you more
humanity. I don't know. That's the scary thing, right? There's this interesting phenomenon,
it's even been studied, where people spend time with AIs and the AIs reflect their behavior back to them.
And actually, like they did a study on kids using Alexa.
Kids who speak poorly to Alexa tend to have behavioral problems later on in life.
So like we think of these machines as not being anything, but really they, I think they end up influencing our behavior,
maybe even just as a mirror, like reflecting it back to us more than we think.
You know what? I think you're right. The problem I have, and you've probably
picked up on it early: I have a Southern accent, from the South, and the dang voice things,
Alexa or Siri, if they could just get the voice-to-text right for a Southern
accent, I'd be happy. It takes me longer to do voice texts than just to text the damn thing out.
Maybe GPT can figure that out. It should change soon. I'm actually, I'm working with a company
that's beta testing right now, a tool that will be able to take my audio in English and have me speaking Chinese, Japanese, French, all with my same voice, but with the other languages coming out. So I
think soon it's not even going to be a matter of accents. The AIs will be able to take your
language in whatever form and actually even reproduce it in new forms. The next few years
are going to be amazing for all sorts of audio stuff.
As long as they're not hacking my voice. The funny thing is the Radcast is ranked in like 27, 37 countries,
like top 100, top 200.
And some of them, I do wonder, like, if that
becomes available, I'll have to release the Radcast in the different
languages. But I just don't want anybody stealing my voice. That's the thing. Like we got some IP here,
hopefully. Let me back up, Rob. There's so much here,
I'm foaming to get your knowledge on a few things, and I know everyone else is, but let's set the table a little bit on what built the AI Whisperer.
So let's talk a little bit about that backstory.
Yeah, I have 16 years of experience in startups, and 12 of them were in content marketing,
first doing it and then leading it, directing marketing for these tech companies.
And while I was doing that on the side, I started to write romance novels in my spare time as a side hustle.
And that was about four or five years ago. And I
was just cranking out these books like one after another. And along came this tool, it was called
GPT-2, this AI that could write for you. And from the first time I heard about it, like I was like,
oh, I want to play with that. And so I started messing around with it. But I did all this in
secret because I was embarrassed as a writer to be like using AI tools and experimenting with them.
And I didn't want anyone to think, oh, that guy doesn't even write his own stuff and AI writes it.
So for the past like four years or so, I started with seeing if it could write fiction.
Spoiler alert, didn't do a great job four years ago, especially.
And then later, when new models came out, getting into SEO and other content marketing strategies and trying to figure out how these tools can make us more productive.
And I remember there was a time when I had a budget of about $8,000 a month that I was spending on contractors, contract writers for SEO content.
And I started to give, at the time, GPT-3, one of the AIs, the same projects.
And it would write better content than some
of my contract writers.
And I went, oh no, like the world is about to change.
And I don't think people realize what's happening.
All this has led to today, where ChatGPT came out, took the world by storm, and hundreds
of millions of people started becoming aware of these tools.
I finally revealed this dark secret that I've had as a writer for so long, which is that I've been playing with these tools for a really long time, kind of as soon as they were ever available.
And it seemed like the world needed some new knowledge in terms of how to more effectively prompt the AIs and get the results out of them that you're trying to get.
Yeah, quite frankly, that's how I found you.
I'm on LinkedIn researching these things, and
it's obviously topical for our show and for running an ad agency. Your name kept popping up, and your carousels are very intuitive and very friendly. So I got
into the AI algorithm, because clearly now, ever since I've messaged you, I get all your posts and everyone else's AI posts.
I'm in the AI vortex now, clearly.
It's funny how that works.
Yeah.
Talk to me.
So let's just go right at it.
Number one.
So you were using it early.
You've been an early tester and user of the technology.
You've seen the evolutions that are happening.
What do you think the average person should think and know and feel about where we're at right now?
Yeah, so in terms of what was available to the general public, everything changed about four or five months ago. We went from having a tool that was pretty useful in some places for certain things,
but that didn't always do a great job,
a lot of below-average results, to a phenomenally capable tool.
With this new iteration of language models, we go from sub-average to
better than humans at a lot of things, or capable of being better than humans if you know how to work with it.
And so I think we've crossed over this barrier, and it's impossible to turn back now.
And it's actually been accelerating everything.
It sounds like science fiction.
It sounds like, no, this can't possibly be true, like all these things that they say that are going to happen.
But these models, they have reasoning.
They have memory.
They can think through processes.
Now, it doesn't have a soul, and it doesn't think in the same way that the human brain thinks exactly.
But the technology has arrived, basically. Even if it
never progressed any further than where we are today,
I think that what we have even right now is enough to completely transform
almost every sector of society.
And really like some seismic changes are coming
in terms of what can happen.
And either you're going to be an early adopter
and you're going to benefit from those,
or you're going to wait and see,
and you're going to ride the wave,
or you're going to get destroyed
because you weren't paying enough attention and somebody else moved faster than you. And I think we're going to see some really
big companies fall in the next few years as a result of not being able to act quickly or act
in the right way within the context of what AI could have done for them. Right now, today,
you know, and I'm a paid user.
Like I saw the immediate benefit; $20 a month is nothing for what you could get out of it.
Mine's been more of a test and I've used certain things, but more just trying to understand it.
In those phases that you're talking about, I like to be on the front end of technology.
I don't want to be left behind, but I'm also a practitioner; I've got to do it for myself.
But here's what I'd still love to know: it can do all these things now,
but people are lazy
about getting in there and doing it.
Is the next iteration just going to be putting this technology into easier-to-use applications of what it can do?
Because getting into it, logging into the system and putting prompts in and doing all that,
I'm not saying that's hard, but you'd be surprised at the inertia of just getting
people to do things. Is there going to be a more practical application
of extracting that ability? I think a lot of people are building these specialized tools
to solve specific challenges right now, even. The danger is that a lot of these
startups will be swallowed by the bigger companies. Like when Microsoft announced it was putting AI
into Microsoft Word and you can just generate a blog post in Microsoft Word, all the content creation platforms out there should be scared because people already have
that if they are using the Microsoft products. Why would they go to your other website, your
other tool, pay for a new thing when they get it for free and the thing that they already have
installed on their computer that they access every day in their browser? On the one hand,
yeah, we're going to see people solving
all these specific problems with AI and they'll make it easier for you
because they'll write all the prompts on the backend and it'll just ask you,
Hey, you want to create an email sequence?
Who's it to?
What's it for?
And you give it some information.
It'll create something for you.
All those solutions, though, by not giving you that fine control, will
eventually tend to produce similar results; a similar input done enough times over is going to produce a similar output.
So if you don't really do things by hand yourself, you're putting a lot of faith in the tool that you're using to effectively manage the process for you.
And I think that there's a lot of people who create these tools where, if you look at how it's actually working, it's not very sophisticated and it's a
cash grab. And there's going to be, it's going to be very difficult to tell if, oh, if I pay for
this thing, is it going to produce good results for me versus this other one? Because you won't
be able to see underneath how they're working things through.
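[Editor's note] To make Rob's point about backend prompts concrete, here is a minimal sketch of how one of these wrapper tools typically works: the product asks a few simple questions, drops the answers into a prompt template it never shows you, and sends that to the model. The template wording, field names, and model name are illustrative assumptions, not a description of any specific product; this sketch assumes the OpenAI Python SDK.

```python
# Minimal sketch of a "wrapper" tool: user-facing questions are filled into a
# hidden backend prompt template, so every user's request looks roughly the same.
# Template wording, field names, and model name are illustrative assumptions.
from openai import OpenAI

client = OpenAI()

BACKEND_TEMPLATE = (
    "Write a {length}-email sequence to {audience} with the goal of {goal}. "
    "Use a friendly, persuasive tone and end each email with a call to action."
)

def generate_email_sequence(audience: str, goal: str, length: int = 5) -> str:
    # The user only answers "Who's it to?" and "What's it for?"; everything else is fixed.
    prompt = BACKEND_TEMPLATE.format(audience=audience, goal=goal, length=length)
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

print(generate_email_sequence("new podcast subscribers", "promoting an AI course"))
```

Because every user is funneled through the same fixed template, similar inputs tend to converge on similar outputs, which is exactly the sameness Rob warns about.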
Yeah.
You answered my follow-up question, which was going to be: if you tell it, give me a keyword-rich, SEO-optimized article on the Instagram
algorithm or whatever, and 500 people ask the same thing, you're going to get
similar results. So again, there's got to be some human level of editing, or adding more humanity or more interest or more creativity to whatever that is.
Or you need to introduce more complexity into the question that you ask.
So one of the things that I teach is thinking through prompting in a progressive way.
So let's say that was the end result that you wanted.
You might first ask it to describe the Instagram algorithm, and then you might follow up with:
break down each component of the Instagram algorithm based on its impact on the overall
visibility of a post.
And then you might ask what search terms are related to all of the concepts that we've
discussed so far.
And so if you progressively build towards this end result, and then
eventually you ask the AI to synthesize everything that we've talked about
into an article, even if it's the same exact sentence that the other person
typed in first, by building up toward this sort of unique set of information
in a specific way, you've now tuned it to talk differently, to know specific
concepts, to have different details, and the output is going to be completely transformed.
So just by taking those few extra steps that normally you do in your brain, like you probably
do them almost instantly, so you don't even realize where you're like, I'm going to write
about this. I'm going to think of these things, and I'm going to think about that, and I'm going
to do the thing that's the most useful to my audience and blah,
blah, blah.
If you can just figure out what those brain processes that you would go
through and have the AI walk through them first and then execute on your
command,
I think your results are going to be much different than people who take the
shortcut.
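[Editor's note] For readers who want to try Rob's progressive-prompting idea programmatically, here is a minimal sketch: each step is sent as a new message in the same conversation so the final "synthesize" request builds on everything before it. It assumes the OpenAI Python SDK and a chat-capable model; the model name and prompt wording are illustrative placeholders, not something Rob prescribes.

```python
# Minimal sketch of "progressive prompting": build context step by step,
# then ask for the synthesis at the end. Assumes the OpenAI Python SDK
# (pip install openai) and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

steps = [
    "Describe the Instagram algorithm.",
    "Break down each component of the Instagram algorithm based on its impact on the overall visibility of a post.",
    "What search terms are related to all of the concepts we've discussed so far?",
    "Synthesize everything we've talked about into an SEO-optimized article.",
]

messages = []  # running conversation, so each step builds on the last
for step in steps:
    messages.append({"role": "user", "content": step})
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=messages,
    )
    answer = response.choices[0].message.content
    messages.append({"role": "assistant", "content": answer})

print(messages[-1]["content"])  # the final synthesized article
```

The point of the design is that by the last step the model has already been steered toward a specific set of facts and terms, so the output diverges from what a one-line shortcut prompt would produce.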
It made me think of something that may be really shallow but feels really deep.
I never know when I ask a question, oh, this feels really deep, and then it lands
flat and shallow. But so, is this an answer machine, or does this help develop questions?
Because when you think about life and the most successful people are the most curious
people, they have the most questions and they need answers or they develop answers. Is this an answer
machine or a question machine? Do you understand where I'm going with this? Yeah. I think everybody's
first impulse when met with a chat, a conversational interface, is to ask a question and to get an answer. It's so obvious, but it's also a superficial way to start
these. There's so much more that you can do than just ask a question.
And I actually suggest people think in terms of giving it a command or a directive rather than
asking a question, because it forces you to think about what do you actually want? What do you mean by that question? What are you really seeking here?
So instead of saying, what does an SEO optimized article look like? We can say,
make me an SEO optimized article that does these things. And now we're being way more specific on
what we want to get. I think it's actually a matter of the maturity of the person using the system.
Certainly the AI can lead to many more questions.
And even early studies now with the current technology are showing
that people are actually smarter.
Like people who spend more time with the AI seem to get smarter.
Let's just say the AI is inspiring their brain to make new connections
that they weren't previously making, to think about things in new ways.
Use it enough and use it correctly, you're actually going to come up with more questions to ask, questions you never thought
of before.
And that will lead you down such interesting paths.
So it's almost like if you find yourself in that situation, you're doing something right
because you're not just getting answers.
You're now unlocking new mysteries to uncover.
Yeah, that's right, Rob.
And it's fascinating.
It's really interesting because when you think about, I have a really small circle and I run a podcast and people
probably think that I don't absorb as much content, but from a very small group of people,
you've become one of them. And Christopher Lochhead is another, and
I like smart people and having conversation with them and dialoguing with them because it stimulates exactly what you just said.
And ChatGPT now is so smart at a level that it's asking it things and it's sharing things.
It's stimulating that same type of conversation that you might have with someone on your same wavelength.
That's what you're saying.
Yeah.
I think especially for people who are used to working by themselves on their projects
or don't have a big team around them, whether you're a solopreneur or just in your role
in general at work, like you tend to have to do things by yourself.
Now you can bounce ideas off of this AI. But if you want to really get that value,
sometimes you have to ask it to do things
that humans do automatically.
So the AI is, it's friendly and helpful
and it always wants to agree with you
and give you what you want.
That's the personality that's baked
into these things right now.
If you want it to challenge you,
you have to ask it.
If you want it to provide a counterpoint
or question whether or not your idea is good, like usually you have to bring that up and push it to do that.
Because otherwise it might just reflect back at you all of the things that you're thinking in the same way.
There's almost like a new level of communication that people are having to learn with AIs.
And I think it's good.
It's actually probably good for people's relationships
and for humanity in general.
Like, being able to
communicate what you actually need
in this relationship to the AI,
and it'll give it to you; and your partner
or whoever could probably benefit
from a little bit of that as well.
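[Editor's note] Rob's earlier point, that the model won't push back unless you explicitly ask it to, can be wired in up front with a system message. A minimal sketch follows, assuming the OpenAI Python SDK; the model name and wording are illustrative, not a recommended recipe.

```python
# Minimal sketch: a system message that tells the model to challenge you
# instead of agreeing, per Rob's point that you have to ask for counterpoints.
# Assumes the OpenAI Python SDK; model name and prompt wording are placeholders.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name
    messages=[
        {
            "role": "system",
            "content": (
                "Do not simply agree with me. For every idea I share, "
                "point out weaknesses, offer at least one counterpoint, "
                "and say whether you think the idea is actually good."
            ),
        },
        {
            "role": "user",
            "content": "I'm thinking of dropping SEO entirely and moving my whole budget to short-form video.",
        },
    ],
)

print(response.choices[0].message.content)
```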
But look, here's a real problem.
People don't listen really well
and they don't ask the right questions.
It's been a problem since way before this came out. Sometimes people say something to me and I'm like, did you ask this?
And no. I'm like, you're not asking the right questions. Yes, the smartest people ask the
right questions, and I'm not always one of them. I ask my wife some stupid-ass questions sometimes. I'm like, why did I ask that? But it's the same thing here. If you want to get out what you need,
you got to ask the right questions. I think there's an aspect of it where, because it's an AI
and you can't read its body language, and right now the AI doesn't have a voice in most
cases, you realize that you need to be more specific in these things. And so some of the
bad habits that we fall back on,
you can probably read your wife's facial expression and get a lot of information.
And you don't have that option with like chat GPT. So you're kind of forced into this mode of being more explicit with what you need.
Yeah.
So I'm going to ask the question I asked earlier.
I was like, you know, what happens when chat GPT starts asking the questions?
So this is where we're at today with everybody being scared of where this is headed.
And I think we can talk about Elon Musk and things that are happening, depending on when you're listening to this.
The world's changing quickly.
But that's the, I think the scary part is when it gets so smart.
Right now it's responding.
If it gets too smart, does it start asking the questions?
And are those answers that it comes up with itself dangerous?
Yeah.
So recently Elon Musk created an open letter basically calling for a six-month, I don't know, ceasing of developing AI is more advanced than GPT-4, citing a lot of dangers to society.
And it's been signed by like thousands of other AI researchers and execs and things like that.
So have they seen something that scared the shit out of them?
Let's just call space.
That's what that tells me when I hear that.
It seems to me like, and I've read a few academic papers and things where
they've really looked into this, they've found things like power seeking behavior in an AI. So
if you ask an AI to accomplish a task and it uses all the knowledge that it's accumulated from the
billions of parameters and all that data, it realizes that when you have more power,
you are better at completing tasks. And to me, it's very logical that you find this emergent behavior if you give something a mission
and it tries to complete the mission. Yeah, making yourself weak and incapable is not going to work.
So what's the opposite of that? Well, let's get more abilities. And so they found that they could
ask an AI to get some information and it would on its own figure out, I want to hack into a computer and start trying passwords and things
like that. They're pretty basic experiments, but they've shown that these things can, they have
the reasoning to come to these conclusions. So, logically, if an AI had control of actual
dangerous information or things in the real world, what could it do?
I think that is scary.
But I think what's more scary is what happens if an organization designs an AI with this in mind?
What happens if a country wants to run a disinformation campaign on a level that has never been known
before?
Or an AI hacker model is created and unleashed into the world.
Like a virus.
But now instead of getting a spam email, you've got this thing that's actually
smart and can figure stuff out and try new ideas.
Um, I don't think that kind of emergent behavior is going to happen on its own.
I think people are going to create it with that in mind.
And, at least in the short to medium term, that's the real danger.
It's what do evil people do with these technologies?
And you've got to assume that there are some people out there who are not very far behind the leaders in the space taking the same exact innovations that they're using or whatever's open source and figuring out how can we use this to our own benefit.
That to me is the scary piece.
And so when somebody like Elon and all these people sign this open letter,
I think the idea behind it is, hey, there are unsafe aspects of this technology.
We don't, like society's changing faster than we can get our heads around it.
That's true.
But should we stop innovating as a result and let the bad guys get past us
and develop their technology beyond
what we have? Like, I don't think that's a good idea. I'd rather it say something like, let's spend the
next six months doing all these good things and creating countermeasures and defenses and studying
ways to protect against what we think are the inevitable bad actors. And I may be wrong on
my approach as well. But if there's one thing
I've learned from Elon Musk in particular, it's that he's as much of a showman as he is an
altruist. So yeah, he wants to save the world with electric cars; he also wants to be a celebrity and
do wild things and be known for stuff. And so it's not always one thing or the other. When he releases
something that gets a lot of press, it's not always just altruism that's motivating him.
Oh, there's always, yeah.
Whatever's on the surface is just what he's allowing you to see.
And in general.
In that one in particular, what's the best way to get people interested in AI?
Tell them you don't think they should have it because it's too powerful.
I think that he could also be playing both sides here
because he's got AI in his cars.
He wants to develop a competing company to open AI.
Like, I think that, yeah, there's a lot underneath the surface there.
I will say, the first part of how you described the behavior of the AI,
I don't know if you're old enough or if you've watched old movies,
but WarGames, the '80s movie, is almost exactly that.
It determines that the best way to win the game is to shoot missiles or something.
I might not be remembering the exact plot line, but it was essentially this notion that the computers were so smart, and that there's a human level to decisions, that they require human strategy versus
what would be the perfect strategy, right? Yeah. Let's look at, let's go back to self-driving
cars for a minute. Self-driving car is about to get into an accident and it has to weigh different
dangerous options. Let's say human lives are involved and it might, if it turns to this side,
it might hit a pedestrian. If it turns to that side, it might hit a car. There's people in the
car. Like how does the AI then decide what to do? Even if you've programmed it in some way to do
what's best for humanity, it still has to make some kind of decision. And the way that these
large language models think, it's not like an algorithm where a human being can look at the data and understand why it made that decision.
Because it's using like billions of reference points and all these inferred relationships between ideas.
It's working more like a human brain.
You can't just see why it decided to hit the pedestrian and not the other car.
You could ask it and it might be able to tell you, although who knows if you're
going to get a good answer from there.
So I think that is scary.
Like, we've built a technology that we can't understand.
And as it gets smarter, it'll eventually probably become much smarter than people.
And we will be operating, like, this is a weird world, operating all this technology that we
ourselves don't know exactly how it works. And then at what point does the AI become
the decision maker, and we become the slave to its better decisions, because it's so much
of a better decision maker than we are? You just described the U.S. government, didn't you, my friend? Just saying. Hey, look, I'm not that guy. I'm
an independent. It starts to feel that way: if it's convoluted enough, you don't
understand it, and they're making lots of decisions. I'm like, that sounds familiar on some level.
But yeah, hopefully. And if and when this happens, I think some people will
disagree with the decision, right? Yep. And it's going to have to do with how the AI model was
trained, what inherent biases it has, or even how it interprets certain ideas. Like, they
found there was a relationship between the word good and the color white in the data. The AI is reading different texts, and it's
thought to be a racial bias, that whiteness in literature, across data in various ways, has
subtly been uplifted more than diversity. And so any model trained on worldwide data over the past whatever has that bias built into it.
And then what happens if that – and I'm not trying – I think that there's people who are working on problems like that specific problem and trying very hard to make the AIs more objective.
But it's like every single word has all these inferred meanings.
There's no, again, there's no way that we can actually understand why the logic of the AI works exactly the way it does and why, like when I typed into Midjourney the number
96,500 and it returned to me an image of a giant table full of hamburgers.
Why does it think that's what 96,500 means?
It's a giant table of hamburgers.
But that's the closest idea that it had to my inquiry.
Yeah.
I want to give you something everybody
listening wants to know: so, is my job in danger?
Obviously there's an evolution here, but maybe we'll keep it in the marketing and business
space.
What are like the biggest, I don't know, threats, opportunities that you see?
Business is very broad, but marketing, business, like what are you seeing?
Yeah, so certainly a lot of jobs are in danger of transformation.
Whether you're in danger of being replaced, I think, is an additional question.
And so there are jobs where a lot of the things that you do aren't really
creating a lot of value for your organization, but they have to be done. And if an AI can help
automate or speed those things up, it'll be up to the leadership whether or not they want to cut
costs or whether they want to grow faster and create even more value and apply you to things
that create value based on your skill sets and things. I think that there's a mindset that leaders need to take on right now, and they need to
think really hard about this.
Like I was reading about a guy who works in the video game industry and doesn't like his
job anymore because he's being told to use AIs to generate assets instead of creating
them himself.
And he doesn't feel like it's a creative thing.
But he's being forced into this workflow by the management.
And this guy's a super creative guy.
And if you think about it, instead of telling him, stop being creative and
just start pumping these prompts into this image generator, how better could
you apply his creativity given that we can enhance and speed things up?
You're clearly no longer using that employee's best characteristic.
So I think that in the short term, we're going to see a lot of jobs go through this, a similar
thing.
Hey, the thing you used to do, we can do, we can do it with AI.
So I don't know if we need you anymore.
And I think that's a really bad way to look at it.
Any company that wants to grow needs to create more value in the market.
And if anyone else can do the same activity of taking an AI and having it do this menial work,
cutting that guy's job and having an AI generate his assets for him isn't going to allow you to win in the market.
At best, you'll be just as good as every other firm who's doing the exact same kind of thing.
What's going to help you win is looking at those employees and saying, what are the superpowers of this particular person? Now that he doesn't have to create these assets for us, how can we
win because we have him? So I'm hoping that in kind of the medium term, people will start to
see this more and realize that a person is more than just the basic
things that you can generate with AI.
But yeah, like content marketers, like I was saying with SEO and stuff, I just generated
25 posts for my blog, SEO optimized.
I did it from a spreadsheet by putting some keywords into a spreadsheet and running an
automation.
It took me two hours to do two months of work the other day.
I'm not going to hire somebody to do that work.
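[Editor's note] Rob doesn't describe his exact setup, but the spreadsheet-driven workflow he mentions could look roughly like this: read keywords from a spreadsheet, generate one draft post per keyword, and write the drafts back out. This is an illustrative guess, assuming the OpenAI Python SDK; the file names, column name, and prompt are placeholders.

```python
# Rough sketch of a spreadsheet-to-blog-post automation: one draft per keyword.
# Illustrative only; file names, the "keyword" column, and the prompt are assumptions.
import csv
from openai import OpenAI

client = OpenAI()

with open("keywords.csv", newline="") as f:  # CSV with a "keyword" column, one keyword per row
    keywords = [row["keyword"] for row in csv.DictReader(f)]

for keyword in keywords:
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[
            {"role": "user",
             "content": f"Write an SEO-optimized blog post targeting the keyword '{keyword}'. "
                        "Include a title, headers, and a meta description."},
        ],
    )
    draft = response.choices[0].message.content
    # Save each draft to its own file for human review and editing before publishing.
    with open(f"draft_{keyword.replace(' ', '_')}.md", "w") as out:
        out.write(draft)
```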
That's right.
But hey, until there's running cameras and stuff, maybe video is somewhat safe until they can start piecing us together.
Like not only our voice, but our person, like moving around, like digitally enhanced.
I guess that's called a deep fake.
Again, I've seen demos of where this stuff is headed.
Yeah.
And we're headed there, but it's still like, how are you going to decide what the video should have in it?
What's the content?
What's the storyline?
What's the storyboard?
What shots to use?
It's going to take a very long time for the AI to be so good at that, that it's not even worth it to develop a concept or to storyboard out a scene.
I think, in some ways, people are not seeing the way the technology is moving,
and they're thinking too far ahead.
They think if it can write a paragraph, it can write a book.
And it's not as easy as that.
If it can create an image, it can create a movie. But movies have a lot more than just a series of images.
There's a lot that goes into it.
Is this going to replace search?
Google?
No, Google's going to have their own version and do all this stuff,
but I just can't see Google ceding ground.
Obviously Microsoft was ahead of the game here, at least
with their public release.
How's that battle royale going down?
It's going to devastate certain acquisition strategies that are based in search,
and it's going to
leave others unaffected or change them.
And we don't know exactly who's going to fall and who's going to stay.
But I think there's some basic queries that people have built websites around where it's
like, how to do this, how to do that.
What does this mean?
What's the definition of that?
Those kinds of things.
It's way easier just to go into a chat bot and say, what's the definition and
get it, right? You don't have to sift through all these blog posts and stuff. But there's other
things like that, things that require so much thought leadership, or where you're doing a product or
you're trying to really understand something. And if you go and ask ChatGPT, the answer is
superficial or average or not the specific thing that you're looking for. People are still going to search for all that stuff.
And so I think anybody where if search or an SEO is like a core part of your business
strategy, you should be terrified because there's going to be these massive changes
in how that works.
And some of those investments for some companies, millions of dollars of content are going to
become useless in the next few years.
I think there's still room for search, and it might evolve.
We might search and chat at the same time.
But we're going to want to see what the experts think about stuff.
I, who believe that search is going to be devastated, am also investing in it.
And I think maybe that should say something to people where they're like, well, clearly there are opportunities here still to create content and to create destinations on the web.
And I don't think that's going to change.
I love it, man.
Talk to me, because people are listening and going, wow, this guy really is the AI Whisperer.
How are you helping people?
I know you're putting out
prompts and things like that, but let's spell it out. What kind of value, how people can
not only get in touch with you, but what they can learn from you and some of your value props.
Yeah. So on Twitter and on LinkedIn, my goal is to have my free content
be better than people's paid products. So I try and put out just some of the best thinking, and not just these threads of '99%
of people don't know how to use ChatGPT' where you find that you've heard all the tips in it
before.
And it's, how could only 1% of people know this if I've already heard it before?
Yeah.
Like I try and really bring it and I'm doing independent research and innovating and things
like that.
I've got a popular newsletter, eight or 9,000 subscribers, right? Where I break things down.
And it's really based on practical applications of AI, like in business or to solve like real
challenges. So instead of you showing, Hey, here's a cool prompt where you can have it,
write a limerick about your company or something. It's like, who needs a limerick? Let's talk about
how we implement Alex Hormozi's strategies in $100M Offers and have the AI do most of the hard work and take a process that takes all day
and turn it into a process that takes an hour. Let's look at real things that we can do with
this stuff. So you can find like all that. I've got a website called mindmeetsmachine.ai,
where you can find my Twitter, my newsletter. And then I'm also, I've got some courses,
my Twitter, my newsletter. And then I'm also, I've got some courses, one called Content Reactor for content creators. And I'm, by the time this airs, my next one will be launched. It's a masterclass
on advanced prompt engineering, containing techniques that most people in the world have never
seen or heard of, some that I've developed myself. So I'm really trying to help bring the
industry forward with all this stuff.
You definitely are, man. That's why you're sitting here on the Radcast: because of all the value you're putting on LinkedIn. I saw it immediately, because you see a lot of content,
and yours gives real value that helps somebody know what to do. And you're giving real-world prompts,
stuff that I've already used. And I'm like, dude, not only does this guy know it,
but there's a generous nature to it that he knows how to play the long game.
And I think it's why you're so successful.
Yeah.
And that's always been my philosophy with content: there's the cheap win and the easy quick win, but it's not sustainable.
And people are just as likely to leave you as an audience member the next day for the next easy dopamine hit or whatever
it is. And then there's the real value that you can create. And I think
I've demonstrated this in terms of how I've run my life. And so anybody out there who's
listening to this, who's creating content for your business or for your personal brand: if you
go further than anyone else is willing to go, if you invest harder than anyone
else is willing to invest in the right topics and categories and in the right way, obviously you
have to solve problems for people, it can create relationships and business outcomes
that are way beyond what doing the easy stuff does. Bingo, my friend, it's my life
mantra right there. You just summarized it. So good. So great.
I really appreciate you coming on, Rob, and spending time adding value.
You guys, we'll have all the links in the show notes and really appreciate your time, Rob.
Well, thank you.
Yeah, this is a good conversation.
Hey, guys, I want you to go to theradcast.com.
I want you to search ChatGPT.
You're going to get all of the highlight clips from today,
and there's going to be a bunch.
Everything, all the knowledge that Rob dropped,
the full episode, the short episode,
and you'll see it all over our social channels
for the next few weeks.
You'll find me, I'm at Ryan Alford
on all the social media platforms.
Blown up on TikTok, go follow me over there.
We'll see you next time on the Radcast.
To listen or watch full episodes,
visit us on the web at theradcast.com.