Heads In Beds Show - Building Prompts For Better Responses From AI Tools
Episode Date: April 2, 2025
In this episode of The Heads In Beds Show, Paul and Conrad talk all things prompts, AI, how to get the best output for various marketing tasks and a LOT more...
⭐️ Links & Show Notes
Paul Manzey
Conrad O'Connell
Anthropic Prompt Generation Tool
Conrad's Book: Mastering Vacation Rental Marketing
Conrad's Course: Mastering Vacation Rental Marketing 101
🔗 Connect With BuildUp Bookings
Website
Facebook Page
Instagram
🚀 About BuildUp Bookings
BuildUp Bookings is a team of creative problem solvers made to drive you more traffic, direct bookings and results for your accommodations brand. Reach out to us for help on search, social and email marketing for your vacation rental brand.
Transcript
Welcome to the Heads In Beds Show presented by BuildUp Bookings.
We teach you how to get more vacation properties, earn more revenue per property, master marketing
and increase your occupancy.
Take your vacation rental marketing game to the next level by listening in.
I'm your co-host Conrad.
And I'm your co-host Paul.
All right, Paul, the director's cut version take two, let's try that. How's it going? What's going on?
Take two, take two. Let's see here. We'll go over what we went
over just now. Didn't talk sports before. So now we've had
like two minutes of pre recorded sports and then my computer
decided to say, Nope, let's try it again. So happy opening day
to those who celebrate
for those who are listening to this then in April
when the twins and Red Sox are respectively
wherever they need to be.
Could you imagine if they were actually both good?
That'd be entertaining.
That's, I mean, can't happen.
Red Sox will have their year this year.
They'll be okay.
They'll be okay.
That's my dad's assessment.
Okay to good. That's fair. That's fair. I think that's fair. So yeah, I mean, this is the time of year that
I can say I like this time of year. March madness, golf, everything is happening. We love it.
But how are you doing? How are things going? Yeah, pretty good. Can't complain. Minimal technical
issues here. So let's hope that continues, I guess, for the next little bit. Yeah, just, um, it's been a, you know, it's been a little
bit of a week for me to catch up slightly. I've had meetings this week, but I would say
for whatever reason, just a little bit of a calmer week for me. So maybe I think some
of our clients get busy with spring break. And I think that will take their attention
away from marketing. And it is funny how sometimes clients disappear, you know, during
certain periods. And I have people who I talked to, who don't really understand exactly what
we do. And I'm like, our work is kind of done in advance, our
work is done ahead of time almost, right. So it's like,
once Fourth of July is here, there's nothing we can do to
actually influence the Fourth of July occupancy.
Fourth of July will be baked, you know, in how it's going to
perform at some point in the near future. And then it's up
to us as marketers to focus on the next thing. So when our
clients get busy, sometimes that means they, you know, gives me a
little time to catch up or work on things. So it's been pretty good. I think we have a few new
interesting buildup bookings initiatives in the hopper. We're actually trying to finalize some
partnerships with some of the large PMS companies, which has been an interesting process to learn
about what they perceive partnership to be or how we could potentially become partners and how they
can send some customers our way and hopefully vice versa. So it's been a good, it's been a good time
today. But this episode here, I don't have a great term for it,
it's just been on the docket for a while
and something that maybe we could almost update
our thinking on this from time to time.
But I think now, it's hard for me to count.
I would assume the number is into the thousands
at this point of number of prompts that I've done
into an LLM tool of some flavor.
I know we've talked on a previous episode
about kind of the one that we've chosen.
I've been more in the ChatGPT camp, team ChatGPT,
you've been more in the team Gemini camp. But I do think that some of the principles we'll talk
about today apply to pretty much any of these LLM tools, including Gemini and ChatGPT. I'm using
Claude a little bit more lately with the book that I'm working on version two, which has been
interesting. So yeah, the goal today is to kind of talk about what it would look like to build a
prompt library inside of your business. And then I have some very specific examples, and a link
that I could put in the show notes to things
that we've already begun working on
and things that we've learned building these prompts,
because I think ultimately the quality of the response you get
is pretty strongly dictated by the prompt
that you use with these tools.
And I see people who say,
the stuff I'm getting out of the AI tools isn't very good.
And I think probably 75% of the problem is
they're not giving it the right context
and they're not even giving it a good prompt.
So I was kind of curious kind of what your process has been like learning how to use these tools,
because I feel like we're all just learning together and what you've explored so far with prompting.
I think one of the reasons, one of the big reasons that I haven't gotten as much or didn't get as
much out early on, get out of the AI and the LLMs is because I didn't know how to write a prompt.
And it does, it takes some time. I think
the one thing that as we all jumped in and tried to learn how to do it on our own, we
forgot that there's actually some really good resources in helping you present the right
information so that you are educating and teaching the LLM to give you your desired
results, and that's a weird way
of putting it, because sometimes you're going to get an answer that you don't want to get.
But I do. I think that that's probably one of the things that I've had to work on the most is just
being as specific as possible and not treating it as like a quick conversation. A really good prompt is probably not going to be a single sentence.
It's probably not going to be a true dialogue with the LLM. You can get there and at some
point once you've educated it and once it's learned your voice and gone through some of the
things we're going to talk about here, it's going to give you the output that you're looking
for. But that was my biggest struggle. And I think in ChatGPT, you need to write better prompts. I think Gemini gives you a little
latitude, maybe just with more of the updated information, more of the real-time information
that's kind of always been part of Google's side of things is that they've always had fresher data
that they're pulling from. So while the prompt is still very important, it's not as
critical to getting that output. Whereas ChatGPT, if you give good data in, good data out, if
you give ChatGPT some good data, you're going to get some really good output. So we're kind
of going through the rules, going through what we should be doing, what we shouldn't
be doing. And I think part of this is going to be things you should be using ChatGPT or an LLM for, and things you shouldn't be using an LLM for. And I think
there's definitely some areas still where LLMs are not a better solution than
a human. That's not always the case or it is. There's some complementary,
supplementary actions there. But yeah, I think overall this is, it's still a, I mean, we will
probably look back at this episode in six months and say, well, that's obsolete
or that has nothing to do with this. I mean, I think that's the...
It could happen in six days, honestly, the way that things are going.
I'm probably giving it too much time with six months there, but it is. I think
that's where it's important to just kind of keep on learning, touching, feeling, testing, iterating.
And we talk about this in a lot of things we do in marketing, but even more so with the progression that we see from these LLMs. You've got to keep learning, because they are learning, and then you have to be able to catch up and make sure that you're still delivering on the high quality and taking advantage of what's out there.
Yeah.
Well, just to hammer your point home
about how quickly things change, we're recording this,
as Paul mentioned here, on March 27.
And basically two days ago, ChatGPT completely,
I would say, fixed, if that's a fair assessment,
their image generation.
They've been able to generate images now for,
I think, since launch or very close to launch, with DALL-E.
But some of these images were strange to say the least.
And obviously there was a long time period there where like,
you would ask them to generate a picture of a person
and the person would have like nine fingers.
And like, you know, there was glitches, you know, in the code.
And it was kind of like, man, I could see where this is going,
but like it's pretty far off.
Like I'm sure it'll take a while for them to get it right.
No, they got it right now.
Like if you go into ChatGPT-4o,
you ask it to make me a picture of a male model hanging out of the
beach in front of a luxury vacation home. It looks incredibly realistic. I mean, yeah, you can tell
if you study it almost that it's not a real picture, but if you're just scanning it quickly,
no longer do we have nine fingers, and text on the images now works perfectly. That was not
the case a week ago. You would generate an image and some of the words would be wrong, some would
be right, or some of the letters, I should say, would be right and wrong. And now I think ChatGPT's got
that dialed in. I don't know exactly what changed. Maybe there's someone who can articulate that or
explain that, but I don't exactly understand it fully myself, to be honest with you. But now,
I think it's pretty easy to mock up an ad using a prompt and ChatGPT. That was definitely not the
case, like I said, even a week ago in here, we stand today and it's pretty much, you know, works pretty well in that respect.
It's a little bit slow, but the quality you get is honestly,
that's what I keep noticing, right?
Is like all the people that provide like very inexpensive,
very low quality services,
I feel like have to just be getting ruined right now by ChatGPT
or what I will say is that they can actually raise their bar a little bit.
And I kind of have these like two thoughts in my head at the same time,
which is like people in Twitter, I think, tend to overreact to this kind of stuff. They're
like, oh, like designers are done, or, you know, we don't need programmers anymore, or stuff like
that. I don't agree with that assessment. I think that's a little bit too, the pendulum swinging too
far the one direction on those sorts of like deterministic statements. I think it's more so
what you're going to see is like, if you're having like a low wage person doing a specific task for
you, they're going to do it a little bit better now than they could before if they're using ChatGPT.
If you are one of these overseas people
that makes $5 to $15 an hour US
and you learn how to use ChatGPT,
you still can find a lot of value
because you're basically saying,
I'll run it for you, I'll give you all deliverables,
I'll put all the files in the right spot,
I'll schedule the social media posts,
I'll do X, I'll do Y, I'll do Z.
And until we have everything end to end with AI,
which I don't think,
I don't know if we're actually gonna get there.
If we do, it almost has me worried
for like the quality of content on social media, right?
If you could just like literally click a button
and just be spamming out hundreds of images a day,
that kind of almost feels like a, you know,
a negative direction we're heading in almost.
But yeah, I think that the quality that is coming out
has changed drastically in the last week for images.
To your point, who knows where it's gonna be six months
from now, it's hard to know, but we could do
an updated version perhaps.
I was gonna say the same thing in Gemini released something
that must have been, they must have timed that out
because Gemini released something very similar
on Flash 2.0, and Gemini has made, I would say, an important effort.
They messed up images bad.
And Gemini really didn't do images well initially there.
That was not good.
But, and I think they took them out completely.
Or for the most part,
you weren't able to generate a lot of images.
But Flash 2.0,
I was doing some testing with that earlier this week as well.
It's night and day difference as well.
So when we think about images,
well, the next step is almost certainly video.
And then where does that go from there? And then how do we, so here we go. Let's jump
right into it and talk about what we're gonna get into.
One more little dust-up before I jump into it. So I put out a LinkedIn post the other
day actually using Gemini to edit a sunset into a vacation rental picture. And there
were some, there were some people left comments that weren't the most friendly and positive
basically saying like, oh, this is crazy. This is false advertising. You know, the sunset didn't actually happen.
You know, and I'm like, here's the analogy that I've given to clients about editing photos, right?
I think if you take a real photo out of a camera and edit it heavily with respect to things like
clouds and sunsets and sunrises and things like that, the analogy I've been doing is like,
it is no different than if you are Victoria's Secret and you send out a model onto your stage
and you make sure that model has the makeup sprayed on perfectly
to her face or her body or something like that, right?
Like you could argue, hey, that's false advertising.
That bikini, that thong doesn't look the same on, you know,
insert woman here as it does on the model
and the Victoria's Secret model walking on stage.
I would argue that they're both right and wrong at the same time.
No, you're not going to look the same as the model,
but the point of the model is in a way to kind of show it in its best light
or like show that product in a way that people will desire it.
And of course the same thing happens.
Go look at a male news anchor who goes on TV and reads off the news.
He's going to go into makeup and hair and get his hair and makeup done properly.
If I see him, if he wakes up in the morning,
does he look exactly the same way as he does on TV?
No. Is that false advertising about his appearance? Again, yes and no. I
guess that's kind of how I feel about some of these AI edits and things like that. We're
putting makeup on an image for sure. And it's now easier to do that with AI. But to be clear,
this is already happening with existing, you know, manual edits and things like Photoshop
and stuff like that. I think at some point, we all have to draw a line what we're comfortable
with and what we're not comfortable with. Maybe some clients would say or some people
would say, hey, editing color and changing color and
brightness a little bit, that's reasonable, but changing the photo to be completely, you know,
an AI generates sky completely is a little bit misleading. Okay, like that's a reasonable line
to draw. I'm not going to argue too much with it. But I also would say it would be foolish to like,
ignore these things, because I think if you ignore them, you're going to find yourself in a tougher
spot when you realize your competition's using it and they're getting the attention, they're
getting people to click on their thumbnail on Airbnb
much higher than you are, and then you're left behind
because you weren't willing to do some photo editing
and photo manipulation.
But I totally understand people having some kind of set
ethical standard or what they feel is fair and not fair,
and then not wanting to go past that.
I just think it's my personal experience
that you've got to be careful about that side of it.
So yeah, that's my thoughts there. But yeah, it's a rapidly changing world to your point, Paul.
And if you're not paying attention to what's going on, and your competition is,
I do think it puts you in a bad spot. So number one, let's talk about the prompt writing side of
it. I think this is a good one. This is a good thing, just business and life advice, right?
Almost in a way, start with the end in mind, which is what do you actually want the LLM to produce?
What do you want Gemini to produce? Do you want it to produce an outline, a list, you know, a
blog post, social media, you know, that sort of thing. What is the actual thing that you're
looking for at the end of the day? And sometimes for me, it's like brainstorming. Like, I used
ChatGPT the other day, this was for the presentation that I'm doing next month for
the VRMA foundations event in Savannah, Georgia. And I was trying to describe a concept that I know,
in my head, kind of what I want to describe,
and I couldn't actually get it there.
So I said, I need you to help me come up with this concept.
I need like a fun little like one or two sentence way,
ideally one or two word way to describe what this concept is.
And teaser, I will tell you down the road what that is,
but long story short, ChatGPT helped me come up with it
based on its thinking modes,
like the four thinking modes, right?
So my end in that case was like,
I need you to help me brainstorm X, Y, Z.
Here's some initial thoughts that I have, take that.
What do you think?
Give me some different ideas to come back with.
And I think it's really, really good for that.
A lot of what we do in ChatGPT for sure,
like Google Ads Copy is a good example.
Like here's a bunch of information
from the client's landing page.
The deliverable I want from you, Mr. ChatGPT,
or Mrs. ChatGPT, who knows,
is I want 15 headlines of X number of characters each.
that clearly describe the client's benefits and features
and so on and so forth.
So if you start with that end in mind,
I think you get a lot closer.
If you just say something very broad to these LLM tools,
write me a blog post about the best things
to do in Minneapolis.
You end up with pretty low quality content,
generally speaking.
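As a loose sketch of the deliverable-first idea (the function and parameter names here are just illustrative, not from any real tool), the Google Ads headline example might be assembled like this:

```python
def build_ads_headline_prompt(landing_page_text, n_headlines=15, max_chars=30):
    """Assemble a prompt that states the deliverable up front:
    how many headlines, how long, and what they must convey.
    The 30-character default matches Google Ads headline limits."""
    return (
        f"Deliverable: {n_headlines} Google Ads headlines, "
        f"each {max_chars} characters or fewer.\n"
        "Each headline should clearly describe the client's benefits and features.\n"
        "Here is the information from the client's landing page:\n"
        f"{landing_page_text}"
    )

# The first line the model reads is the deliverable, not the source material.
prompt = build_ads_headline_prompt("Oceanfront vacation rentals in Savannah, GA.")
```

Compare that to the broad "write me a blog post" version: the model gets the end product, the constraints, and only then the context to work from.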
So I do think that's the first rule there. What's kind of been your experience with like that initial
set of like telling it what you want? What's been your thoughts? You have to be as clear as possible.
I relate it to talking to a child. And that's something I talk with my wife about a lot, is that
I have to be as detailed as possible to make sure there's no wiggle room. Because when we
leave that room for interpretation, oh, boy, is it always interpreted not exactly how we want there. So I think that's, I mean if you're
if you're a parent of young children, think about it like you're giving
instructions to your five-year-old. Like go up and brush your teeth.
It's not just, go up and brush your teeth. It's, go upstairs, grab your toothbrush, put
toothpaste on the toothbrush, go through that entire process. And again, it probably seems a little silly
when you're writing that prompt.
But truly, if you go into that depth and detail,
you are going to get a way better output.
So, I mean, that's my rule.
That's become my rule of thumb of writing better prompts
is write it like I'm talking to my five-year-old.
And sure enough, I get better outputs
and I don't get top feedback
from the five-year-old. So it's win-win for me. Yeah, I think it's a good rule of life in general,
you know, like over-explain things. We've talked about this at length in our onboarding process
lately with our CSM team, which is like, have you ever heard a client complain that you've
over-communicated to them? Like, I'm coming up on 10 years now, hasn't happened yet. So I'm like,
until we hear that complaint, I'm going to keep encouraging us to communicate more, tell them more information about what's going on and so on and so forth. And if that complaint does come in, we can dial it back, right? But it's like people like go from levels one to two, and they're like, Oh, I like to turn my communication up a little bit. And it's like, No, dude, like you could have put it to 10. And the client would have told you to stop at 20. You went from one to two, which is not really a big difference. So I feel the same way about kind of instructing these LLMs, which is like, yeah,
be very clear, be very direct with what you're trying to accomplish. And, you know, it's just,
it's just a good way to actually start the prompt, is like, I am looking to do, I'm looking
to build, you know, x, y, z. Well, let's go over number two, then. And I think you already had a
take on this initially. I'd love to hear your perspective, because although I have run plenty
of prompts through Gemini, I bet our usage is probably 50 to one, like you've probably sent 50 times
more through Gemini than I have, and probably vice versa on ChatGPT. So the idea of giving roles and context,
I'll be honest, I did not do this at first. And I thought like, this seems silly. Why am I telling
it what it is? Like, so for example, you are a senior copywriter for a digital marketing agency
that works with vacation rental managers. And now, by the way, ChatGPT kind of knows some of these things,
because it like saves your past chats and history and it kind of learns from you, which is really interesting.
But anyways, broadly speaking, telling it each time of like who the LLM is and what context they should have.
I think the reason I wasn't doing it at first was it just felt kind of silly and I didn't understand the point.
Now that I've done a few hundred, again, maybe a few thousand prompts, I now see the difference in the output.
Here's kind of also what I, this is another take I have, and I don't know if this is true.
This is just my perception.
As these LLM tools have gotten smarter and smarter, and they've gotten more and more
knowledge that they're trained on, I think the fact they know everything is both a blessing and
a curse. Because what happens is when you take the averages of everything from a copy perspective,
they're trained on both great high-performing converting copy, for example, let's say for an
email newsletter, and it's trained on very bad, mediocre, just junk copy. So if you ask it,
write me an email newsletter,
subject line based on, you know,
I want people to open my email.
Hey, it's gonna give you some ideas.
Some of them are probably gonna be decent,
some are gonna be bad, some are hopefully gonna be good,
and then you can get something there.
But when you say act like a direct response copywriter,
like David Ogilvy, for example,
was the one that we talked about in the outline,
and write a headline for this particular service
in this way, it now goes (this is my perception, I can't prove this, it's just been my experience using it)
and narrows down its whole training set and says, okay, let me go to my little file here
in my database of David Ogilvy stuff, you know, I'm trained on that in theory. What I'm assuming
is happening is that it's mimicking that and not mimicking everything. I think mimicking
everything is almost like ending up with like gray paint, right? Like we take all the paint colors,
mix them together, we get gray paint. Whereas you're telling it, no, I want something bright.
Use a bright red paint here.
Use a bright green paint here.
And that's what I think you're doing
by giving it that type of instruction.
So I now am doing this not on every prompt,
but regularly, particularly for specific types of copy.
I really am giving it that specific context
of act like this person or act like this thing.
And I've seen much better output from it.
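In API terms, this role-and-context framing usually travels as a system message ahead of the task itself. A minimal sketch using the common chat-message shape (provider, model name, and the actual API call are omitted since they vary by tool):

```python
def build_messages(role_description, task):
    """Pair a role-setting system message with the actual task, so the
    model narrows toward that style instead of averaging everything."""
    return [
        {"role": "system", "content": role_description},
        {"role": "user", "content": task},
    ]

# The role described in the episode, passed once up front rather than
# repeated inside every request.
messages = build_messages(
    "You are a senior copywriter at a digital marketing agency that works "
    "with vacation rental managers. Write direct-response copy in the "
    "style of David Ogilvy.",
    "Write a headline for a beachfront property management service.",
)
```

The same list can then be handed to whichever chat API you use; in ChatGPT's web interface the equivalent is simply opening the prompt with the role sentence.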
But you say you don't do this quite as much.
What's the process you've learned on Gemini in that respect?
I mean, I think that's something where I feel
silly trying to tell someone what they are. But I think I'm giving it the tools, like,
that's a role and responsibility:
senior copywriter for a digital marketing agency. I think it scares me that a robot, an LLM, an AI
being now has a set of qualifications to be that role
and it's gonna learn and do those things. I grew up with
Terminator movies. This is something that I'm still coming
to grips with. And I think maybe that's half of what I
see.
It's key for us to learn this, Paul. It's also key for us to learn how to, or teach
them how to learn this as well, you know, so a few years from now when they're
trying to take over, we can be like, they're coming up to us like, you are a
nice librarian who is not going to injure me, and then the Terminator, the T-1000, is
like, oh, it's okay, he's a friend, you know. I think I've
done it more not at an individual role level, but at like the
company level. I think that's where my descriptions have been a little more. And I don't think
of that as a role per se. It's, you are a vacation rental management company, you are this, you
are a podcaster, for one we're doing stuff with. But I think that when we put enough
specifics and specificity into the details of the prompt itself without that role I think it still delivers a
pretty solid outcome. What I would love to do, you're a senior
copywriter, now the second prompt is you're a senior editor for a digital
marketing agency and then having them come through and try to understand the
context there. That's where I could see you could start to build out kind of a
multi-role or multi-step or
running something through a Clay system where you're running the content through four different prompts
at different levels within the organization. But an editor role, as opposed to a
senior copywriter role, is going to have very different specifications that you're going to have to put in there. So yeah, I think most of what I use the LLM for, I
don't know, it is more for an answer as opposed to an end product. I think
there's some content writing and stuff like that, but most of it still
is for an answer. I think like I still use an LLM more as a search engine than
I do as an active doer
and participator in the process. And maybe that's something, probably going back to
Terminator? I don't want a Terminator to come after me and do anything like that. So I think
that it's still effective. Does this make it more effective? I absolutely would think
that that's the case. And I think as, again, these LLMs are seeing the same, you are a senior copywriter, and
seeing the same roles come up over and over, they have a better idea and understanding
of how to put out the content, put the output out there that's going to deliver so there's
not a refactoring.
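Paul's copywriter-then-editor idea can be sketched as a simple two-step chain. `call_llm` here is a hypothetical stand-in for whichever model call you actually use, stubbed as an echo so the flow is visible and runnable without an API key:

```python
def call_llm(prompt):
    """Hypothetical stand-in for a real model call; echoes the prompt
    head so the pipeline below runs without any API access."""
    return f"[model output for: {prompt[:60]}...]"

def copy_then_edit(brief):
    # Step 1: draft in the senior-copywriter role.
    draft = call_llm(
        "You are a senior copywriter for a digital marketing agency. "
        f"Draft an email promoting: {brief}"
    )
    # Step 2: feed that draft to a separate senior-editor prompt,
    # which has its own role and its own specifications.
    edited = call_llm(
        "You are a senior editor for a digital marketing agency. "
        f"Tighten and fact-check this draft: {draft}"
    )
    return edited

result = copy_then_edit("spring break availability at a Gulf Coast rental")
```

Each stage gets a distinct role and its own instructions, which is the point Paul raises: the editor prompt needs different specifications than the copywriter prompt, not just a second pass of the same one.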
Because I think from the other side of it, I don't think the LLMs, I don't think ChatGPT,
I don't think Google, want to have to give multiple
responses. You want that output to be exactly what someone's looking for. You don't want to have to
further refine. And that's something that certainly in the early days, we've had to refine some of the
content that's come through. I think it's gotten better, but just like Google wants to deliver the
right answer for anybody who does a search, they want to deliver the right output for anybody who's using the LLMs.
And then as a retention effort, that's the same thing.
And just, I mean, if you keep getting the right answer when you go to Google and you
go to Bing, you're going to keep using those search engines.
They know that you're going to, I mean, ChatGPT knows you have to keep people there.
You have to keep giving them the right answer.
Google the same way.
So I think that that's certainly as much as we can build in,
it's going to be helpful.
But from the same perspective,
these LLMs want to deliver the right answer too.
So.
Yeah.
I guess the way I think about it too,
is I do believe that you're correct,
which is like, this feels a little clunky almost in a way.
Like I can't imagine it'll be this way forever.
I feel the way about a lot of the LLM things
that I do today is like,
like one of the most obvious things that I see is like,
why am I always hopping into ChatGPT from another application?
Like I think one thing that I believe we have to get to soon enough is this idea
that like ChatGPT or any sort of LLM type tool is at the system level.
And so I'm surprised really Apple's been really far behind in this respect.
And I assume either they're just behind
and it's going to take them a while to catch up, or they're planning something and we
just haven't seen it yet where it's like the system level AI is the thing that's most appealing to me.
Right now I feel like I'm bolting in a lot of third party, I guess like tools and integrations or extra things
that I kind of wish was just happening on the actual machine itself, you know, like for example,
I have a separate app open, we're recording this podcast and you have the outlines in there.
It's like if I wanted to change the output of the outline, I have to grab it, go back into ChatGPT, refine it,
put it back in there.
I'm having to do a lot of hopping,
which is OK for right now.
I mean, it is what it is, right?
But I would hope, or I assume that down the road,
it's going to be just there.
And I don't have a ton of PC experience.
I have a PC in my garage that I use for other purposes,
as you know.
It's not related to marketing at all.
And I see that in Microsoft, I have this ability
to right click and run Copilot or like run stuff natively.
And I just haven't used it enough to really have enough context there.
So I don't really get how it works. Maybe I should play with it one day and like just kind of use some of those tools because that I feel like it's an interesting direction.
Because I almost wonder if I could right click say like, all right, here's what I'm trying to do.
This is the like a pre-built almost like prompt or pre-built little engine to say like this is what I'm working on right now.
Click here. Here's a little bit of information to go from there.
Also, I don't know. what's your perspective on this? Do you
ever use voice to input stuff on desktop or do you only ever type it? What's your experience on that?
This wasn't our outline. I'm just curious. So that's something where if it's going to be,
and I do it on my phone just because I want to keep talking and I want to make sure the prompt
hears everything there. But I do the same thing. Like it's going to be a little more in-depth,
and you know I kind of like to ramble from time to time there. And I do, I like having just kind of
giving it the full context and letting it kind of read and react and do things like that. So
I think I do a better job of, personally, I do a better job of verbalizing things sometimes than
I do writing. It takes me a while to write things out and make those connections.
So I absolutely do that.
I wouldn't say the output is any better, any different or anything like that.
You feel like your input is better because you're giving it more.
Exactly.
I feel like I'm giving it more context and a fuller description, because I am. The one thing I do like about it is that it's able to follow my tangents. That's the best part about just being able to blurt it all out: you're going to get all those details in there that you'd lose if you were typing it out. I've got fat fingers and I type pretty slowly, so voice makes it easier to build a more comprehensive prompt.
I've done it while I'm reading results from Google Ads or Google Analytics: without copying and pasting that data and that information into a prompt, just reading it off and trying to pull it all together that way, to get the story and get a summary. What type of insights can I pull from this? Google's got their own insights, but what type of insights would you, as an LLM, as a marketer, find by pulling all this together?
Yeah, maybe I can explore that a little bit more. Right now, I would say 90% of my input on desktop is strictly typing it out, or copying, pasting, and modifying the prompt and going from there.
But maybe it's something I could play with a little bit more to explore more voice.
Because I think in theory, going back to the second point here, just with respect to giving
it more information, it does feel like a more natural way of doing it.
I'm using one of these tools called Granola for AI meeting recording. It's the only one I've actually liked. It doesn't have a little bot that joins the meeting; I find those bots kind of annoying, generally speaking. But Granola has been nice because it gives you the full transcript, both sides of the conversation.
It also gives you the ability to like,
hey, here's our pre-written summary of the meeting.
But if you want to go in and tweak it, you can do so.
Like all these other tools that I've used,
they give you a summary and you can't really change it.
It's just like, hey, here's our summary of the meeting.
This one you could actually run basically different prompts to get back a different summary of the
meeting. So if it's with a team member, there aren't always necessarily to-dos for either side.
It was more just a discussion around, hey, here's some things we're struggling with. What do you
think about it? What do I think about it? That's different in my mind than I have a client meeting
and it's like, at the end of this, there'll be five to seven to-dos that will spawn out of it.
Some of them may be my responsibility. Some of them may be for some of my team and so on and so forth. So I want to use different
things to summarize that activity, which works so much better in voice. It's funny: when I copy one of these transcripts out of Granola, like you were saying about words per minute, some of these things will be 20,000 or 30,000 characters, because a lot of discussion happens and they're transcribing the whole thing. And it's actually amazing to me that it seems to nail it pretty much.
It's so rare that it misses something because a lot of stuff in there was
just not actually the core of the meeting.
You have a one-hour meeting and like I said, there's five actual action items
out of a one-hour meeting maybe that you actually needed to do something with.
But the rest is just a little bit of fluff, a little bit of nonsense,
a little bit of small talk at the beginning, all this kind of stuff.
And it's interesting how well AI actually does a good job of getting to the core of the issue very quickly, which is something to consider.
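As a rough sketch of the workflow described here, using different summary prompts for internal versus client meetings, the idea might look like this. The template names and wording below are invented for illustration, not taken from Granola or any specific tool:

```python
# Sketch: pick a different summary prompt depending on the meeting type,
# as discussed above. Templates and wording are illustrative only.

SUMMARY_PROMPTS = {
    # Internal discussion: capture themes, don't force a to-do list
    "internal": (
        "Summarize the key discussion points and open questions from this "
        "meeting transcript. Do not invent action items if none were stated."
    ),
    # Client meeting: pull out concrete to-dos and who owns each one
    "client": (
        "Extract every action item from this meeting transcript. For each "
        "action item, note who is responsible (me or my team) and any deadline."
    ),
}

def build_summary_prompt(meeting_type: str, transcript: str) -> str:
    """Pair the right instruction with the raw transcript text."""
    instruction = SUMMARY_PROMPTS[meeting_type]
    return f"{instruction}\n\n--- TRANSCRIPT ---\n{transcript}"
```

You would then send the returned string to your model of choice; the point is that the same transcript gets a different prompt depending on what kind of summary you actually need.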
So interesting stuff there. All right. So we've kind of covered so far: starting with the end in mind, giving it roles and context, working on your input when you're giving the roles and context, and giving it very detailed instructions, either written or verbal, or some combination of the two, to get you to the right spot. Let's talk about examples: giving it some training, some example copy of what you're looking for.
I think this is actually probably the one that,
you know, you can get all the way there.
And then if you don't give it the example
of what you're after,
that's when you were like,
I tried it and like, it didn't work for me.
Like I would say,
when I've seen people not give it the right example,
that's when it really fails.
Because when you give it the example, we're no longer just taking the AI at its word with "just write an article for me." We're actually saying: here's an article that I wrote, mimic my style, mimic my writing examples.
So with version two of the book I'm working on right now, Claude has actually read my entire book. In the project view, I can give Claude my entire book, barely, by the way. There's a one-million-token context window, and my book is roughly 800,000 tokens, so I can't give it much more than that.
But like that alone is like, Hey, that's enough of my writing to be like, here's
what I did. Here's the book. I know I'm going to go and update these chapters and
so on and so forth. So that's an extreme example, a very, very long book, hundreds and hundreds of pages. But you can do the same thing with any copy: here's my style of blog writing, here's my style of email writing. My signature is always "All the best - Conrad," right? If you give it that, it's going to make sure it uses it. And I start emails with "Hey." I'm just a "Hey" person, not a "Hi" or "Hello" or "Regards" person. So if it was writing an email for me, I'd want it to open with "Hey," because that's how I open an email. That's just my style. So I think some people probably don't do this, and then they get disappointed by the outcome. And they were so close; they could have gotten it there. They just needed to give it 10 or 15 examples of what they're after, and then I think they would have gotten way better output.
So this is one I've actually been doing for a while.
I would argue that if you do this well, maybe that's why I've been able to get away with not making my prompts as good on the other elements. Because if you get this right, it helps a lot. What's been your experience here, as far as giving it an example or sample work to mimic from?
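As a minimal sketch of the style-mimicry idea Conrad describes, front-loading real writing samples so the model copies your voice, assuming the common role/content chat-message convention. The sample emails below are invented for illustration:

```python
# Sketch: show the model your own writing before asking for new copy,
# so it picks up your greetings, sign-offs, and phrasing habits.

def build_style_prompt(samples: list[str], task: str) -> list[dict]:
    """Assemble a system message packed with writing samples, plus the task."""
    example_block = "\n\n---\n\n".join(samples)
    system = (
        "You are a copywriter. Mimic the voice, tone, and signature habits "
        "of the writing samples below (greetings, sign-offs, phrasing).\n\n"
        "WRITING SAMPLES:\n" + example_block
    )
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": task},
    ]

# Hypothetical samples showing the "Hey" opener and signature habit:
samples = [
    "Hey Sarah, quick update on the spring campaign... All the best - Conrad",
    "Hey team, three things before Friday... All the best - Conrad",
]
messages = build_style_prompt(
    samples, "Write a short email announcing our new beachfront listing."
)
```

The same pattern works for blog posts, ad copy, or social captions: swap in 10 to 15 samples of whatever format you want mimicked.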
That's something I've plugged in for my own social media stuff and for clients' social media, trying to keep it in that voice. Because that can be a struggle: keeping it in that branded voice and that branded output. So being able to put in social media posts, put in video inputs, put in what you have in the book, that's a huge piece of input there. But anytime you can give examples, give an example of good versus bad. Try to educate the LLM a little bit: this is what I'm looking for, this is not what I'm looking for. Again, it's everything we talk about: making sure you're putting the right information in there.
These systems are smart. We don't have to overthink it too much; we just have to think logically. If I put in 10 good example headlines for Google Ads, and I put in 10 examples of bad headlines, Google is going to learn right away (Google should know regardless, but I digress). It's going to know: this is what's going to work, this is what I'm looking for, and this is what I'm not looking for.
So anytime you're able to anchor and get back to something you're looking for, again, it's not about getting the right answer, it's about getting the desired output. When you're educating it to get that output, that's always going to be beneficial for us. But in that same vein, don't be afraid of progressive prompting. This is, again, not something I naturally do, the follow-up, but you've got the example here. First prompt: give me three email subject lines for our Black Friday sale at Beach Rentals. Follow-up: make them more curiosity-driven and include a sense of urgency. Final prompt: now expand the best subject line into full email copy and a CTA with urgency. That's a very clear progression, and I think people who stop after that first prompt or that second prompt aren't ultimately getting what they're looking for, unless they were satisfied with that answer. That's another area where I get to that first prompt, sometimes that second prompt, but there's still more. There's more we can drive. And sometimes it's a time thing, a patience thing.
You know, we could really go down the line
of what is stopping someone from continuing to go further.
But is that something that you do a lot
as you're trying to work through your chain of prompts there?
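The Black Friday progression Paul walks through can be sketched as an ordered chain of prompts. Here `call_llm` is a stand-in for whatever client you actually use (ChatGPT, Gemini, Claude); it is stubbed so the chain logic runs without a network call:

```python
# Sketch: progressive prompting as a chain, where each follow-up refines
# the previous answer instead of starting a fresh request.

PROMPT_CHAIN = [
    "Give me three email subject lines for our Black Friday sale at Beach Rentals.",
    "Make them more curiosity-driven and include a sense of urgency.",
    "Now expand the best subject line into full email copy with a CTA and urgency.",
]

def run_chain(call_llm, prompts):
    """Send each prompt in order, carrying the whole conversation forward
    so the model keeps the earlier drafts in context."""
    history = []
    for prompt in prompts:
        history.append({"role": "user", "content": prompt})
        reply = call_llm(history)
        history.append({"role": "assistant", "content": reply})
    return history

# Stub model so the sketch is self-contained:
def fake_llm(history):
    return f"(draft after {len(history)} message(s))"

conversation = run_chain(fake_llm, PROMPT_CHAIN)
```

The design point is that the history list grows with every turn, which is exactly why a long-running thread eventually "forgets" its start, as discussed below.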
Yeah, this is one thing I learned early on. One of the most valuable use cases for me with AI lately has been proposals, writing scopes of work based on a sales conversation. I tried to, so-called, one-shot it: here's the transcript from the call, here's what I'm looking to do, write everything, write the goals, write the proposal, basically. And I don't know if it's lazy, or if it can only output so much at a time, or if it tries to make sense of it all and then has to compress it down into one output, one response. But I quickly realized that's not the right approach. We need to break this apart and have a series, a chain of prompts, to get everything right for the proposal. So, for example,
a goals prompt gets you the goals section, then the next one would be scope of work, then the next one would be pricing, or maybe a subset of that. Again, it helps if you have another proposal, because then it can train off that data, model off that data.
It does take a little bit longer to do it the, I guess, so-called right way, which is how I'd describe it to someone new to this: you have to do multiple prompts to get the right output. But once you've trained that little thread, that conversation you have going with ChatGPT on a specific thing, and you did the big prompt in the beginning and gave it all this information and context, I find it's faster to get things out of it, and you can refer back to it. For example, one thing I haven't done yet that I'm considering is having an ongoing thread for every client and just putting things in there: here's what happened this month, here's what happened this month. I have had issues from time to time with it kind of forgetting eventually. I've heard a lot of conversation about that on Twitter specifically,
where people will say that at some point, the AI within a thread in ChatGPT does tend to forget what your original prompt was, and it'll kind of start to go off track again.
And I used to try to fix it, and I learned a few days ago,
don't try to fix it, just try the new thread, basically.
You could drive yourself crazy trying to get the AI to work the way it should again, or get the output back the way you want it. So if you feel like it's not working or it's bad, just start a new thread. It's free, basically; if you're paying for a subscription, you should be fine in that respect. So I have done that, for sure. If it's not working, or at some point the progressive prompting isn't giving the right output, it's talking about something unrelated, or it's ignoring the training information I gave it, I do think from time to time you have to start over. Which, again, seems like one of those things that, six months from now, like you were saying earlier in the update episode, will be: oh no, you could have a thread going for a year and it's fine. But right now it feels like that's
kind of one of its little weak spots, just like the picture generation that we talked about at the top
of the call. So yeah, I'm a big fan of that. I've also kind of been doing that for a while and I've
seen a lot of success from it for sure. All right, well, number six in our outline here
was not skipping constraints or not skipping
the exact output of what you're looking for.
I think I'll tie in skipping constraints
with our first thing that we talked about,
which is defining in the output what the final state should
look like.
So for example, with the goal section of the proposal, I'll just keep going down that example for a moment. I'll say: write me three bullet points, or write me four bullet points. And so I'm giving it that context, that information about what I'm looking for.
If you just say, write me a goal section,
it may or may not give you what you're after.
Sometimes I find it goes too long, or it's too conversational, with too much information in there, too much content. And sometimes I find it cuts a corner: there were 10 things I wanted it to go through, and it only does three or four. So that can be a little frustrating.
So that's kind of been my experience there
on the constraint side: you can give it word counts, but the trouble is that word count doesn't match up with tokens, which is how ChatGPT outputs content. For example, I don't know if you've seen this: write me ad headlines, and they must be 15 characters each. And then it'll make a bunch that are like 20. I'll go back and ask, why isn't this 15 characters? And it basically can't; it doesn't understand characters, it understands tokens. I don't fully understand why that's the case, but I do know it's a problem on that side.
So what's been your experience here? Does AI listen to you on the Gemini side? Because it doesn't always listen to me on the ChatGPT side.
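One practical workaround for the character-versus-token mismatch described here: since the model reasons in tokens and can't reliably count characters, verify the constraint in code and send only the failures back for another pass. A small sketch (the draft headlines are invented for illustration):

```python
# Sketch: enforce a character budget yourself instead of trusting the model,
# which counts tokens rather than characters.

def over_limit(headlines: list[str], max_chars: int) -> list[str]:
    """Return the headlines that break the character budget, so you can
    ask the model to shorten just those."""
    return [h for h in headlines if len(h) > max_chars]

# Hypothetical model output checked against a 15-character ad limit:
drafts = [
    "Save Big Now",
    "Book Beach Bliss",
    "Your Black Friday Beach Escape Awaits",
]
too_long = over_limit(drafts, 15)
```

You would then follow up with a prompt like "these headlines are over 15 characters, shorten them," which tends to work better than asking the model to count in the first place.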
There are some hallucinations. Anytime it's not listening, I'd say it's hallucinating to a certain extent. When I'm giving it specific instructions and it's disregarding those, that's concerning. And that's not unique to ChatGPT by any means; it's universal across everything. When I see something that's clearly off or wrong, I'm like, oh, hallucination. Sometimes the prompt back is: okay, can we not hallucinate something in here? So I hope that gets better. I don't know if it's a listener being used in the AI or what it is. But I think there was an example from a quarterly earnings call for a pretty big company, right after Google had announced some of this AI stuff, about a dairy company, where there was a clear hallucination about a specific variety of cheese.
And if you're not catching that-
I dream of cheese.
I don't hallucinate about cheese, but-
I think that's a whole other thing. We have to have our own internal editors, to be able to say, guys, we didn't just copy, paste, and put it right through. But that's the type of stuff that's happening now. So I probably haven't said what I needed to say there, which is: ChatGPT and Gemini are first-draft tools. I know we've said that in previous episodes, right? These are first-draft tools, not final-draft tools. They are a junior employee, not a senior employee, despite how smart they are.
So yeah, I think that's the key: it should never be your final output. I remember initially, when you saw a whole bunch of Gemini-written or ChatGPT-written content, all of it had "regenerate this response" at the bottom. Oh, so bad. So painful. But I saw one on LinkedIn the other day where someone got "here's your revised post" back from the AI and pasted the whole thing in, that line included. It's like, dude, dude, dude.
Yeah. No, I know. I know we're short on time here.
So I have two parting thoughts here. My first is: ask the GPT to ask you questions until it's ready to go. I think that's super valuable. Tell it: if you don't have enough information, don't give me a response, keep asking questions; when you have enough information, then give me a response. I said that the other day and it asked me like nine more questions. I was like, oh, that was pretty impressive. I hadn't seen it do that before. And I think the final output was better because of it.
My second: you can ask the AI to give you a prompt. When you get a final output you're happy with, say: I want to recreate this in the future, give me a prompt I can use to recreate this ad, this email copy, whatever the case may be. It took me way too long to learn that one. Once I learned it, I was like, oh, I actually got what I wanted here. I don't have to recreate this castle, this building, from scratch. I can just ask and it'll do it for me. So don't be like me, don't make that mistake. When you finally get something you're happy with, and you're like, yes, this is what I'm after, and I'm going to need to do this daily, or once or twice a week, for the next five years of my job, say: this final output is excellent, give me a prompt that I can reuse to generate this again.
So those are my two little hacks at the end there that have saved me a lot of time and gotten me better results from the AI tools.
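Those two parting hacks can be captured as reusable prompt strings. The wording below is illustrative, the pattern is what matters:

```python
# Sketch: Conrad's two closing hacks as reusable prompt snippets.

# Hack 1: make the model interview you before it answers.
CLARIFY_FIRST = (
    "Before you respond, ask me questions one at a time until you have "
    "enough information to do this well. Only give the final answer once "
    "you have no more questions."
)

# Hack 2: once you're happy with an output, capture a prompt that can
# recreate it later.
SAVE_THE_RECIPE = (
    "I'm happy with this output. Write a single reusable prompt that would "
    "let me regenerate something like it from scratch in the future."
)

def with_clarification(task: str) -> str:
    """Prepend the interview instruction to any task."""
    return CLARIFY_FIRST + "\n\nTASK: " + task
```

Keeping these as saved snippets means the "castle" never has to be rebuilt from scratch: the first snippet improves the input, the second preserves the recipe for the output.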
So I'm with you, Paul.
Let's redo this down the road.
One thing the listener won't have to redo is leave us a review, because they've already left us a review. Haven't they?
No, the data would indicate they don't, because we know there are thousands of you who listen. Our data indicates that, and we don't have a lot of reviews. So if you made it all the way to the end, go to your podcast app of choice, excuse me, and leave us a review. Forget my stumble and leave us a review. We appreciate that. And we'll catch everyone on the next episode. Thanks so much.