
In this solo episode, Heather explores one of the biggest shifts happening in the AI space right now: the move away from relying on prompt libraries and toward building systems powered by context.
A couple of years ago, prompt engineering was everything. People were collecting prompt packs, saving templates, and searching for the "perfect prompt" that would magically produce the right result. And at the time that made sense. Early AI tools needed much more structure, narrower inputs, and a lot more hand-holding.
But things have changed.
Today's AI tools are better at understanding natural language, asking follow-up questions, inferring structure, and working inside persistent environments like projects, custom assistants, and knowledge bases. That means the real advantage no longer comes from having a huge library of prompts, it comes from giving AI the right context.
Heather unpacks why prompt libraries are becoming less central, what should replace them, and how short-term rental professionals can start building smarter AI systems that actually reflect their business, voice, standards, and guest experience.
If you've been wondering why AI sometimes still feels generic, even when you use "good prompts," this episode will help you see what's missing - and what to build next.
________________________________________________________________________________________________________
This episode is brought to you in part by the STRAIT Chat Community - an exclusive space for short-term rental pros who want deeper conversations, smarter systems, and more collaborative learning.
And if you're curious about using AI in your business - whether it's for guest communication, operations, or even marketing - check out our AI Ambassador Program. It's designed to help you learn, lead, and get hands-on with the tools shaping the future of hospitality.
🌐 Learn more and join us at VacationRentalFormula.com/AI
Are you listening to this podcast on the move? Get to the show notes here:
https://www.vacationrentalformula.com/podcast/VRS655
This is the Vacation Rental Success Podcast, keeping you up to date with news, views, information
and resources on this rapidly changing short-term rental business.
I'm your host Heather Bayer, and with 25 years of experience in this industry, I'm making
sure you know what's hot, what's not, what's new, and what will help make your business
a success.
Well, hello and welcome to another episode of the Vacation Rental Success Podcast.
This is your host Heather Bayer, and as ever, I'm super excited to be back with you once again.
So we've got 10 days to go before we head home, and quite honestly, I'm looking forward
to getting back.
These past few days in Gulf Shores have been pretty cold, so it's reminded me how much
I like the cooler days, and it's going to be nice to be back in my office once again.
I'm sure it'll be a little bit of a shock to get home and find there are still huge
mounds of snow in my driveway and in my garden, but before we know it, it will be done
and gone and out of the way, and spring will be here.
It's been very nice actually experiencing spring here in Gulf Shores, and all the new
leaves coming on the trees, because when I get home I'll have that twice, and spring has always
been my favourite season, so that's a huge bonus.
So today, I want to talk about something that I think is going to shift the way you think
about working with AI, not just in your short-term rental business, but in everything
you do.
So I'm going to ask you a question, cast your mind back a couple of years.
Do you remember when everybody was building prompt libraries?
People were saving hundreds of prompts, you couldn't go anywhere on Facebook without
somebody offering prompt packs.
People were sharing lists of the best ChatGPT prompts on social media.
They were talking about prompt engineering like it was going to be the career skill of
the future, and I know of people who signed up for courses, university courses in prompt
engineering, and they were two year courses, and that was only about 18 months ago.
So it's been interesting to talk to them now to see how they feel about the course they're
doing, and well, hopefully the syllabus changed along the way because everything has
changed along the way.
So I was paying really close attention to all of it, and you know, quite honestly, I nearly
signed up for some of those things, but let's move on because we have moved on.
Not because prompts don't matter, they absolutely do, but they matter a lot less than they used
to, and what matters a lot more is something that most people haven't even started building
yet.
So today, I want to walk you through what's changed, why it's changed, and what you should
be doing instead, because if you're still thinking about AI through the lens of what's
the right prompt, you're playing a game that the tools have already moved beyond.
So let me give you an example from what I've been doing the past few weeks.
I've just finished building a full web application using a tool called Lovable.
But I didn't sit here writing carefully engineered polished prompts, so what I did instead
was describe what I wanted in natural language.
It was pretty much a ramble.
It wasn't particularly articulate.
In fact, it was completely inarticulate as I was listening to myself speaking.
But just as an aside, I use a tool called WhisperFlow, which allows me to dictate to any platform.
If I'm writing an email or a Slack message, or I'm talking to Claude or ChatGPT, I just
put the cursor in the space I want it to be, press a button, and start talking.
Then WhisperFlow takes that ramble and turns it into something that is very much understandable.
So I'll put a link to WhisperFlow in the show notes and you can go and check it out.
But back to my Lovable app creation.
So I created this prompt and then Claude responded to me and I reacted to that response.
And we went back and forth as we refined things.
And I invited Claude to suggest things.
And it actually felt more like collaborating with a really capable colleague than what
I was actually doing, which was programming a machine.
And the more I do this, the more I realize how much the tools have changed.
It means the way we work with them has to change too.
So today we're going to talk about prompt libraries where they came from, why they made
sense in the first place, but why they're no longer the most valuable thing you can build.
And then I'm going to tell you exactly what to build instead.
So to understand where we are now, we need to go back to where we started in the early
days of mainstream AI.
And I'm talking about the period when ChatGPT first exploded, and it was, what, very early
2023, and everybody suddenly had access to these amazing tools and prompting was everything.
But the models then were much less capable.
They had smaller context windows and they needed a lot more hand holding to do any useful
work.
So a lot of clever people figured out frameworks and you probably heard some of them, things
like give the AI a role, give it a task, tell it the format you want, set some constraints,
and ideally include an example or two of what you're looking for.
Role, task, format, constraints, examples, that was the gold standard.
There were a bunch of acronyms that outlined this process, some of which are still relevant
and I'm not discounting them all together.
And prompt libraries made complete sense in that world.
You found something that worked, you saved it, and then you reused it.
It reduced what I call blank page syndrome, the one that in the olden days we used to call
writer's block.
That moment when you open a tool and you've got no idea what to type.
And a library of prompts gave you this running start.
It helped people learn structure and it gave people confidence, and a lot of people, and
I include myself in this, treated prompts almost like magic spells.
Like if I can just find the perfect prompt, I'll get the perfect output and we talked
a lot about garbage in garbage out.
And there was this chase for the ideal wording, people sharing things like the best prompt
for writing emails or the ultimate ChatGPT prompt for social media.
That approach had a hidden problem: a lot of people collected prompts they didn't
fully understand.
So when the context changed, when the job was slightly different or the audience was different
or the platform they were using was different, the prompts stopped working effectively and
they didn't know why because they never really understood what made it work in the first
place.
So the prompt library created dependency, it was useful but it really did have a ceiling.
But for where the tools were at the time, prompt engineering was genuinely valuable and
learning to communicate clearly with AI, learning to structure your requests and learning
that specificity matters, all of that was important and it still is.
We're not throwing any of that away today but the tools have moved on.
So let us talk about what's actually changed and why prompt libraries are no longer the
most valuable thing you can invest time in building.
Modern AI tools can do several things that early models couldn't.
They can interpret natural language much more accurately, which means you don't need
to be as precise with your wording to get a good result.
They ask follow-up questions, and I'm sure you've got frustrated with ChatGPT when it
always asks follow-up questions at the end of an output, and they're usually really good
questions, so you end up with this mega response because you've gone off on some tangent
that ChatGPT has taken you on.
They can even help you write better prompts for yourself.
And I often do this.
If I'm writing instructions for a GPT or for a skill in Claude, then I ask the platform
to write the instructions for me.
But perhaps most importantly, they can now operate inside what we call persistent environments,
things like projects and custom assistants and knowledge bases and structured workspaces,
where a lot of context is already loaded and available before you even type a single
word.
So here's the line I'm going to keep coming back to.
The prompt is no longer the product.
The system around the prompt is now the product.
So think about what that means.
A one off prompt, even a really good one, it usually lacks business context.
It doesn't know who you are.
It doesn't know who your guests are.
It doesn't know your voice, your policies, your standards, what you're selling to them.
Every time you use it, you're starting from zero.
And a saved prompt can also become brittle.
The same prompt behaves differently depending on which model you're using, which tool you're
using, and what project or knowledge base it has access to.
And even what time of year it is, if your content is seasonal.
And as for teams, for anyone who has staff or who wants to delegate AI tasks, a folder
of prompts doesn't actually create repeatable outcomes.
What it does is create inconsistency, because every person uses the prompt slightly differently
or picks a different one or skips them entirely.
So let me give you a really concrete contrast that I think makes this click.
The old way: here's my email writing prompt.
I paste it in, I fill in the blanks, I get an email.
So the new way is: here is my assistant.
It knows my voice, it knows my audience, it knows my offer, it knows the kind of email
I usually write, what to avoid and what tone works for my guests.
I tell it what I need and it produces something that actually sounds like me.
One of those is a tool, the other is a system, and the system wins every single time.
So if we're demoting the prompt library, what are we building instead? I want to walk you
through this as a progression, because it's not one thing, it's a layered system, and
each layer makes the next one more powerful.
So the first shift is moving from one-off conversations to projects.
Most of the major AI tools now support some version of this and a project is essentially
a container for a specific type of work.
Instead of starting a new chat every time, you work inside an environment that remembers
what you've told it and keeps your instructions active.
Now, when I'm talking here about projects, this is the Claude environment.
You may already have made GPTs within ChatGPT, and that's a custom GPT.
But what does that mean in practice?
It means less re-explaining and it means continuity.
It means you don't have to paste in a wall of context at the start of every conversation.
The project holds that for you.
For a property manager, a project might be guest communications.
You set it up once with your policies, your tone, your common scenarios and then every
time you need to write something for a guest, you're working inside that context already.
So layer two, custom assistants.
You might know these as custom GPTs or gems in Google's ecosystem or similar features
depending on which tools you use.
These let you define in advance exactly what this AI is for.
You tell it, here's your purpose, here's a role you're playing, here's the tone I want,
here's what good looks like.
That one's so important.
Here's what to avoid and you can attach knowledge, documents, examples, reference materials
so the assistant can draw on them.
This is where things start to get genuinely powerful because you're not customising a prompt,
you're building an AI that understands its job.
So now we get to what I think is the real shift, and that's knowledge bases.
Without your specific knowledge, AI is generic, it can produce competent, very professional
output but it won't sound like you, it won't reflect your standards, it won't understand
the nuances of your properties or your guests or your market or your brand.
When you attach your own documents like your SOPs or your guest communication templates,
your brand voice guide, your own onboarding materials, your property descriptions, your
FAQs, the AI stops being a generic writing assistant and starts becoming something that
actually knows your business.
Think about what you could include, you could include your welcome guide, your house rules,
seasonal promotions, common guest questions and how you answer them.
Your own communication style, your review response strategy, all of that becomes available
to the AI as context.
That is a completely different tool from a prompt library. A prompt library is passive:
you look things up and you copy and paste. A knowledge base is active: the AI draws
on it without you having to think about it.
Now, here's a phrase I want you to hold on to because I think it's the most useful framing
of everything I'm talking about today and it's a context library.
A context library is different from a prompt library.
Where a prompt library stores the questions you ask, a context library stores everything
the AI needs to know to answer well: the background, the business intelligence
and the reference material.
So a solid context library for a short term rental professional might include your business
overview, what you do, who you serve, where you operate, your values, your mission statement,
etc.
You'd put in an ideal guest profile, who your best guests are, what they're looking
for, how they communicate.
You'd put in some examples of your listings, what each property provides, what makes your
business unique, your brand voice, how you sound, what words you use, what tone you
want to hit, the words you don't want to use and then you'd put in your policies, check
in times, pet policies, cancellation terms, past examples of great guest messages, great
owner reports, great listing copy, your own frameworks for how you make decisions.
This is not necessarily one huge document, you make smaller documents and in a later
episode I'm going to talk about how you create these documents because you don't need
all of this on day one, but over time building this library is one of the highest value things
you can do for your AI assisted work because the richer your context, the better every
output becomes.
So the final layer is skills and repeatable workflows.
These are structured step-by-step ways of getting specific work done consistently.
Instead of asking the AI to improvise every time, you give it a process to follow, and skills
are a great part of the Claude ecosystem, and a lot of what I'm doing now is in Claude
and not in ChatGPT because I feel it is just a much better platform.
I think I probably spend 10% of my time now in ChatGPT, if that, because I've just about
moved everything across now.
Coming back to skills, think of a skill as a recipe for a task.
So it's a little bit like an SOP: write a guest welcome message, here are the steps, here's
the tone, here's what to include, here's the format, here's what to avoid.
The AI follows a recipe and you get consistent results every time, regardless of who's running
the task.
For teams this is transformational. It means you can delegate AI-assisted tasks with confidence,
because the system does the heavy lifting, not individual judgment, and it's consistent.
So all of what I've just described, projects, custom assistants, knowledge bases, context
libraries, skills, it all has a name, a wrapper if you like: it's called context engineering.
And I think it's the most important shift in how we work with AI that most people haven't
fully understood yet.
So the simple definition is that context engineering is the practice of giving the AI the right
grounding, structure, references, examples and operating rules so it can produce consistently
useful work, not just once but reliably and repeatedly across your entire operation.
So I find it helpful to think about context in five parts.
So number one is task context: what are we actually trying to do right now?
Secondly, business context: what does this business do, who does it serve and how does
it operate? Thirdly, audience context: who is this output for and what do they need
to feel or understand? Fourth, quality context.
What does a great output look like and what does a poor one look like?
And fifth, resource context, what source material, documents or examples should the AI be
drawing on.
When you think about context in those five layers you start to see what's actually missing
from most people's AI work, they're giving the AI task context such as write me a guest
message but nothing else and then they're disappointed when the output is generic.
So here's a contrast I want you to keep.
Prompt engineering asks what words should I type?
Context engineering asks, what does the AI need to know to do this well repeatedly in
the real world?
That is a fundamentally different question and it really does lead to fundamentally better
results.
So let me make this specific for your world because I think that's where it really lands.
If you want an AI assistant that helps with guest messaging, that assistant needs to
know your policies and your property details, your tone, your most common situations and
how you handle the tricky ones.
Without that context every message is going to feel a bit off, a bit too formal, generic,
missing something that would have been obvious if the AI had known your business.
But if you want an AI assistant that helps you communicate with owners, say, it needs
to know how you present data, how you frame difficult conversations, what your owner relationships
typically look like and what success looks like in the way that you do your reporting.
And if you want an AI assistant that helps you write marketing content, it needs to know
your brand voice, your properties, your target guests and the platform that you're writing
for and of course with some examples of your best past content.
In every case, the intelligence isn't in the prompt, the intelligence is in the context.
And here's the thing that I think is the best of all this, once you've built strong context,
the prompting gets easier, not harder, because you're not having to explain everything
from scratch every time.
The system already knows it, you just have to tell it what to do next.
So I wanted to take a quick pause here and make something clear.
I am not saying prompts are dead.
I'm not saying you should throw out everything you know about communicating with AI.
Prompts still matter.
I still use them every single day, but their role has changed.
They've moved from being the centerpiece of your AI work to being just one element in a larger system.
They still matter for starting a conversation and pointing the AI in the right direction.
They matter for directing focus, telling the AI what to do with all the context it has.
And prompts matter for refining output and for asking follow-up questions,
or for troubleshooting when something isn't quite right.
But the prompts themselves have changed in character.
They're lighter now.
They're more conversational.
So many people are now using voice instead of typing, which makes it even more of a conversation.
They're more iterative.
They're more like the directions you give a colleague who already understands your business,
than like instructions you'd type into a machine.
And there's another shift that I think is worth mentioning too.
AI can now help you prompt better.
When I'm working on something and I'm not quite getting what I need,
I just ask the AI to help me refine my request.
As I said earlier, I just do a ramble,
describe what I'm going for and ask it to suggest a better way to frame it.
I can share a draft I've done and ask what context I might be missing.
And that's the thing about context engineering.
It's not just something you set up once and forget.
It's an ongoing conversation.
You're always learning what context is most valuable,
what the AI needs more of and where the gaps are in your system.
And the more you do that, the better your system becomes.
It compounds over time.
So I want to walk you through a real demonstration now,
because I know for some people all of this is starting to feel a bit abstract.
And I want to make it completely concrete.
I'm going to use an example that I know every single one of you can relate to,
and we've referred to it before in this episode.
That's writing a guest welcome message.
But I'm going to give this same task with three different approaches.
And I want you to listen to the difference.
So the generic standalone prompt.
So you open up your AI tool and you type,
write me a guest welcome message for my vacation rental.
And you get something that's competent, it's polite.
It probably says something like welcome to our beautiful property.
We're so excited to have you as our guest.
It mentions to reach out if you need anything.
And it wishes them a wonderful stay.
It's fine.
It could belong to literally anyone.
It has no personality.
It doesn't know your property.
It doesn't sound like you.
And honestly, your guests can tell.
It's usable.
But now imagine you've learned a bit more about prompting.
You type something like write a warm, friendly guest welcome message
for a four bedroom beachfront property in your location.
Include check-in instructions, Wi-Fi details,
and a reminder about house rules.
Keep it under 250 words.
That's better, noticeably better,
because you've given it some structure, some format guidance.
You've given it some specifics.
But it still doesn't know your property's actual name
or the vibe you're going for.
Or that your guests tend to be families celebrating milestones.
Or that you always like to sign off personally
or that you have a specific joke you make about the coffee machine.
It's still generating something from scratch
based on your prompt alone.
Now imagine you're working inside a custom assistant
you've built for guest communications.
You've loaded it with your property details, your brand voice,
your welcome message template, examples
of your best past messages, and notes about your typical guests.
You type write a welcome message for the Smith family
arriving Friday for a five night anniversary stay.
That's it.
That's the whole prompt.
But what comes back is a message that sounds exactly like you.
It references the property correctly.
It matches your tone perfectly.
It picks up on the anniversary and acknowledges it
in the way that you typically would.
It hits all the practical details
without you having to remind it.
It's ready to send.
The difference wasn't that I found a magical prompt.
The difference was that I gave the AI enough context
to think usefully.
One task, three totally different outcomes,
because the environment around the prompt
is doing all the heavy lifting in that third version.
That is context engineering in action.
And once you experience it working properly,
you won't want to go back.
So let's talk about what this actually means for you
and what you should do differently going forward.
And you don't need to throw out your prompt library
if you have one.
A small collection of trusted starter prompts
for your most common tasks is still useful.
You can still keep it, you can still use it.
But I want you to start shifting your investment.
Spend less time collecting prompts
and more time building the context
those prompts can draw from.
Here are five assets I'd encourage you to start building,
or refining if you've already started.
So your first asset is a business context document.
It's your AI's briefing pack.
Who you are, what you do, where you operate,
how many properties, what makes you different,
what your values are.
What success looks like in your business.
Write this once, keep it updated
and load it into every AI assistant you create.
Asset 2 is a brand voice document.
How do you sound?
What words do you use?
What tone do you want to hit with guests?
And is it different with owners?
Do you have any phrases you love
or things you'd never say?
Include examples of a few real messages
or pieces of content that you're proud of.
And this is gold for any AI assistant
that writes on your behalf.
Your third asset is your guest or audience profile.
Who are your typical guests?
What are they celebrating,
searching for, worried about,
what questions do they always ask?
This is your typical guest avatar.
What do they love about your property?
What do they sometimes get confused about?
The more your AI understands your guest,
the better every guest-facing output becomes.
Then Asset 4, that's a knowledge base
of your best materials, your welcome guides, your house rules,
check in instructions, your FAQ document,
your owner communication templates,
your review response examples, your seasonal promotions,
all of these become reusable context
that your AI can draw on without you having
to paste them in every time.
Asset 5 is a small set of reusable starter prompts
for your most common tasks,
not a library of hundreds,
but a curated set of maybe five to ten
that you genuinely use that you've refined over time
and that you trust to get you most of the way there.
So here's how I think about the relationship
between all of these things.
Your prompts are the steering wheel,
your context is the engine.
You need both, but one of them does a lot more work
than the other.
Build the engine and the steering gets easier from there.
So the question I started with today was,
do we still need prompt libraries?
And my honest answer is yes,
but just not in the way we used to.
Prompts do still matter.
A handful of great starter prompts
for your most common tasks can absolutely save you time.
But the real competitive advantage,
the thing that actually makes AI transformational
in your business is the context
that lies underneath those prompts.
Prompt engineering taught us how to ask;
context engineering teaches us how to prepare.
And preparation, as we know,
is what makes performance consistent.
And the better these AI models get
and they are getting better by the day,
the less you need perfect wording
and the more you need a well-designed environment.
Natural language plus strong context
plus iteration beats a static prompt library
in almost every real-world situation.
So here's what I want you to take away from today.
Build context, not just prompts;
create reusable environments,
not just one-off queries.
Teach AI your business, your voice, your standards
and who your guests and owners are
because once it understands all of that,
it becomes a genuinely powerful extension of your operation,
not just a faster way to produce generic content.
Let AI help you refine the prompting as you go.
You don't have to figure it all out upfront,
just build iteratively and learn what works.
A prompt library is a toolbox,
a context library is an operating system.
And in business, we don't compete on toolboxes,
we compete on systems.
So I know I've gone over this multiple times in this episode,
but I really wanted to bring this message home
that if you get your context right,
if you build the foundation,
then the rest will follow and you will have
so much better results than you would ever have
if you were just simply using prompts as you have in the past.
So thank you so much for spending this time with me today.
If this episode got you thinking,
I would genuinely love to hear from you.
Maybe you could share it with someone in your network
who's been trying to figure out AI.
Just drop me a message or leave a review
if this show has been useful to you.
And if you want to go deeper
on any of the topics we covered today,
building context libraries, creating custom assistants,
setting up AI workflows for your short term rental business,
that's exactly what we do inside the AI ambassador program.
We'll be starting a new cohort in April.
If you're interested in finding out more,
then go to the link in the show notes
and send me an email or set up a time
to give me a call and we'll have a chat about it.
I'll see you next week, take care.
It's been a pleasure as ever being with you.
If there's anything you'd like to comment on,
then join the conversation on the show notes
for the episode at vacationrentalformula.com.
We'd love to hear from you and I look forward
to being with you again next week.

