
These things are not reliable and in many cases they are not safe and people are definitely
racing ahead in using them regardless and we all could be collateral damage in that
grand experiment.
Welcome to AI Answers, a special Q&A series from the Artificial Intelligence Show.
I'm Paul Roetzer, founder and CEO of SmarterX and Marketing AI Institute.
Every time we host our live virtual events and online classes, we get dozens of great
questions from business leaders and practitioners who are navigating this fast moving world
of AI, but we never have enough time to get to all of them.
So we created the AI Answers series to address more of these questions and share real-time
insights into the topics and challenges professionals like you are facing.
Whether you're just starting your AI journey or already putting it to work in your organization,
these are the practical insights, use cases and strategies you need to grow smarter.
Let's explore AI together.
Welcome to episode 202 of the Artificial Intelligence Show.
I'm your host, Paul Roetzer, along with my co-host, Mike Kaput.
We're recording Wednesday, March 11th, 9am, Eastern Time in the middle of a thunderstorm
in Cleveland, so hopefully we can do this straight through without any power loss.
This is a special edition, so this is not our regular weekly, so don't get confused if
you are a regular weekly recap listener on Tuesdays that we drop.
This is a special AI Answers edition.
So this is the 15th episode of the AI Answers series.
AI Answers is presented in partnership with Google Cloud.
This is a series that we do based on questions we get from our monthly intro to AI and scaling
AI classes.
We host those every month.
I teach a free intro class and a free scaling AI class, and we usually get dozens of questions
during those live sessions, and so we use these AI Answers episodes to answer the ones
we couldn't get to.
But this is an even more special version of our special AI Answers series.
This one is actually based on questions we got during our AI for Departments Week,
which was also presented in partnership with Google Cloud.
So during AI for Departments Week, February 24th to the 26th this year, we released three blueprints:
AI for marketing, AI for sales, and AI for customer success, and we did that with
a webinar each day.
Those webinars had thousands of people registered for them, and so the questions we got were amazing,
and we could not get to all of them during the live sessions.
So we decided we'll do a special AI Answers edition that answers questions that came from
our audience during those three webinars.
So Mike has curated this.
He's going to go through and pick some questions on marketing, sales, and customer success.
I have not looked at them.
I prefer to do these the same way I do it in the live environment, where I don't see the
questions until Mike asks them.
So you can learn more about both of these, the webinars and the blueprints.
You can go to smarterx.ai/webinars.
All three of those webinars are available on demand now, and then you can go to smarterx.ai/blueprints,
and you can download, ungated, each of those blueprints, or whichever one is most relevant
to your work.
So again, thanks to Google Cloud for their partnership to bring these webinars and blueprints to life.
You can learn more about Google Cloud at cloud.google.com.
And I think, Mike, that covers everything. Am I missing anything here?
No, I'd say that's some good context into the blueprint webinars, and like you said,
we had such a huge audience for those, they were super popular.
All right, and then this episode is also brought to us by AI Academy by SmarterX. AI Academy
helps thousands of individuals and businesses accelerate their AI literacy and transformation
through personalized learning journeys and an AI-powered learning platform.
There are currently 13 professional certificate course series available on demand for master
members and individual purchase.
With more being added each month. We just released our newest course series, AI for Financial
Services and AI for Finance, covering real-world applications across banking, insurance,
wealth management, and more.
That's the financial services one.
So you can start applying AI strategically in your organization today.
So the AI for financial services is part of our industry series, and AI for finance
is part of our department series.
So we have, Mike, let me see if I get this right: we have marketing, sales, customer success,
HR, and finance at the moment, and we are about to do operations, IT, and I believe legal,
to round out kind of the initial functions.
Yeah, so in the next couple of months, we will have the vast majority of departments within
an organization covered, and so they're a great starting point for people in your organization
who maybe haven't figured all this out.
It's a great kind of 101 to 201 level.
It really gives practical knowledge about it, and that's how all of the certificate series
are structured across departments and industries, and then our Foundations collection, which
has Fundamentals, Piloting, and Scaling in it.
So great stuff there.
Go check it all out at academy.smarterx.ai.
All right, Mike, I think we've got 15 questions it looks like.
Yep.
We'll see if we can get through all these in about an hour.
All right, Paul.
So first, someone asked: what is the best way to get started with learning all we need
to know and implement about AI? Specifically, I believe this person is asking as a CMO.
So, leaders, how do I get started wrapping my head around all this?
All right, so this was not a planned plug, but do we not have an AI for CMOs webinar coming
up?
We do.
Yes.
Does that have an announcement or am I announcing that?
No, that has been announced.
Yes.
Don't worry.
That's no secret.
Yes.
I would say, hold tight.
Do you remember when that's coming up?
Yeah, we've got it.
It is March 26th.
That's 12 p.m.
Eastern.
It will also be made available on demand.
So it's, again, at smarterx.ai/webinars, you'll see it right there.
And that's going to come with a blueprint as well, right?
Yes.
There we go.
All right.
So the answer to your question is joining us March 26th for a webinar that actually explains
all of this, as well as a downloadable blueprint.
Now, that being said, the way I've been thinking a lot about AI adoption lately,
whether it's at a team, department, or organizational level, is the need for leaders to have
a higher degree of AI literacy and competency.
What I mean by that is the CMO is going to be the one that's going to have to push the
team to figure out how to apply AI for efficiency, productivity, creativity, and innovation.
They're going to be the person who's going to have to deal with the employees who don't
want to learn AI.
Don't want to do it.
I mean, the CMO is often overseeing the creative within an organization.
There are a lot of creatives, whether it's writers, designers, video producers, who see
AI as a threat to what they do.
So for the CMO, I think it starts with a deep understanding of what AI is currently
capable of doing, what these different models are capable of doing across all modalities,
again, not just text in, text out, but audio, video, code, and design, in terms
of image production and video production.
So you have to have a deep understanding. Reasoning would be a really important one when
it comes to strategy.
You have to understand all that.
And then you have to be modeling use of these tools for your employees.
Now, you don't have to be playing around with all the image generation and video generation
tools and things like that.
But you need to be using it every day.
So I guess for a CMO, the starting point is understand it deeply.
And our Foundations collection on AI Academy is exactly that. Like, you could come to our
free Intro to AI class if this is all new. As a CMO, or if you have a friend who's a CMO and it's
all new, go to the free Intro to AI class.
And if you're part of our AI Academy, take the Foundations collection.
And I mean, literally 95% of what you need to know at a decent confidence level, you
will get from those three course series alone within the foundation's collection.
So, or just come to the free March 26th event, do the AI for CMOs webinar, and download
the blueprint.
So that would be mine, my kind of quick advice, anything to add there, Mike, anything
you're thinking of?
That's a really good reinforcement of something we mentioned on the actual
webinar. One of the experts we interviewed at Google, Emma Delrose, is a manager of
AI transformation there.
She mentioned basically the best organizations doing this that she sees have leaders that
model this stuff, that talk about this stuff.
And that's one of many things I very much appreciate about your approach:
you're always telling us all sorts of stuff about how you're using AI.
And it's a massive inspiration and motivation for the rest of us.
Well, I think, you know, not just me, I mean, you do the same thing with the podcast for
sure.
And even internally, I think there's things you're doing all the time to share with the
team.
But that is the key.
And I think a lot of times, even with the podcast, there's things I'm doing where I'm
like, I don't know, this is pretty basic, like I don't know that I should share this.
And then I'll share something that to me is just sort of second nature at this point.
Yeah.
And then I'll go do a speaking gig, and some of the quotes I'll get are from the podcast:
that example you gave was amazing, I actually went and built something, I went and used that,
and I taught somebody on my team something.
And so I think that there's a lesson in there for all of us: don't take for granted
the knowledge you have and the capabilities you have and the things you're doing with
AI that you might think are basic. To a lot of people, they might change the way they think
about AI.
So, you know, definitely think about your own use of it and then think about sharing
and modeling for others, how you're doing that, whether it's through LinkedIn posts,
internal Slack messages, you know, running lunch and learns whatever it is.
And I think the CMO being a leader and modeling that is a great way to think about it.
And then if you transfer that to other departments, obviously we're talking specifically about
CMOs because that's the question, but this goes for the leaders of any department.
All right.
Question number two, when we're using this term agent, you know, referring to AI agents,
what exactly is that?
Can you help explain the difference between an agent and just a regular, repeated
task or prompt that you're using with AI?
So phase one of generative AI was text in, text out.
So that was, you know, ChatGPT when we first started using it in 2022.
You would put a text prompt in and you would get some sort of text output.
Same would then apply to images and video.
You were just putting text in and then you would get an output from the machine.
Agents are AI systems that can take actions to achieve a goal.
So they can, in some cases, develop a plan of what they're going to do.
They are sometimes given access to different tools like the Internet to conduct searches.
And so they can, you can say, OK, I want to produce a research report.
And then rather than just producing a report from its knowledge base, it goes and searches
the web and then it curates information on the web and then it synthesizes that and
then it writes an output that it verifies its sources.
So it's actually going through and doing a sequence of actions.
And so you can imagine, let's say, you're going to run a marketing campaign;
it looks like this question came from the marketing one.
You know, if you say, you go into Claude and say, OK, help me write a landing page and
it writes the landing page.
That's just a simple chatbot, like just creating an output.
Then you say, OK, now let's go through and let's build an entire marketing campaign around
this.
It's going to go and build a plan of how it's going to do that.
It might need to call on, you know, different knowledge bases you give it access to.
So it's going to go and start taking actions, and that might be 10, 15, 20, 50 things.
And then it'll eventually create all the final outputs for you.
But it's actually going and doing a bunch of things and taking these actions.
And then you may actually have it set up where it's like, go ahead and do the thing:
send the emails, do the ad buys.
That's agents.
Now the confusion comes in with how autonomous these agents are.
Many of the agents you would be using in your regular workflows at an enterprise are going
to be pretty basic in terms of their autonomy.
There are still maybe some rules built in there, and the human is really heavily in the loop.
The stuff you hear about, like OpenClaw, which we've been talking about, that's much
more autonomous where people are trusting these things with access to a bunch of information
to just do stuff.
In some cases, leaving them running overnight. We'll talk about Andrej Karpathy; he had
a tweet just this week that we'll talk about on episode 203 next week, where people on the
frontiers, on the edges, are really starting to push the autonomous part of the agentic side.
And that's going to create some pretty interesting environments.
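To make the chatbot-versus-agent distinction from this answer concrete for readers, here is a minimal sketch of an agent loop: plan, act with tools, then synthesize, rather than returning a single completion. The tool name and the hard-coded plan are illustrative stand-ins, not any vendor's real API.

```python
# Minimal sketch of an agent loop vs. a one-shot prompt.
# The tool and the fixed plan are illustrative; a real agent asks a
# model to plan and decides tool calls dynamically.

def web_search(query):
    # Stand-in for a real search tool the agent is given access to.
    return f"results for: {query}"

def run_agent(goal):
    # 1. Plan: break the goal into steps.
    plan = ["search for background", "curate sources", "write report"]
    notes = []
    # 2. Act: execute each step, calling tools as needed.
    for step in plan:
        if "search" in step:
            notes.append(web_search(goal))
        else:
            notes.append(f"did: {step}")
    # 3. Synthesize: combine everything into a final output.
    return f"Report on '{goal}' based on {len(notes)} actions"

print(run_agent("Q2 marketing campaign"))
```

A plain chatbot call would be only step 3 with no plan and no tool access; the loop is what makes it an agent.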
So question three is something we hear a lot of variations on, not just in marketing.
That's where this one came from.
Do you think AI labs will actually fix the negative impact their technology is having
on the environment?
I think they think they will.
I don't know that you and I have any specific inside information about how exactly
they'll execute this.
But in essence, just for reference, if people aren't familiar with this topic and context,
a few years back, a lot of the AI companies set out to be carbon neutral, like they wanted
to keep their impact on the environment neutral or actually a positive impact, actually give
back energy.
When AI exploded in 2022, that just got thrown out the window.
It became, build data centers, consume energy, build as much intelligence as quickly as
we can build.
They delayed this idea of a neutral impact on the environment.
Now many of them think that if they build more intelligent systems, these super intelligent
systems, those intelligent systems will solve for this.
Right now, the way they're solving for it is efficiency: every year or so, the cost
of compute drops like 10X. They're making more efficient algorithms that use less computing
power per token, I guess, is the way to think about this.
The energy it took to write, let's say, a 10-page research paper or a 10-second
video from Sora a year ago, the amount of energy it would take to do either of those things
today has probably dropped 10X in the last 12 months.
But the demand for those outputs is on an exponential.
So we are requiring way more energy, and having a way greater impact on the environment,
because demand is rising, even though they're satisfying that demand more efficiently.
So that is their current path to do it, but they are all looking at solutions in terms
of different energy sources and how to get it more efficiently, including off-Earth stuff,
where the data centers live in satellites, through like xAI and stuff.
So I am concerned about the environment, like many people.
I don't know that there's too much we can all do about it at this point.
The couple of things I've talked about is use the more efficient model and get really
good at prompting.
Like those are two actual things we can all do.
The better you are prompting, the fewer tokens you're going to use to get the output
you're looking for.
That's probably the most, honestly, like the most immediate action most people can take.
Other than that.
I don't know.
I believe in Demis Hassabis and Google DeepMind in particular, and I know that they are focused
on energy.
It's one of the key things they're thinking about, and I think if anyone's going to solve
it, I think Demis has a decent chance to do that in the next five to ten years.
So I don't know.
I choose to be optimistic.
I don't know exactly how it happens.
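The efficiency-versus-demand argument above is easy to check with back-of-the-envelope numbers. These figures are made up purely to illustrate the shape of the problem: per-output energy falls 10X, but if demand grows faster than 10X, total energy still rises.

```python
# Illustrative numbers only: per-output energy drops 10X year over year,
# but demand for outputs grows faster, so total energy consumed still rises.
energy_per_output_last_year = 100.0  # arbitrary units
energy_per_output_now = energy_per_output_last_year / 10  # 10X efficiency gain

outputs_last_year = 1_000
outputs_now = 50_000  # demand on an exponential (50X growth, hypothetical)

total_last_year = energy_per_output_last_year * outputs_last_year  # 100,000
total_now = energy_per_output_now * outputs_now                    # 500,000

print(total_now / total_last_year)  # 5.0 — five times the total energy
```

So each output got 10X cheaper, yet total consumption still grew 5X, which is the dynamic described in the answer.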
Yeah.
I would just add there, too, related to the previous question: my gosh, you will start
to see how many tokens agents use the moment you start running these in
Claude Code or something.
I'm like, oh my gosh, this is highly compute intensive.
Yeah.
We talked about, I think it was the Jensen Huang quote on Tuesday's episode, about
OpenClaw being like the most important software.
What was it?
It was like the most important software that had ever been created.
And then they open sourced a version of it or something like that yesterday.
I think we'll talk about it on the show next week.
Yeah, that's the idea with agents.
Something that's taking all these actions, doing all these things, requires way more computing
power at inference, which is when you and I use it, than like a standard text chat.
Just like video requires more tokens than, you know, text chat, images, things like that.
Agents are going to be a massive drain.
It's why they're building all these data centers and getting prepared for a world where
agents are in everything in a sense.
Question four, it seems like there's an assumption that AI will enable us to produce better
work.
But a lot of us feel like we're expected to take that on faith.
What do you say to people who aren't convinced that AI will lead to better performance or
will make their work more meaningful or more valuable or have a bigger impact, basically?
I think about this, we talk a lot about personalized use cases.
Yeah, I would just say, okay, like if there's something where you're feeling that way, what
is the use case exactly that you're not sure it's on par with like a really qualified
human to do that job?
And I would develop some evals, as the industry calls them, which just means evaluations or benchmarks.
That's like, okay, I write this research report every month, or I do this performance report
every Sunday night, or I'm in charge of these blog posts or these emails or this proposal
or this talent review or this meeting summary, whatever it is. Just take
a basic use case.
I would make sure you're using the best model available, oftentimes people who have this
concern are using the free versions of these tools and they haven't used the advanced
reasoning models, like a 5.4 Thinking from ChatGPT, versus whatever the baseline model
is today.
So I would say take personalized use cases and then solve for it.
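A personal eval like the one described can be as simple as a blind side-by-side tally. Here is a sketch with placeholder votes; in practice you would paste in real human-written and AI-written versions of your use case and have reviewers pick blind.

```python
# Sketch of a personal eval: tally blind side-by-side preferences for
# one use case. The votes below are hypothetical placeholders.
def tally(preferences):
    # preferences is a list of "A" or "B" votes from blind reviewers.
    wins = {"A": 0, "B": 0}
    for vote in preferences:
        wins[vote] += 1
    total = len(preferences)
    return {k: v / total for k, v in wins.items()}

votes = ["A", "B", "A", "A", "B", "A"]  # hypothetical blind votes
print(tally(votes))  # A wins two thirds of the votes
```

Run the same tally every month as models improve and you have your own benchmark instead of taking anything on faith.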
Now, I think more and more, we talk about this move 37 moment.
We touched on it again on episode 201.
I think more and more it's becoming very hard to say that a human is just way better at
a knowledge task than a machine.
We'll talk about on episode 203: the New York Times just ran an experiment with AI writing,
and it was a blind taste test in essence. Like, here's a paragraph from an AI, here's
a paragraph from a human writer, which do you prefer? And the AI wins.
And it was across like 86,000 votes, I think. It was not a small sample size.
And that's, you know, writing is just one use case, but I think more and more, over the
next one to two years, there are just going to be very few tasks left where, if you did
a blind taste test, the AI isn't going to be at least on par with a highly qualified
expert in that field. It's a very hard reality for all of us to come to, but I feel
pretty deep conviction about that one.
And you know, I really appreciate this question, so I'm not knocking it at all, but when
you say a lot of us feel like we're expected to take that on faith, the
solution to that is just go use the tools and kick the tires, right?
You don't have to take anything on faith, you can go try it for yourself.
Yep.
All right.
Question five, when using AI as a thought partner, is ChatGPT's, or another
model's, tendency to be agreeable an issue here? Like, how can it give you valuable feedback
if it's configured to essentially agree with you by default?
So the term here, which we've touched on on the show, is sycophancy, where it's just like, hey,
that's a great question, or that's a great insight, let's build on that.
And it never tells you you're an idiot, or that's actually a terrible idea.
I would say the simplest thing is, one, they're aware of that issue and
they have adjusted the system prompts behind the models so that they supposedly aren't
that way.
The other is just say it in your prompt, like I would like you to challenge my ideas.
I want you to function as a critic.
I want you to challenge this as though you're this person, like just tell it that.
So even if its system prompt enables it, or its default is to tell you you're brilliant
and that every idea you have is great, just tell it to take the opposite point, steelman
this position on this idea, or assess this writing as though you're an editor at
the New York Times. Just tell it to be critical, and it will function in that way.
So that is the fastest solution.
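One way readers could apply this is a reusable "critic" preamble prepended to any prompt. The persona and wording below are just one illustration of the instruction described, not a prescribed formula.

```python
# One way to bake the "be my critic" instruction into every prompt.
# The default persona and wording are illustrative; adjust to taste.
def critic_prompt(task, persona="an editor at the New York Times"):
    preamble = (
        f"Act as {persona}. Challenge my ideas, point out weaknesses, "
        "and do not compliment me or agree by default. Steelman the "
        "opposing position before giving your assessment.\n\n"
    )
    return preamble + task

print(critic_prompt("Review this landing page copy: ..."))
```

Paste the resulting text into whichever model you use, or put the preamble in a custom GPT's or Gem's instructions so it applies to every conversation.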
Do you have any tips Mike that you've used with yours?
Yeah, no, I think you hit the nail on the head: giving it explicit instructions
to be that kind of skeptical critic, to argue with you, whatever level of this you feel comfortable
with. Trust me, you will have the opposite problem if you do this right, where you're like,
no, I'd actually like you to go back to telling me I'm brilliant.
Yeah, you'll come away thinking, like, I'm not good at anything.
I did this when I was creating courses for AI Academy last summer and fall.
I built an AI learning assistant, and I've talked about this on the podcast, but in its system
instructions, it was very specifically told: I want you to challenge everything I give you.
When I give you a course that I've created and you're reviewing the deck for me,
I want you to challenge my ideas.
I want you to ask difficult questions. And it works, and sometimes it's like,
all right, dude, back off, like, I get it.
But yeah, you've got to kind of turn the temperature down a little bit.
All right, question six: what are some of the efficiency gains reported from the use of
generative AI in the marketing world? This person said we are jumping into AI use completely,
but in the interim our staff seem to think we have to get the job done the old-fashioned way.
So I think they're trying to kind of look for some proof here about what are we hearing when
it comes to efficiency gains in marketing thanks to AI?
I think this goes back to the question we answered earlier, Mike, which is just: pick your own
benchmark. You can go find all these different reports, and Mike and I could give you stories of
things we're doing in two hours that used to take 20 or 50 or 100, and you can have all
the stories you want. Just pick something internally, do a pilot of it and say, okay,
traditionally here's something we do every month. It takes us 17 hours on average. We looked at
previous data, or we went through on a task-by-task level and estimated, with the best knowledge we have,
how long it would take in a normal environment. Then we're going to do this with AI and we're going
to make sure that people are trained to actually use the AI properly, how they handle prompts,
things like that. Then run your own pilot and say, wow, we did it in two hours instead of 17.
So create a few of those and now you've got the business cases. Now you've got the proof
internally, and there's just nothing better than your own proof. You can recite
all the reports you want, but to your point earlier, Mike, it's 20 bucks a month. Just spend
the 20 bucks a month on the paid version. If you don't have it, get approval. If you're in a
bigger enterprise and you're trying to prove out the reason for buying like a Jasper or writer or
you know, a chat GPT enterprise or Google, Google Gemini for the team, whatever it is,
pick a business case that means something internally and show them that and say we can stack
these and then we can get to 10% efficiency gains, 20%. You can get to like 90, 100 pretty easily,
but no one's going to believe that. So start with believable numbers, just show, like,
you know, the reality. Yeah, I don't know about you, Paul, but I want to kind of communicate to our
listeners like I literally speak to leaders and audiences as part of my job and I can tell you
there's no study or stat from McKinsey or wherever, it's helpful information,
but it's not going to make people wake up more than you saying, like, hey, this thing we're all
familiar with, we used to do it this way, now we do it this way, and look at the difference.
Well, and to one of the earlier questions, if people don't have a high degree of AI
literacy yet, they're inherently going to be like, it can't do what I do. And we hear that
all the time. And it can be something as basic as writing a newsletter, like,
it can't write the newsletter the way I do. And I'm like, yeah, it can. I'm sorry. It can. Yeah.
So that's something you have to deal with as well. But I would say again, like focus on individual
use cases, business cases, prove that out through your own pilot, your own data, do a few if
you need to. And then that's actually one of the best ways to drive adoption across other
departments. Like if marketing's leading the way and you want to get sales on board or the success
team or the finance team, just show them a business case of something they do. What about your job
don't you enjoy? Like, what's the thing you would love for AI to help you with, that you'd rather
let go of? Don't take the thing that they love and care about, that gives them fulfillment.
Take the thing that they hate, and then show them how to do that.
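The pilot math in this answer is simple to formalize. Using the hypothetical 17-hours-to-2-hours example from above:

```python
# Efficiency gain from a pilot: hours before vs. hours after,
# using the hypothetical 17-hour monthly task done in 2 hours with AI.
def efficiency_gain(hours_before, hours_after):
    return (hours_before - hours_after) / hours_before

gain = efficiency_gain(17, 2)
print(f"{gain:.0%}")  # 88%
```

Stack a few of these believable, internally measured numbers and you have the business case the answer describes.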
All right. So, Paul, question seven, which I kind of selected because it is very related to this.
You touched on this a bit and I kind of just want to close the loop here. How are people tracking
and counting the team time saved by AI? Is it really just that benchmarking you're talking
about? Fire up a spreadsheet and basically start writing this stuff down?
For most organizations, probably. I mean, you and I come from the agency world, where you
tracked everything anyway. And so if you're in a professional services firm, you know,
like a law firm or a consulting firm or an agency, you're probably used to it and you have
benchmarks to look at. Like when I was running the agency, we had 16 years of time data.
So we could go back and look at anything and be like, oh yeah, the strategies take on
average 44 hours and the blog posts take on average 3.2 hours. And we knew that. If you don't track
time, which in most enterprises you don't, then pick very distinct use cases and develop a
benchmark. Or, like I said, go through and break any workflow into tasks, any project into a
series of tasks, and then at least make a best-guess estimate. Like, all right, if I was tracking my time,
this planning part would take me about three hours, and then go from there. Now, I don't know
if I should get into the Fibonacci sequence here. It's probably overboard. But we used
to use Fibonacci, because what I've learned in 16 years of running an agency is humans suck at
estimating how much time it takes to do something. It's almost always wrong. And so we used to use
Fibonacci, where the two previous numbers total the next number. So like one, two, three,
five, eight, you know, I'm not going to forget them now, like 13, 21, 34, 55, 89. I don't know,
I used to know them by heart.
Like I used to know them by heart. But we would do that. So I would be like, okay, this is a one
hour. This is a two hour. This is a three hour. This is a five hour. And you just sort of like
ballpark based on it, because it goes up by roughly the same percentage each time. So it removes
the human error from, like, this takes one and a half hours, or some precise number that nobody
actually knows. And we get distracted all the time, so those estimates are never accurate.
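The Fibonacci-bucket approach described here can be sketched as: generate the buckets, then round any raw estimate up to the nearest one, which removes the fake precision of numbers like "one and a half hours".

```python
# Fibonacci estimation buckets: each bucket is the sum of the previous
# two, so buckets grow by roughly the same ratio each step.
def fib_buckets(limit=89):
    buckets = [1, 2]
    while buckets[-1] < limit:
        buckets.append(buckets[-1] + buckets[-2])
    return buckets

def estimate(raw_hours, buckets=None):
    # Round a raw estimate UP to the nearest bucket.
    buckets = buckets or fib_buckets()
    for b in buckets:
        if raw_hours <= b:
            return b
    return buckets[-1]

print(fib_buckets())  # [1, 2, 3, 5, 8, 13, 21, 34, 55, 89]
print(estimate(1.5))  # 2
print(estimate(44))   # 55
```

So a task someone guesses at 44 hours gets booked as a 55-hour bucket, matching the 44-hour strategy example earlier in the answer.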
Question number eight, how do you manage the information on each AI platform? Do you keep prompts
separately? Each AI platform is getting better. And they constantly move positions in terms
of who is the leader. So like, are you porting information between models? Like how have you
managed that? I'd be interested to get your take on this one as well, Mike, because this is a daily
issue for me because I am now actively working in three models every day. So I do use Claude all the
time. I use ChatGPT all the time. And I use Gemini all the time. Gemini is baked into Google Workspace.
We are a Google Cloud Google Workspace customer. So that is native right in the productivity
tools we use all the time. But I then use the Gemini app separately. ChatGPT came to market first
with the best model, and so I have a history of three-plus years now with ChatGPT. I have
a bunch of custom GPTs built in there. And so a lot of times I gravitate back to there where I just
have a history and a memory and like I know it's going to do it the right way. If I'm working on
something new, especially if it's a high-level cognitive task like a strategy, I will often use
all three models, or sometimes multiple models even within the platforms. So I'll use, like, a
Claude Opus 4.6 and a Claude Sonnet 4.6, and I'll compare the differences. So this is very messy
for me right now. I don't have a good answer. I don't know, Mike, have you come up with any ways
you're handling this differently? It's getting better, but it's still super messy. I think the first
thing I started doing for the last couple of years is just documenting all of my workflows. So it's
not just about which AI tools. It's like, oh, at this step, we go into this GPT to do the thing. So
So even if worst came to worst and I had to switch, I'm like, okay, I just need to
consolidate the instructions of that GPT and spin up a Project or a Gem or whatever. So that's
been really helpful. What's really interesting, and I'll see how far this goes, is as I use Claude
Code much more, Claude Code will spin up different skills, like these markdown files that tell it
how to do something. It's essentially, you can think of it as a prompt, honestly, your instructions
for a GPT. What's cool about this is I have all those skills being created, updated, and logged in a
personal shared Google Drive for, like, the personal uses of Claude Code. So it's like, okay,
even if Claude Code got nationalized by the US government tomorrow or something, right? It's like,
I can then take those skills and give them to ChatGPT or Gemini. It's like my internal
knowledge and skill architecture that could port over. It's not going to be exact, but I've tried
it before and it works. So that's been nice, but also that takes a lot of work to build as well.
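The skill-file portability described here could be sketched as follows. The file layout (a folder of markdown files) is an assumption for illustration, not Claude Code's documented format; the point is that skills are really just markdown instructions you can repackage as a system prompt for another model.

```python
# Hypothetical sketch: turn a folder of markdown "skill" files into one
# system prompt you could paste into ChatGPT or Gemini. The folder-of-.md
# layout is an assumption, not Claude Code's documented format.
import tempfile
from pathlib import Path

def port_skills(skills_dir):
    parts = ["You have the following skills. Apply each one when relevant."]
    for path in sorted(Path(skills_dir).glob("*.md")):
        parts.append(f"## Skill: {path.stem}\n{path.read_text()}")
    return "\n\n".join(parts)

# Demo with a made-up skill file in a temporary folder.
with tempfile.TemporaryDirectory() as d:
    Path(d, "weekly-report.md").write_text("Summarize weekly metrics in 5 bullets.")
    prompt = port_skills(d)

print(prompt)
```

The port will not be exact across models, as noted above, but consolidating instructions into one portable prompt is the core of the approach.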
Yeah, I don't know if we talked about this on the weekly, but Anthropic last week
announced an import function where they give you a prompt that you can put into one of the other
models and it'll basically summarize everything in its memory, and then you give it to
Claude, and then it, in theory, remembers everything.
the way I do this internally, so again, I'm the CEO of the company, I dabble in all the departments.
My context is this: sometimes something will bubble up and become a priority for me that
isn't part of my daily job. And so I'll have like two days to grind on something while I'm traveling.
I've shared a couple of stories like this, like our success score for customer success as an
example of this. And so I'll spend like 48 hours and I will get it to a certain point. It's like,
okay, that's it. Like I'm tapped out. I got to move on to like my other CEO priorities and I have
to hand it off to the team. Well, what I'll do is exactly what Mike explained. I will create a
Google doc. I will create tabs within that Google doc. I will put the prompts I used across the
different models I've been experimenting with. I will have a tab for each of those outputs and
then the team can actually go in and see the different flows. And then if I have the time,
I will do a curation of all those outputs into a single, here-is-my-CEO-stamp-of-approval
best summary of what all the different models are saying, which ones I went with, and why.
Now part of that I do just for clarity for the team so they can see the thinking part of it is
to model behavior. It's to say, like, listen, I don't want you to just give me an
output. If I ask a coworker of mine to do something for me, I don't just want to see
a copy and paste out of Gemini. I want to see what you thought about, how you prompted it.
And so in essence, it becomes like an audit trail. And I found those to be extremely valuable.
And I do that now for almost every strategy doc I'm working on, all these innovations I'm
working on for the company. I start with a Google doc and then I just journal everything, so I
don't lose track of which model I did this or that in, because that happens all the
time too. It's like, where did I work on this? Which model was I playing around with success
scoring in? And I'll lose track of things. So I also keep a business journal that's just for me.
And I'll note in there, like, hey, I worked on success score these two days, here's a link to that
doc, because I'll forget. I had this happen yesterday. I found something and was like,
oh, shoot, I don't remember doing this. This is really good. I guess I started this like three
weeks ago and forgot about it, because so many times you can just start all these projects, and the
agent stuff is going to make this worse. You can just start things, and then two weeks later
you're like, did I do that? I feel like I started that project somewhere. You know, it's interesting.
I preach this increasingly to our teams internally: this really unsexy but important skill
is document discipline. Having a system that works for you to keep all this stuff, your notes,
your context, your knowledge, your working docs, all in a consistently organized place, that is
how you get value out of these tools, because you solve that problem ahead of time. As we
increasingly have agents, or Cowork, or whatever, just pointing at folders or Google Drive,
as long as you have this stuff organized well, you don't have to worry about
where your prompts are or things like that. Okay, question number nine. How can we
balance putting a lot of company content into AI with privacy?
How can we leverage AI in industries that require a lot of security? They gave the example of
science. Yeah, I don't know a way around this other than working through IT and legal. Like they
have to, you know, not only be in the loop, but probably be in the driver's seat from a
governance standpoint when it comes to sensitive information and confidential information,
personally identifiable information, whatever. What I often encourage people to do is
find all the use cases that don't have to touch any of that. Let IT and legal do their thing
and protect the organization and the users, and put generative AI policies in place that
provide the guardrails for responsible use. But don't let that slow you down from all the use
cases that don't require that data. And there's, I mean, in marketing and sales and customer success,
there's literally thousands of things you can be doing every day, even in a bank or a hospital
system, a law firm, like there's all these uses that don't have to touch any of that data. So
definitely something you have to think about, definitely something you have to collaborate
with the right people internally on, but you need to own finding all the safe use cases that
can let you race forward while they're figuring all this other stuff out. I've talked
with way too many companies, even in the last couple months, that are just sitting on the sidelines
because IT and legal still have to approve every use case or tool. And that is not a
sustainable model with the rate of accelerated change we are going through.
And I certainly sympathize with how hard that can be to figure out, but I honestly think some
individuals or companies are using this data thing as an excuse to not take action.
It's like you can be doing so much with just the knowledge in your head.
Yeah, and if you hear us saying that and you're like, but I don't know what that is, just come to
the free intro class. Honestly, like, if you just attend the intro to AI class,
you will have the frameworks to go figure this out and, like, move the ball forward.
Just do not let waiting for IT and legal stop you from making progress.
Question number 10. Are there certain roles or role types within the marketing function that
you envision being rapidly undercut or impacted as AI evolves?
I really struggle with how to answer this sometimes. My answer is all of them. If I'm given
the multiple choice and there's an "all of the above," I would choose all of the above.
Yeah. I think the ones where it actually affects job security and job opportunity is any entry-level
role that just completes tasks. And, trying to think how to frame that.
If someone gave you a campaign and you were just executing, and all you do
for your job is build landing pages and write email copy, write ad copy,
you know, if you only do one of those narrow things and you just do it a bunch all day long,
yeah, you're cooked. That is not a job one to two years out.
So I think anything that has a very narrowly defined role, like these are the 10 to 15
tasks, and AI is really good at all of them right now. And so I think a lot of
times that is the entry level. That's why we're starting to see some early data that entry-level jobs
are very difficult right now because firms like ours, I've said this on the podcast, I would love
to hire a ton of entry-level people. Like, I want to create job opportunities for students straight
out of college. I don't know what they are right now because when I have an idea to build something,
say I want to build a new app, or a new, you know, success score, whatever. When I do that,
I will just then go in and say, okay, great, we finished it. Now write the landing page,
write me the emails, do all these things, and then I'm going to hand that to the marketing team.
And so I hand the marketing team an almost fully baked campaign that they just need to edit
and execute. Yeah. So all the work that used to get pushed down, as the CEO I now do it in like
seven minutes, when it would have taken the team seven weeks to do.
And so once you have, you know, managers, directors, VPs who realize they can just click a button
and do most of the work the entry-level people did, that's going to rapidly disrupt the job market.
So I don't know, like, you know, copywriters, obviously a role that's been under attack for a
while here. And I think that's going to continue to be, you're just going to need fewer copywriters.
You're going to need AI forward copywriters. But if you have two of those, that's the equivalent
of like 20 traditional copywriters. So I think that's what's going to happen is you're just going to have
AI infused into a lot of roles. They will evolve, may not have AI in the title. They're just going
to become AI forward versions of whatever that role was. And they're going to be able to 10X. And I
honestly don't even think 10X is an exaggeration at all. It's not based on what Mike and I are
seeing every day in our own company. Yeah. And I would just add to that something that
increasingly goes through my mind. And you know, this may be uncomfortable to say, but I don't
mean it in a negative way. With the entry-level thing, the tasks thing, if I'm the one who
has to sit here and give you the workflow or the series of steps, it's increasingly irresponsible
of me to do that for a human. That's got to be something I should be giving to an agent
that can then scale, not to replace a human. But if that is your job, if you're like, well,
okay, I take the steps my manager gives me and go do those, that's a really dangerous place to
be in, right? Yeah. And I think, you know, we talked about OpenClaw a few times on the podcast.
It's been a little bit more of a technical topic. So we haven't gone super deep on it. But maybe we
should connect the dots a little bit better on an upcoming episode. The way I think about this
is, whatever they're doing right now with that, you know, building these swarms of agents where
you can just give it a task, direct it to a knowledge base, and it can just go do the thing.
Claude Cowork would be an early example of this.
It is just a matter of months until these SaaS companies are selling marketing agent swarms. Like,
here's your marketing team out of the box, with a media buying agent and a copywriting agent
and all these things that used to live in some of these SaaS companies as templates or GPTs.
Basically, just imagine you're just paying for a marketing team. And
maybe they sell it for $250,000 a year or whatever, but it's everything you need. Just plug in.
And these are agents that have harnesses attached to them that define what tools they
get access to, what the system prompts are. And then you're literally just buying teams. And I'm
honest to God, I'm not exaggerating. I think by the end of this year, I could absolutely see
companies starting to sell their software in that way, where they just pre-bake agent swarms to
do specific things. And that is going to be very disruptive, but it is 100% coming. Again,
when I think about things I'll say on the podcast, I generally will only say things
I have a high level of conviction on, and on a one to ten scale, I'm like an eight or
a nine that this year you will be able to do that. You could do it right now. I mean,
if you and I had a week, we could turn Claude Cowork into that. This is not hard.
This is functionally what we are doing piece by piece with stuff like Claude Code and Claude
Cowork, where you're building these skills to train it to do these things increasingly autonomously.
It's just harder to do because it's not integrated directly into the systems and the software you
pay for every day. Coding is the canary in the coal mine. We've said this many times. Everything
is happening in coding first, and then it very quickly comes to the rest of knowledge work. And I think
this could happen even faster, because half the battle here is these software companies just trying
to figure out the pricing of it. Once they crack the code on that, we're going to see it everywhere.
And then honestly, how do you position that? So, like, let's say you're a CRM company that sells
software and you sell licenses to marketing teams and you find a way to use your CRM as the source
of truth and you realize you can build agents on top of Claude or OpenAI or Gemini or whatever
you're building the agents on top of and you can create these agent swarms that function as a
marketing team. How do you possibly go to market with that message? Because it is a pure
replacement play: for X dollars, instead of buying 100 seats, we're going to charge you
$100,000 a month or whatever that number is, and you're going to have an in-house marketing team,
and all you really need is people to be the human in the loop and oversee and guide those agents
and keep them on track. And that's why I think that is where software companies, the legacy companies
could get disrupted real fast is their inability to say what has to be said, which AI-native
companies will have no problem saying. A Silicon Valley startup will come
in and be almost cavalier about it. They're going to raise $50 million because they're
going to go after, let's say, a $90 billion labor market. I don't know what the actual
labor market for marketers is, but let's just make a number up and say it's $90 billion a year in
payroll. You just go after it, and you get, what, one percent of that, that's $900 million.
That's an easy raise if you're trying to get money from VCs, and trust me,
it's already happening. Yep, they're all doing it. All right, this next question, question
11 is a bit sales specific, coming from the AI for Sales webinar. So this person asked: when AI
can start making sales calls, won't people associate that with spam robocalls? I can't see how
this would make a cold prospect trust my business, because I wouldn't trust any company that had an
AI call me. They can't be bothered to pick up the phone, so I can't be bothered to buy from them.
So certainly one perspective, I'm curious about here, like how do you look at this because we are
seeing a lot of companies start to experiment with this type of thing. Yeah, I mean, I think it's
just a pure numbers game. Cold calling, most of us hate it, but it does work at some small percentage.
I'm not a pure salesperson, but it's that how-many-calls-do-I-make math.
We know the numbers: if I make a hundred calls, I'm going to get three people to actually
talk to me, or whatever that number is. Now imagine it's an agent swarm doing this,
and it can make 10,000 calls in a day. You're just playing
percentage games. And yes, most people hate it and wouldn't trust that company. But if it's a company
that isn't built on trust, and unfortunately there are many of them that are just in it to,
you know, make money and drive those sales, then they're totally going to use this method and
just flood the market and play the numbers game. It costs them way less to take shots on
goal with a machine rather than a human. So I'm sure this is already happening. Again, I'm not
living in this world, but I can almost guarantee you there are, I don't want to call them unethical,
that's probably not fair, there are companies that play this game, just the cold-call
outbound spam game, and they play it well. And there's no way they're not playing it with AI right now.
Yeah, for sure. And you know, we've talked about this on past episodes. Are we really talking
about the issue here being AI, or the fact you don't like being cold called,
like you just mentioned? Think about it in a different context, like chatbots. People don't
care that chatbots exist. They care that they're bad. If these calls are good and
relevant and personal, and less annoying than a human that knows nothing about you anyway,
who knows? Yeah, if they can target people based on needs, if they can get the right data set.
Yeah. And they can predict that you're someone who is, you know, a likely buyer of what they
offer. Yeah. And you get them at the right moment. It's just like advertising:
a lot of times you tune out ads, but if it's something relevant to you, you stop and listen. So
if you're in the market at the right time, and they can use predictive modeling to figure that out,
then yeah, who's to say they're not going to be successful doing it.
Question number 12. If someone suddenly finds themselves with so many extra hours each week,
because AI is doing things like handling admin and support work, what strategies would you
recommend to make sure that time is reinvested into real growth and competitive advantage rather than
just more busy work? We often guide people to have a sandbox. So, as you're working through your AI
adoption plan, especially when you think about scaling out within an organization, you need to
coach people. Like, one option is you can give them some time back, maybe they don't have to work as
many hours. But the other is, and you might have to do this in a workshop model where you're
helping people ideate, to ask: what other value can I be creating? What are the other projects
that I can be working on? What are new ideas? Like maybe I can take 20% of my time savings,
I can put it into innovation. Like that would be an amazing thing if we had a, you know, a 20%
innovation budget for people's time. And you have innovation workshops every quarter, and everybody
comes up with innovation ideas. And as they save more time, they've got their wish list that
management has signed off on and prioritized: if you get time, these are the innovations
we're really excited about, let's do these. That would be a great use of it. And again, because
I've said this many times, but to me, the only way we slow down the job disruption is through
innovation and growth. And so an innovation and growth mindset is essential. And the idea of
having an innovation sandbox, now that I'm saying this out loud, I've never actually verbalized
it quite this way, and it makes a ton of sense as I'm saying it. We need these internally.
Having an innovation and growth sandbox of ideas that's like, hey, the thing I thought was going
to take me all week, I actually just did, and it's 11am on Monday, what do I do this week now?
Innovation sandbox. That would be a great use of it.
Yeah, and there was a recent post actually from a software entrepreneur who basically was like,
hey, months are now weeks and days with what AI enables. And he's kind of like,
I'm doing our company planning and all this quarterly stuff, and we just did it in an afternoon.
So to get the wheels turning on innovation in that way, you have to ask some kind of insane
questions sometimes. Like, hey, this is my goal for the year, could it be done in a day?
Something like that. Yeah, this is a hot topic for us because we're having an annual meeting next
week. And we're doing an innovation workshop. Like I'm leading an AI innovation workshop. Mike's
doing one on productivity. We're talking about rocks for the upcoming quarter. And things that
seem crazy aren't anymore, whether it's the goals for the company or
what can be achieved in a quarter. So what used to be like, all right, I'm going to have these
five rocks, and this will get me through the next three months, and if I accomplish this,
that'll be great. And as a manager, as a leader, I'd be like, yeah, that would be great. But what if
you did those in the first three weeks of the quarter? Right. Because I'm looking at
them thinking those aren't three-month projects anymore. And so I do think it again is a mindset shift.
But it's challenging people to think much bigger about what they can do.
Question number 13. There is this larger conversation around the SaaS apocalypse, which we talked about
on a previous podcast episode. When do you personally think it makes sense to purchase software
versus try to do things yourself or build things yourself with tools like Gemini, etc. For example,
like if I want to analyze calls for client sentiment, right? Like that's something AI might be
able to do out of the box with the right prompt. Or I may need to consider buying software for it.
How do you look at that? Yeah, I think generally you're still just going to be buying software in
a lot of these cases. I'm looking for a tweet right now, something I saw when I couldn't sleep
last night. I was up at like two in the morning just looking at stuff. Yeah, here it is.
I think I put this on the list for us to talk about next week, Mike, but I'll just use it now.
So, okay. At a high level,
you're likely still just using the traditional tools; there's a reason they're good. The more
likely scenario is that how you use those tools and your pricing plan will evolve, to the point where
you're going to have agents that are just logging in and extracting information and
you're not going to need as many seat licenses. And so your relationship with that software company
may change. But the reality is most companies aren't going and building a CRM. Now, if there are
distinct, narrow use cases where you can, you know, vibe code something as someone with no
coding ability like me and Mike, then maybe you are. Like, I showed the example of an org chart
builder. I just couldn't find one, so I built one myself in Lovable. That's a more
practical example, these point solutions, maybe for internal-use-only stuff like that.
So here is the tweet. This guy is Todd Saunders on X, at Broadlume.
Previously at Google. So he tweeted, we all knew this was coming, but today I heard about it
actually happening. A seed-stage company backed by a well-known VC openly admitted in a board deck
that their strategy is to get access to a large incumbent's software through a customer,
clone the entire thing using Claude Code, and offer it for 90% less.
Not build something better, just copy it and offer it for less. The VC endorsed this as the
go-to-market strategy and even wrote back, in writing, that it was a good idea. Using a customer's
licensed access to reverse engineer a product and clone it is ethically bankrupt, I don't know how
else to put it. It likely violates terms of service. It may violate trade secret law as well,
but I'm certainly not a lawyer. And a reputable VC putting this in writing in a board deck is
genuinely insane, but it's going to happen anyway, everywhere, all the time. I don't know where
this ends, but we all knew this was coming and now it's here. So I don't know. Like I said, no one's
going to condone this here, but this is how the models work. You know, the rumor is
that this is how the Chinese labs are currently distilling models from Gemini and Claude and
ChatGPT, by just prompting hundreds of thousands of times and in essence reproducing
the weights and how to build the model. So you can certainly do it, and maybe in
the US it's going to be illegal, but that doesn't mean other countries aren't going to basically do
that. They're just going to get a license, and, I don't know. So yeah, from an ethical perspective,
stick with the first answer; the second piece of context is just to give you a
sense of what is actually happening in the world, because it's kind of weird. You know, one final
point that we've talked about a bit in the past, I think, is, forget building
or buying, I also wonder how this will change your expectations of your existing software.
I'm already sometimes frustrated by stuff that's very valuable that we pay for, because I
know that Claude or Gemini could give me a much better answer if I just had it layered over
this data. It's like, why is the AI in this software we're paying for not as good? That might be a
good, more narrow example to stick on for a second: imagine you have a CRM system and
you want to talk to your data, and for whatever reason they have yet to build the agent into it that
makes that super easy to do. So then you're like, screw it, I won't pay you on a token-by-token
basis, or credits, however you want to charge me, to use your crappy agent that doesn't give me
the information I want. I'm just going to connect it to Gemini or Claude, get the data myself
and use that, and now I'm paying them the tokens. So that's a more realistic thing:
these narrow plugins or use cases where you're like, I'm just going to extract the data
myself. I'm not going to wait for the software company. If you do that enough
times over enough use cases, then maybe you don't need that software anymore. Yeah.
All right, question number 14. I'm nervous about others using AI agents irresponsibly while it's
connected to my personal information. Is there anything we can do to protect ourselves from others
who are basically implementing agents without guardrails? Join the club on this one. I'm also
very worried about this. We'll share a story on Tuesday in the weekly about an
all-hands meeting at Amazon where their agents apparently went haywire and wrecked a bunch of
code at AWS. These things are not reliable, and in many cases they are not safe, and people
are definitely racing ahead in using them regardless, and we all could be collateral damage in that
grand experiment. I don't know how to protect ourselves other than the traditional ways we would
monitor our personal information and our credit scores and things like that. You know,
having those fraud alerts set up personally. It's probably a lot of the traditional stuff.
It just becomes more important over time. I would imagine from a business perspective,
you'd probably want to be talking to your insurance agents about liabilities related to these
things and how to protect yourself and your employees. Maybe your employees mistakenly use agents
to do things. It's a whole new world. And I think even asking this question is a good starting
point for people. All right, our last question here, Paul, question 15. When you're having conversations
with leaders, what's your approach to communicating the fact that IT ideally shouldn't be the one driving
AI adoption in the organization? It's not IT's job. IT's job is not business strategy. It's not
reskilling and upskilling people and dealing with change management. This isn't a technology thing.
This is a complete business transformation that has to be fueled by AI literacy,
you know, from a leadership level down. And then you have to be able to personalize use cases.
You have to be able to communicate with people who are afraid like there are so many layers
that have nothing to do with IT. IT is there to keep people safe, to ensure the technology is used
responsibly, to reduce risk, to make sure the data stays secure. They play a critical role. They are not the ones
that should be telling marketing how to use Google Gemini. That's not their job. So I just think
you lay out what are the goals of our use of AI technology. What are the sample use cases? How
are we going to infuse it into workflows? And none of that is IT's job. So I would just lay out
what needs to happen for true AI adoption and transformation. And it's very apparent at that
point that IT plays a critical role, but it is not to guide the strategy.
All right, Paul, that's our 15 questions for this episode of AI Answers. Again, if you need a
reminder, go to smarterx.ai/webinars and you can check out each of these awesome
webinars we did. Go to smarterx.ai/blueprints and you can get a non-gated copy of each
of these blueprints for marketing, sales, and customer success, in partnership with Google Cloud.
We're so appreciative to them for making all this possible as well. So Paul, thanks for my first
AI Answers. Yeah, just FYI, Cathy will be back for the next AI Answers, but Mike and I
did these webinars together. That's why Mike and I ended up doing this AI Answers together.
So Mike and I will be back with episode 203 of the podcast on Tuesday. So thanks for joining us
for this special edition and we will talk to you again next week. Thanks for listening to AI
Answers. To keep learning, visit smarterx.ai where you'll find on-demand courses, upcoming classes,
and practical resources to guide your AI journey. And if you've got a question for a future
episode, we'd love to hear it. That's it for now. Continue exploring and keep asking great
questions about AI.
The Artificial Intelligence Show



