
You're polluting the world with AI Workslop and you don't even know it. 🗑️
In a world where everything is free and fake -- or, AI -- it's easy to just throw unlimited spaghetti at the wall and see what sticks.
But there's a downside to just blindly rubber-stamping those generic outputs from LLMs. And it's worse than the workslop epidemic. It's losing trust.
So, how can your company survive and thrive in an AI world where everything is fake?
Tune in and find out.
Newsletter: Sign up for our free daily newsletter
More on this Episode: Episode Page
Join the discussion on LinkedIn: Thoughts on this? Join the convo on LinkedIn and connect with other AI leaders.
Upcoming Episodes: Check out the upcoming Everyday AI Livestream lineup
Website: YourEverydayAI.com
Email The Show: [email protected]
Connect with Jordan on LinkedIn
Topics Covered in This Episode:
Timestamps:
00:00 "Navigating AI-Driven Distrust"
04:01 AI, Jobs, and Fake Realities
06:35 "AI vs Expert Content Quality"
10:09 AI-Driven Online Interaction Surge
14:41 "Trust Fading in Imperfect Brands"
16:31 "AI Literacy: Bridging the Gap"
19:18 "Elevating Expertise in AI Workflows"
22:58 "Context Engineering for Domain Expertise"
28:11 AI's Impact on Blog Quality
29:21 "Fighting AI Work Slop"
32:40 "Everyday AI: Join & Explore"
Keywords:
AI-generated content, everything is fake, AI workslop, work slop, AI slop, trust crisis, deepfakes, synthetic media, fake landing pages, fake customer service, AI-enabled fraud, voice cloning, agentic AI, discourse bots, domain expertise, human expertise, context engineering, content detection
Send Everyday AI and Jordan a text message. (We can't reply back unless you leave contact info)
Start Here ▶️
Not sure where to start when it comes to AI? Start with our Start Here Series. You can listen to the first drop -- Episode 691 -- or get free access to our Inner Circle community and all episodes: StartHereSeries.com
Also, here's a link to the entire series on a Spotify playlist.
This is The Everyday AI Show, the everyday podcast where we simplify AI and bring its power to your fingertips.
Listen daily for practical advice to boost your career, business, and everyday life.
Everything is fake.
The text you read on that landing page, that amazing photo you saw on Instagram, the viral video on Twitter and all of those comments
that customer service rep you talked to last week that was surprisingly chipper and actually helpful.
There's a good chance all of it was fake or at least AI.
And I think we're about to cross over into a very dangerous place where the majority of our day-to-day interactions
will be engaging with some type of media or medium that's either partially AI generated or completely AI.
Yes, there's the fast emerging threat of deep fakes and fraud that's not going to go away.
And while I think there's a whole other episode to be done on the impacts that the everything is fake disease has on us as individuals,
there's probably a more important ship that you have to right right now.
That's how your company actually uses AI but remains human.
See, I'm someone that encourages AI use like all day every day but I know most people and most companies don't take the proper care in elevating human expertise while using AI.
And that's led to this onslaught of workslop in 2026 or the never ending barrage of generic sounding uninspiring outputs that your company may be rubber stamping.
So how can you fight back against the year of the fake?
How can your company resonate in a sea full of mundane workslop?
And how can you make sure the expertise of your brightest people doesn't get drowned out across all of your business processes?
Well, we're going to talk about that on today's episode of Everyday AI in our Start Here series.
Everything is fake and how your company can leverage human expertise and fight the AI workslop.
So here's the big picture.
Right now there is an AI trust crisis. It's already here.
So a recent study from Europol projects that up to 90% of online content may be synthetically generated by the end of this year.
And that means that trust is essentially gone.
A new Salesforce State of the AI Connected Customer survey said that 72% of consumers trust companies less than they did a year ago.
So if almost everything that we see or read online is going to be AI, and people don't trust companies, what's the answer?
Well, it's elevating your human expertise while still using AI.
And that's what we're going to talk about on today's show.
So if you stick around for the next 20 or so minutes, here's what you're going to learn.
You're going to learn why defaulting to everything is AI generated is now sound professional discipline and not cynicism.
You're going to learn how AI workslop is silently destroying trust and revenue before any dashboard shows it.
And you're going to learn what the most AI native companies do to stay human, authentic and competitive at scale.
All right, let's get into it. Welcome to Everyday AI and our Start Here series.
This, if you're new, well, it's the essential podcast series to learn the basics of AI, and if you're a regular, to double down on your knowledge.
So after doing 700-plus episodes of Everyday AI, I didn't have a good answer when people always said, hey, Jordan, you have a lot of information out there.
Where do I start?
Well, you start here with the start here series.
I think it's best if you listen to all of these in order.
They're shorter episodes, usually about 25 to 30 minutes.
But if you want to catch up on all of them in order, I suggest you go to StartHereSeries.com.
That is going to give you free access to our inner circle community.
And in our start here series space, you can go and listen to all of the episodes in order.
Read about them.
There's a playlist that we keep updated there and everything else.
So if you missed our last episode, we went over the AI labor shift when it will happen and what it means for jobs.
So let's talk about why everything is fake and why I think this is actually one of the biggest problems that most people don't know that we're fighting against.
So again, when I tell you this, I'm not trying to brag about something.
In many instances, because I am drowning in AI every day, I feel that I pick up on certain trends and topics maybe a little before the average, you know, business user.
And what I've seen over the last six months is this onslaught of workslop, right.
And I don't know if the average every day business consumer has realized this.
Maybe you have, if you spend any time on, you know, LinkedIn, Twitter, you've probably already seen this.
So I think it's already impacted the written word.
But I think it's going to impact everything, right.
So as vibe coding becomes very commonplace, it's going to impact, you know, all of the apps we use, there's going to be app slop.
There's going to be video slop, right.
But essentially, I think the smart business consumer needs to realize that this is just the future.
And there are things that we can do to fight against it.
So let's start here.
Right now, no one really knows when or if something is AI generated, which is maybe a good thing, right.
If you're thinking, oh, well, I can use AI at scale.
Yes, you can.
So when something is done correctly, it is very hard to tell the difference between, you know, an expert who is using AI correctly at scale versus a team of maybe 20 humans who aren't using AI, right.
As a former journalist, you know, I was a Pulitzer fellow, right.
I won the ACP Story of the Year.
So I was a pretty decent journalist way back in the day, pretty good at writing, right.
I think in writing, AI has already far surpassed what the average human writer can do.
But there's a big gap there.
Because when I say that, a lot of people won't believe me, and they're going to be like, no, what comes out of, you know, ChatGPT or Gemini or Claude or, you know, Copilot, whatever, is kind of garbage.
No, it's, it's not.
It's better than what award-winning writers like myself can do if you know what you're doing.
But that's a big if and that is elevating the human expertise, right.
Because if someone goes in and doesn't put a lot of care, they don't understand the basics of context engineering or understand how large language models work.
They're not going to be able to produce something that's economically valuable.
They're going to produce workslop.
So that is the generic output that you get from, you know, just trying to either take the shortest way out or trying to get the quickest output.
But right now, humans can't tell the difference between AI content that's produced by an expert and good human-only content created by actual experts.
A study from Boringa last year showed that people who claimed to be able to confidently spot AI images scored only about 30% accuracy.
So yeah, even when people are like, oh, I'm sure that this is or is not an AI image.
No, also self-reported confidence in AI detection is rising every year while actual accuracy keeps declining.
And this is not something that your team can just solve with better instincts or better tools.
So what do I mean by this?
The quality of what AI can produce.
I keep, I've been saying this over and over the last couple of months.
The quality and scale that we've seen in the last three months has far surpassed what we got in the three years prior.
And that's what I think has gotten us to this crisis of well, everything's fake because you can't tell, right?
If someone knows what they're doing in, you know, Nano Banana Pro, or, you know, Seedance for video generation, Veo, right, all of these platforms.
If you're using the best platform for image editing, video editing, text generation, web, right, for software engineering, agentic coding, the output, ultimately, if you have an expert driving it.
It's pretty much indistinguishable, which is crazy to say, even for video, right? Back in the early DALL-E 1, DALL-E 2 days, you would have said, oh, you know, AI video is 20 years away, right?
But here we are, you know, five years after that point in time.
And well, no, it's already, you know, gotten to the point of human level, again, if you have an expert human driving it.
So when everything is fake, you can't overlook fraud. That's only one concern, but I think you have to understand this.
So I am speaking to this from a couple of different ways.
For consumers, I think today's show may be helpful that you need to change your mindset and assume literally everything is fake.
Because AI, right, I've gotten a lot more comments recently, right, that, you know, this podcast or myself is AI. No, I've literally had a cold off and on for like three months, right, but I've gotten all these comments recently.
It's like, oh, no, this is AI. No, it's not. I don't think AI can quite ramble on like I do, not yet, or yet simulate my stuffy nose or hoarse throat.
Right. So no, if you're listening on the podcast, this is not AI. I'm a real human doing this live and unedited, unscripted. My voice has just been gone off and on for three months.
But I think it's a good mindset for consumers to have, right, when you're having that conversation with someone over the phone, right, when you're watching something, or, you know, probably the biggest thing that no one's talking about:
agentic AI, right. With all these, you know, OpenClaw variants out there, probably anything you read online, even discourse, right, people debating things on, you know, Twitter, LinkedIn, Reddit, Quora.
It's probably mostly going to be bot driven, I would say, within a couple of months, and people aren't going to understand that.
So yes, I think this is important for consumers to understand that pretty much anything you interact with will probably be fake.
But I think it's also as business leaders. I think it's important to understand, right, that, that email you got from a client. Is it real or is it not, right?
You know, pretty, pretty soon, right. I'm getting this all the time. I'm getting emails from AI agents. Sometimes they disclose it. Sometimes they don't. I've gotten video pitches from, you know, clearly things that are AI and people think that I may not know.
People think that maybe I can't tell the difference, right. So yes, this does lead to problems on the consumer side and on the business decision maker side, but also fraud, right, company wide. I think you have to be paying attention to this.
So in the Experian Future of Fraud Forecast that came out this year, 72% of business leaders identified AI-enabled fraud as a top operational challenge.
I think maybe two years ago, most people understood that AI was a huge threat, right, because of what you could do with deepfakes, things like voice cloning, obviously a huge concern. But now the barrier to do this, the technical know-how, is essentially zero.
You know, anyone that can read could probably figure out within 30 minutes how to do this at scale: voice cloning, you know, AI videos that look very real.
It's very easy, right. These fraud tools are now free. They don't require any technical skill. And they allow complete autonomy and anonymity as well.
So anything that's important for your day-to-day business operations, so vendor proposals, you know, job candidates, executive communications, they can no longer just be verified by sight alone.
I think it's important that we come to that realization that, you know, these, these routines that we've gone through throughout the years, checking email, right, hiring.
Oh, yes, looking at these resumes, right. Probably we've already seen this over the past two years. Now you assume every single resume is written by, you know, ChatGPT, Claude, Gemini, or something, right. But the same thing with videos as well, audio, you know, interviews, similar.
There's a flip side to this deepfake issue, right. And this is, again, maybe a good conversation for another episode where we can go deeper on this, but even proving things that actually happened.
I think that's going to be hard to prove, right. That's called the liar's dividend. That's when real data and real expertise gets lumped in with AI spam, right.
Kind of how, you know, people have been saying, oh, Jordan, you're, you know, because my voice has been going in and out in my audio. It's a little weird because of that.
People are saying, oh, no, you're AI, right. No, that's the liar's dividend. But I think it's important to realize and understand as well.
And because AI content detection, right, is not really reliable, right. There's obviously things that are a little bit better and a little more reliable.
But that's a problem, because I think that your company's real expertise can actually be dismissed as AI generated.
Just because there's zero record.
And that's led to this trust crisis that, if you don't know, is happening right now.
Right. So like I said, that Salesforce study said that 72% of people trust companies less this year than they did last year.
That is a drastic drop.
Right. And that has led to, I think, or is the result of maybe, workslop. Right. I think consumers are trusting companies less.
Number one, if you are putting out low-effort content at scale, which I think so many, so many people are, and that has led to workslop.
But number two, if you're not putting a human face, if you're not putting imperfections out in the world. That's one of the reasons, all along, right, before I even started Everyday AI more than three years ago, I said, I'm not going to edit this thing.
I said, I'm going to go on, I'm going to do this thing live, because I knew at the time, I'm like, in a couple of years, you know, this is all going to be AI generated.
So I want to get my imperfections, my, my stuttering, my sniffling, all of it, like that's, that's human, that's authenticity. And I think that's one of the reasons why, you know, this, this podcast has, you know, kind of grown in popularity over the years, because it's real.
It's authentic. And I think that brands need to understand that too, because of the trust crisis. Right. So, good example: three major dictionaries named AI slop their word of the year in 2025. Right.
And workslop, I think, is going to trend in that direction. If you haven't heard of workslop before, right, that's just AI output that's technically competent but carries no domain expertise.
It just sounds bland. Right. And social media is already a graveyard of enterprise workslop. Most companies don't even realize that they are actually contributing to this.
But here's three reasons. I think that companies are still falling into the workslop trap. And again, we're going to get to how you can overcome this with human expertise.
But the three reasons are education, cost, and accuracy. All right. So, education. A recent McKinsey study said that only 6% of companies qualify as AI high performers generating meaningful business impact.
This is, everyone's using AI, but how many people out there have actually learned, have actually taken courses, are actually taught, right? We use large language models every day.
Does anyone know top-k, right, top-p, temperature, right? Not that those things are super important, but you should at least know what those things are, right, for the tools that we're using every single day.
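For anyone curious what those knobs actually do, here's a rough, self-contained sketch of temperature, top-k, and top-p over a toy token distribution. This illustrates the sampling concepts only; it is not any specific vendor's API, and the example logits are made up.

```python
import math

def sample_filter(logits, temperature=1.0, top_k=None, top_p=None):
    """Illustrative decoding filter: temperature scaling, then top-k,
    then top-p (nucleus) truncation, over a toy token distribution."""
    # Temperature rescales logits: below 1.0 sharpens, above 1.0 flattens.
    scaled = {tok: l / temperature for tok, l in logits.items()}
    # Softmax to probabilities (subtract max for numerical stability).
    m = max(scaled.values())
    exps = {tok: math.exp(l - m) for tok, l in scaled.items()}
    z = sum(exps.values())
    probs = {tok: e / z for tok, e in exps.items()}
    # Rank tokens by probability, most likely first.
    ranked = sorted(probs.items(), key=lambda kv: kv[1], reverse=True)
    # top-k: keep only the k most likely tokens.
    if top_k is not None:
        ranked = ranked[:top_k]
    # top-p: keep the smallest prefix whose cumulative probability >= p.
    if top_p is not None:
        kept, cum = [], 0.0
        for tok, p in ranked:
            kept.append((tok, p))
            cum += p
            if cum >= top_p:
                break
        ranked = kept
    # Renormalize the surviving candidates so they sum to 1.
    z2 = sum(p for _, p in ranked)
    return {tok: p / z2 for tok, p in ranked}

# Toy logits, invented for the example.
logits = {"the": 4.0, "a": 3.0, "slop": 1.0, "moat": 0.5}
print(sample_filter(logits, temperature=0.7, top_k=2))
```

The point is just that these parameters trade off diversity against predictability, which is exactly the kind of baseline literacy the education gap is about.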
This is the equivalent, this is like the 2010 equivalent of sitting in front of a computer, as most knowledge workers did in 2010, and not knowing what an email was, or not knowing what a URL was.
It's like, no, you know what a URL is. That's the thing you put in the bar. You type it and you click the button. Right. It's how it operates and how it works.
Anyways, education is the first reason. Number two, cost. The economics of AI are irresistible and undeniable. AI generates quality, or at least passable, outputs in seconds at a fraction of the human cost.
And then accuracy. I think that's the third reason that we have so much workslop: so much of the day-to-day deliverables fall into a gray area. Right. I think good enough passes for good.
All right. Let me say that again. And how many times, if you're being honest with yourself, if you've just blindly used AI output and you've looked at something, how many times have you said good enough? Right. Whereas before AI, when you were doing it manually.
I don't think good enough would ever pass for good. Now it does. In an age of AI, right. We're like, oh, that's good enough.
So we've talked about, so far, why workslop is on the rise, the impacts, maybe, personally on consumers, fraud, deepfakes. But here's the cure.
It's elevating human expertise inside of your AI setup.
You need strategic oversight from experts. A SmythOS report found that AI content with human strategic oversight performs more than four times better than fully automated output. Right.
This is what makes decision makers click yes or no on that proposal.
A four times higher, a four times better result when you properly put in human expertise, when you properly elevate those domain experts at the right place at the right time in your AI workflows, which, unfortunately, isn't being done. Right.
Because I think when AI made content creation nearly free, that instantly elevated domain knowledge to be the competitive moat overnight.
And I'm talking about basic text to text, large language models, all the way through fully autonomous, you know, multi agent orchestration, everything.
It's elevating the right domain expert at the right time in the process, because the winners right now, those that are getting the most out of it, are pouring so much human expertise into their AI that the outputs are unmistakably theirs alone, right.
And if you've been listening to our Start Here series, we talked about a little of this in the context engineering episode.
But I think it's worth maybe repeating one or two things from that.
AI moves too fast to follow, but you're expected to keep up. Otherwise your career or company might lag behind while AI native competitors leap ahead.
But you don't have 10 hours a day to understand it all. That's what I do for you. But after 700-plus episodes of Everyday AI, the most common question I get is, where do I start?
That's why we created the Start Here series, an ongoing podcast series of more than a dozen episodes you can listen to in order.
It covers the AI basics for beginners and sharpens the skills of AI champions pushing their companies forward.
In the ongoing series, we explain complex trends in simple language that you can turn into action. There's three ways to jump in.
Number one, go scroll back to the first one, episode 691.
Number two, tap the link in your show notes at any time for the Start Here series. Or you can just go to StartHereSeries.com, which also gives you free access to our Inner Circle community, where you can connect with other business leaders doing the same.
The start here series will slow down the pace of AI so you can get ahead.
It's all about making sure that you have the expert lined up with whoever is building whatever AI flows. Here's what I mean by that.
I think so many times, in so many organizations that I talk to, I say, okay, you have this great, you know, scheduled AI flow, you know, it's great. It's hands off. Cool. Great. Who set it up?
It's almost always someone technical. Right. Usually maybe companies have an AI champion or two. Maybe it's someone in IT. I think I see it more on the IT side in Microsoft Copilot enterprise organizations.
But usually it's someone who's technical or an AI champion. But who's actually benefiting from it? Who's going in and, you know, copying and pasting and updating it? It's usually not that person. Right. I would venture in larger organizations.
I would say less than 10% of the output that ultimately gets used in a deliverable, in an end artifact, is coming from someone with domain expertise.
They have very little or zero input on the front end or the back end, and that's where it's the most important. It's usually a technical person who's setting it up, or an AI champion, or something.
You know, "we found this on the internet, this looks good, let's plug it in," and it's good enough.
But the winners are the ones that are pouring in more human expertise, because, like we talked about in our context engineering episode, generic prompts produce generic output.
Your proprietary data, your first-person or first-company reasoning, your decision logic: that changes everything. And you need to start if you haven't already.
You need to document why your company makes decisions, not just what decisions it makes. In the same way, I talk so much, and I know you guys are probably annoyed with me saying this: you need to spend more time looking at the chain of thought in a large language model
than you do on your front-end context engineering, or whatever you do on the back end copying, pasting, producing a document. Right. You need to be able to iterate multiple times. That is how you properly insert your company's domain expertise: through the proper context engineering process.
And if you took our, you know, Prime, Prompt, Polish (PPP) course inside our Inner Circle community, you already know this. But that's where you insert your domain expertise: in going through how a certain large language model will tackle a certain issue.
And you have to be using the right model, the right feature, the right mode for the right problem. Right. And that might look a little different depending on what sector you're working in.
But the chain of thought, and sitting down, right, even if you have a certain setup, or hey, someone from IT or, I don't know, someone from marketing has to set up a certain thing. Well, they need to be sitting down with the actual domain expert, looking at the chain of thought and saying, hey, according to our SOP, according to the skills that we set up, right.
This is what the input is on the context engineering side. This is what the output is. But more importantly, let's walk through the chain of thought and let's see how the model tackled this problem. Right.
And documenting the why: that is one of the most powerful things that the companies that are winning right now are doing. That's why.
And that's just context engineering reframed as a trust strategy, not just for productivity. And to be clear, is doing that going to make your company more productive?
Absolutely. Right.
But that is ultimately going to create more trust in the end.
That is going to be that process right there, right: proper context engineering, using the right model, you know, education, training, all that.
But having your domain expert be involved in the auditing and the iteration, looking at the chain of thought, that is what is going to ultimately lead to less workslop, more trust, and more ROI for your company.
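To make the distinction concrete, here's a minimal Python sketch of what an expert-driven loop could look like, as opposed to a passive human-in-the-loop sign-off. Every name here (build_context, expert_driven_loop, the decision-log fields) is hypothetical, just to show the shape: the documented "why" goes into the context up front, and a domain expert's review feeds back into each iteration.

```python
def build_context(task, decision_log):
    """Assemble a context-engineered prompt that carries the company's
    documented 'why' (decision rationale), not just the 'what'.
    All field names are hypothetical, for illustration only."""
    rationale = "\n".join(
        f"- Decision: {d['what']} | Why: {d['why']}" for d in decision_log
    )
    return (
        "You are drafting on behalf of our team.\n"
        "Follow our documented decision logic:\n"
        f"{rationale}\n\n"
        f"Task: {task}\n"
        "Flag any step where the rationale above does not cover the case."
    )

def expert_driven_loop(task, decision_log, generate, approve, max_rounds=3):
    """Expert-driven loop, as opposed to passive human-in-the-loop:
    the domain expert's approve() callback reviews each draft against
    professional standards and returns (ok, feedback); the feedback
    is folded back into the context for the next iteration."""
    context = build_context(task, decision_log)
    for _ in range(max_rounds):
        draft = generate(context)       # any model call would go here
        ok, feedback = approve(draft)   # the expert actively shapes the loop
        if ok:
            return draft
        context += f"\n\nExpert feedback, revise accordingly: {feedback}"
    return draft  # best effort after max_rounds
```

The design point: the expert's judgment is a structural part of the loop (their feedback changes the context on every pass), not a rubber stamp bolted on at the end.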
And this is, again, why I will continue to call out how bad human in the loop is. Right. I've been saying this, sorry,
way before everyone else: human in the loop is bad. It's one of the worst things, I think, that has happened in or around the AI industry, aside from just an overall general lack of education, training, and basic knowledge.
Human in the loop is bad. Human in the loop leads to work slop. Human in the loop is going to continue to contribute to the everything is fake dilemma and it is going to be bad.
And again, if you are a long-time listener of this podcast, you know this. Maybe it's your first time listening.
Well, let me introduce this to you: expert-driven loops are what we need. Human in the loop means that you can just put any human passively into any AI system and that's some sort of guardrail. That's garbage.
That's failure. That is workslop. You need expert-driven loops. That means the domain expertise, the domain expert, driving the loop. Right. And what does that mean?
Usually when I talk, when anyone talks about a loop, we're talking about an agentic or semi-agentic or AI-powered workflow, something that happens in your company repeatedly, right, over and over again. That's a loop.
An expert driving it proactively, right, because Gartner right now predicts that companies that replaced human agents with generic AI will be forced to rehire by 2028.
And I think human in the loop is just a passive checkpoint, but expert driven loops, I mean experts are actively shaping the context and reviewing against professional standards.
This is why I think it's so important, and we talked about this in previous episodes, to have internal company benchmarks and internal scoping that you are doing not yearly, not quarterly, but probably monthly or bimonthly.
So workslop is only going to get worse and more prevalent. And I think this is important because we are going to have AI-slop-heavy training data. This is something I think people very much overlook.
So, right, if we go back to one of our initial stats from Europol, that projected that up to 90% of online content may be synthetically generated by the end of this year.
And I would say it's probably going to be more than 90%, if I'm being honest. But what does that create? I think that creates trust in AI that we probably shouldn't have.
Because even if you just looked at the relative quality, I don't know if there's a standard for this or a benchmark; if not, I hope someone can create it.
Right, but if you looked at the relative quality of something that was produced on the internet, we'll say pre-AI, right, because even before ChatGPT, you know, AI slop was a huge problem, right. The early GPT technologies that predated ChatGPT were, you know, already out there polluting the internet with garbage, with AI workslop, before anyone knew ChatGPT existed.
I think now, right, the average piece of, I don't know, we'll just say blog posts, right, because that's a lot of what goes into training data.
The average quality of blog posts, I would say, has gone down by two to three X very easily over the last 10 years. And it's because of AI, right. So that obviously creates this regurgitated cycle of poor, or poorer, quality source materials.
So I do think that even more so in the future, yes, the training at the companies is getting better, right. Reinforcement learning with human feedback and other scaling techniques at the big AI frontier labs help with this, you know, help make sure that the training data is higher quality and the training process is better. Yet still.
It is hard for certain people to understand the differences, right. A lot of the time it takes a niche domain expert to be able to differentiate between something that's good enough versus something that is actually humanly good, right.
And I do think that this creates a future where not only is workslop more prevalent, but the baseline of what you would get out of a large language model with decent context engineering is probably going to be, well, a little sloppier
than it maybe was a year ago. So here's what you need to do. Here is the roadmap, all right, on how to get over the everything is fake dilemma and how to properly leverage your company's human expertise to fight AI workslop.
You need to audit every customer facing or potential client facing output and flag anything that's generic or unverifiable.
So that's anything that's currently in production: anything on your website, anything in your pitch decks, anything, you know, in your emails, anything, right, and any draft version that maybe didn't make it to publication. Then you need to identify your strongest experts in those areas.
You should first, you know, flag anything, and then categorize all those things. Then you need to identify your strongest expert and capture how they would actually say this, right.
Let's say, I don't know, it's a sales page for a certain product or service that you just started selling, and it sounds kind of generic, and it's like, well, this sounds like a lot of nothing, right.
Get your people in there and have them tear this apart. You know, they're probably looking at it and being like, look at this garbage the marketing department put out, right.
I can say that; I've been in marketing for a while, right. They maybe don't like it, right. Or maybe it's a company that is producing this for you, right. A lot of companies use third-party agencies.
You need to audit everything. You need to identify your strongest people and capture how they would actually solve that or describe it.
And then you need to build AI systems that bring those people in routinely. It's not a one time thing. It's not a, you know, wash your hands once and you're done.
You need to make sure that you build that domain expertise into your AI operations. That is an expert-driven loop, not just having some random human say, yeah, this is good. That's workslop, and it's crushing your company.
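As a rough illustration of that first audit step, here's a tiny Python sketch that flags generic phrasing and unverifiable claims in a piece of copy. The phrase list is a made-up starting point; a real team would build its own list from actual audits and expert review.

```python
import re

# Hypothetical starter list of generic "workslop" tells; replace these
# with phrases your own audits keep surfacing.
GENERIC_PHRASES = [
    "in today's fast-paced world",
    "unlock the power of",
    "game-changer",
    "seamlessly",
    "cutting-edge",
]

def audit_output(text):
    """First pass of the roadmap: flag generic or unverifiable copy
    so a domain expert can rewrite it in their own voice."""
    flags = []
    lowered = text.lower()
    # Flag known generic filler phrases.
    for phrase in GENERIC_PHRASES:
        if phrase in lowered:
            flags.append(("generic phrase", phrase))
    # Flag vague superlatives that carry no number or source.
    for m in re.finditer(r"\b(best-in-class|world-class|industry-leading)\b", lowered):
        flags.append(("unverifiable claim", m.group(1)))
    return flags

sample = "Our cutting-edge, industry-leading platform seamlessly unlocks value."
for kind, hit in audit_output(sample):
    print(f"{kind}: {hit}")
```

A script like this only triages; the point of the roadmap is that the flagged items then go to your strongest domain expert, not back into the model.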
All right, that's a wrap for this episode in the start here series. Everything is fake and how your company can leverage human expertise and fight AI workslop.
I hope this one was helpful. If so, let me know. Well, let me know by going to StartHereSeries.com. That is going to give you free access to our Inner Circle community. Yeah, you're not going to literally find it anywhere else.
FYI, even if you try to find it, you're not going to find it. That's going to give you free access to our start here series. You can go read and listen to all of the episodes in order.
We have a Spotify playlist inside the community as well. So that's it for today. I hope this is helpful. And thanks for tuning in. Hope to see you back tomorrow and every day for more everyday AI. Thanks y'all.
And that's a wrap for today's edition of Everyday AI. Thanks for joining us. If you enjoyed this episode, please subscribe and leave us a rating. It helps keep us going.
For a little more AI magic, visit YourEverydayAI.com and sign up for our daily newsletter so you don't get left behind. Go break some barriers and we'll see you next time.

Everyday AI Podcast – An AI and ChatGPT Podcast