0:00
[♪ INTRO MUSIC PLAYING ♪]
0:28
Task by Task: The Workflows We're Handing to AI, One Decision at a Time, by Sean
0:35
Martin, Lens 4 at seanmartin.com. I look at the intersection of business,
0:41
technology, and messaging regularly through three lenses: how organizations operate
0:47
and run their programs; how innovation and market forces are reshaping what's
0:52
possible; and how the language and narrative around technology shapes what gets
0:56
funded, prioritized, and trusted. This week all three lenses are pointing at
1:01
the same thing and the picture is clearer than most people are comfortable
1:05
admitting. Nobody decided to remove the human from the workflow. That's the
1:10
part worth sitting with. In boardrooms, in budget reviews, in vendor evaluations,
1:15
nobody stood up and said, let's build a business process with no humans in the
1:20
loop. What happened instead was a series of smaller decisions, each of them
1:25
reasonable, each of them local, each of them defensible. An HR team bought a
1:30
screening tool to handle application volume, a legal department licensed an AI
1:35
drafting platform to reduce outside counsel spend. A finance team deployed
1:40
automated invoice processing to close faster. None of those decisions on their
1:45
own looks like giving up control. But map them together, task by task across a
1:51
single workflow and something significant emerges. The human is already
1:56
optional across most of the process. That's what I want to examine this week. Not
2:02
whether AI should take on more of the work. That debate is largely settled in the
2:06
data. But whether organizations have consciously mapped what they've actually
2:10
handed over and what that means for how businesses operate, compete, and carry
2:15
accountability going forward. Let's start with the business operations lens. Are
2:21
we delegating efficiently or giving up control? The honest answer is both and most
2:27
organizations can't tell the difference yet. Let me trace two workflows that most
2:32
businesses run every week, not as edge cases, not as experiments, but as normal
2:37
operating processes with deployed tools and real outcomes. The first is the
2:42
hiring workflow. A job requisition opens. What happens next used to require a
2:49
recruiter's judgment at every step. Here's what the same process looks like with
2:54
current tools. The first task is resume screening. Unilever reported saving over
3:00
£1 million annually after deploying AI screening tools across its
3:04
hiring pipeline. McDonald's rolled out Paradox's conversational AI, called
3:09
Olivia across thousands of locations to handle applicant screening and
3:13
scheduling. Candidates move from application to interview without a human
3:17
recruiter touching the file. AI tools now rank and filter applicants in
3:22
seconds against criteria a human set once, and that screening now runs autonomously at
3:27
scale. The second task is interview scheduling. A large US financial services firm
3:34
using GoodTime reduced time-to-fill by weeks by automating calendar
3:38
coordination alone. The moment a candidate cleared screening, an interview
3:43
invite went out within hours. No recruiter coordination required. The third
3:48
task is first-round interviewing and assessment. HireVue, used by JP Morgan,
3:53
Goldman Sachs, Amazon, Microsoft, Emirates Airlines, and hundreds of others,
3:59
conducts asynchronous first round interviews with no human present. The candidate
4:04
records answers to preset questions on their own schedule. The AI analyzes speech,
4:10
language, and behavioral indicators and returns a ranked score. No recruiter
4:15
watches the recording until after the AI has already filtered and ranked the
4:19
pool. Emirates Airlines reduced its hiring cycle from 60 days to seven using
4:25
this approach. The human interviewer enters at round two, but by then the AI
4:30
has already determined who gets that meeting. The fourth task is offer
4:35
generation and outreach. Recruiting platforms, including Lindy and Recruiterflow's
4:40
agent mode, draft, personalize, and send offer communications and follow
4:45
up sequences autonomously. The offer letter is written before a recruiter opens
4:50
their inbox. The fifth task is onboarding initiation. End-to-end workflow
4:56
automation, deployable today through platforms like n8n, covers the full
5:02
pipeline from CV submission through assessment, scheduling, and status
5:06
tracking without human intervention at any step. Five tasks. Five separate
5:13
vendor decisions. Each one made independently, each one with its own ROI story.
5:18
And together, a process where a candidate can move from application to offer
5:23
without a single human making an active decision along the way. Now let's run
5:29
the same analysis across a legal department's standard contracting process.
5:32
The first task is matter intake and triage. Checkbox AI handles incoming legal
5:39
requests through intelligent chatbots that capture context, ask clarifying
5:44
questions, and route matters to the right team automatically. No paralegal
5:49
spending the morning clearing an email queue. The second task is legal
5:53
research. Harvey AI, now embedded in Am Law 100 firms, surfaces relevant
5:59
case law, statutes, and precedent across large document sets in minutes.
6:05
Lexis+ AI provides contextual legal reasoning on demand. What used to be
6:10
a junior associate's full day is now a prompt. The third task is contract
6:15
drafting. Spellbook drafts contracts inside Microsoft Word. LegalOn users
6:21
report NDA reviews dropping from two hours to 30 minutes. One managing
6:26
partner reported a 40% increase in billing capacity, not from doing
6:30
better work, but because AI wrote the first draft on every matter. The fourth
6:35
task is contract review and redlining. Luminance identifies anomalies and
6:40
flags deviations from playbooks across thousands of contracts simultaneously.
6:46
Kira extracts specific clauses at scale. Across the category, AI contract
6:51
review tools are reducing review time by 75 to 85%. A task that defined legal
6:59
practice for decades is compressing toward minutes. The fifth task is approval
7:04
routing and post-execution management. ContractPodAi handles routing,
7:09
obligation tracking, and compliance monitoring after signature. Ironclad
7:14
manages the full contract lifecycle: renewals, expirations, obligation
7:20
triggers, without a paralegal maintaining a spreadsheet. Again, five tasks, five
7:26
products, five separate procurement decisions, and end-to-end, a contracting
7:32
workflow where a contract can move from request to executed agreement without a
7:37
lawyer authoring a single original clause. And this pattern runs across the
7:41
business, not just in these two functions. Finance has it. AI invoice processing
7:47
platforms handle capture, validation, approval routing, and payment scheduling
7:52
end-to-end, with one hospital association reporting a reduction in batch
7:56
processing time from 10 hours to minutes. Customer service has it. Gartner
8:02
projects that agentic AI will resolve 80% of common customer service issues
8:07
without human intervention by 2029, up from effectively 0 in 2024. Security
8:14
operations has it, and I've had direct conversations with the vendors
8:18
building these tools. Edward Wu, founder of Dropzone AI, told me ahead of Black
8:24
Hat USA 2025, "Nobody wants to be a Tier 1 analyst forever."
8:30
Subo Guha of Stellar Cyber described a digital army of AI agents that
8:34
filter 70 to 80% of alerts before a human analyst sees them. The pattern looks
8:40
the same whether the workflow is closing a contract or closing a security
8:44
incident. The business question this raises isn't whether the tools work, most of
8:49
them do. The question is whether organizations have a clear deliberate answer to
8:54
which tasks require a human decision, and why? Because right now many
8:59
organizations are answering that question by default, one purchase at a time
9:03
rather than by design. Gartner puts a number on the trajectory: at least 15% of
9:10
day-to-day work decisions will be made autonomously through agentic AI by 2028
9:15
up from essentially zero in 2024. That isn't a distant forecast. It's a projection
9:21
from a baseline that is already moving inside most mid to large enterprises
9:25
today. Now the innovation and market shifts lens. What is the market building
9:31
and how fast is it moving? The market knows exactly what it's building. It's not
9:36
naming it directly, but the architecture is unmistakable. Gartner predicts that
9:41
40% of enterprise applications will include integrated task-specific AI agents
9:46
by the end of 2026, up from less than 5% today. Not AI assistants that help people
9:53
do their jobs. Agents that do the job within defined parameters without waiting
9:58
for a human to initiate each step. By 2035, Gartner's best case scenario has
10:04
agentic AI driving approximately $450 billion in enterprise software revenue,
10:09
roughly 30% of the entire market. Notice what the market is selling: task
10:15
specific. Not workflow replacing, not role eliminating. Task specific. One task at
10:24
a time, each one rationalized locally, each one with a discrete budget line and an
10:28
ROI model. The cumulative effect, a workflow that no longer requires human
10:34
participation, isn't what's being sold. It's what's being assembled. This is where
10:39
the business opportunity gets genuinely interesting and where the strategic gap
10:44
between leading and lagging organizations is opening up. The companies deploying
10:48
these tools aggressively are not doing so because they ran an experiment. They're
10:53
doing it because the economics are compelling in ways that compound over time.
10:58
Recruiterflow data shows recruiters saving six or more hours per week, a 33%
11:03
productivity increase per person. LegalOn users report reducing outside
11:08
counsel dependency by thousands of dollars per contract cycle.
11:12
Individually, those numbers are meaningful. Across a workforce, across a fiscal
11:17
year, they represent a structural cost advantage that competitors without these
11:21
tools cannot match. But the more consequential shift isn't cost reduction. It's
11:27
speed and scale. A hiring process that moves at machine speed doesn't just save
11:33
money. It changes who gets the best candidates. A legal team that can review,
11:38
red line, and execute contracts in minutes rather than days doesn't just
11:43
reduce billable hours. It changes how fast the business can move on deals. The
11:49
organizations that figured this out early are already operating at a different
11:53
tempo than those still treating AI as a pilot program. Forrester projects that
11:58
distributed AI workflows will capture 45% of enterprise workload capacity by
12:03
2026. That's not adoption at the margin. That's a structural shift in how
12:08
work gets done. The complication, and it's a real one, is that the vendor market is
12:14
significantly ahead of organizational readiness for what these tools actually
12:18
do. Gartner estimates that only around 130 of the thousands of companies now
12:23
claiming to offer agentic AI are delivering genuine agentic capability. The
12:29
rest are rebranding existing automation and RPA under a new label. A genuine
12:35
agentic system reasons across tasks, adjusts based on outcomes, and handles
12:40
exceptions without a human rewriting the playbook. A rebranded chatbot executes
12:45
a fixed sequence and breaks at the edge case. Buying the latter under the
12:49
belief it's the former is how organizations end up with expensive tools that
12:53
create new workflow fragility instead of removing old bottlenecks. The
12:58
cyber security sector is already several steps ahead of most enterprise
13:02
functions on this curve, and the outcomes are real, not experimental. As I
13:08
wrote in the first Lens 4 article, "The 72-Minute Gap," organizations deploying
13:13
agentic SOC automation are realizing documented, measurable budget savings. Dropzone
13:20
AI's Edward Wu described it plainly when we spoke at Black Hat USA 2025. At
13:26
roughly $36,000 per year, their platform ran 4,000 automated alert
13:32
investigations. A number that simply cannot be staffed at comparable cost.
13:37
Subo Guha of Stellar Cyber, in two separate conversations with me at RSAC 2025
13:45
and Black Hat USA 2025, described their digital army of AI agents filtering 70 to
13:51
80% of incoming alerts, allowing analysts to focus on the fraction that
13:55
requires human judgment. Both companies are emphatic that the value isn't
14:00
hypothetical. The savings are already in the operating budget. The market is
14:06
also generating the next layer of infrastructure, which is itself a leading
14:10
indicator of how far adoption has already gone. When AI agent identity
14:15
governance becomes a funded product category, and it has, it means organizations
14:20
have already deployed enough autonomous agents into production that they've
14:23
discovered they can't see what those agents are doing or control what systems
14:27
they can reach. Token Security, named a finalist in the RSAC 2026 Innovation
14:34
Sandbox, was built entirely around this problem: governing AI agent identities
14:39
with the same rigor applied to human users: continuous discovery, intent-aware
14:45
access controls, lifecycle management from deployment through decommissioning.
14:49
Moderna has already scaled from 750 to more than 3,000 internal AI agents in a
14:56
single year. The governance market doesn't emerge until the adoption that
15:00
requires governance is already underway. That tells you where the actual
15:04
baseline is. Here is the structural shift worth watching closely. Right now,
15:09
organizations are assembling workflows task by task through separate vendor
15:14
decisions. The next phase of the market eliminates that friction entirely, and the
15:20
infrastructure for it is already being built. What's emerging is the
15:24
agentic orchestration platform. A single governed environment where workflows
15:29
can be defined in plain language, purpose-built agents can be selected,
15:33
configured, guardrailed, and monitored, and the cumulative workflow is visible
15:38
as a designed whole rather than discovered after the fact as an accumulated
15:43
pile of vendor contracts. Nintex, which serves more than 7,000 organizations
15:49
across 100 countries, announced its agentic business orchestration platform in
15:53
September 2025, explicitly positioning it as a single governed layer,
15:58
unifying legacy systems, manual processes, and AI agents. Their upcoming Agent
16:05
Designer feature enables IT leaders and business technologists to build,
16:09
evaluate, and orchestrate specialized agents in a low-code environment
16:14
without writing code and without leaving the governance framework. IDC's
16:19
Maureen Fleming framed it directly: Agentic business orchestration represents a
16:24
shift toward coordinating people, systems, and AI agents in governed ways that
16:29
ensure automation and AI deliver measurable results at scale. Microsoft is
16:34
moving in the same direction at enterprise scale. Copilot Studio, already
16:40
connected to more than 1,400 systems, allows agents to be built in natural
16:44
language, configured, monitored, and governed from a single interface. Every
16:50
agent now gets a Microsoft Entra Agent ID, an identity credential that enables
16:55
governance across the fleet. Microsoft's own framing for 2026 is pointed: the
17:01
transition is from AI that helps people do work faster to AI that handles work
17:06
on behalf of the organization, with humans escalating into exceptions rather
17:10
than executing by default. The pattern is the same whether you're watching
17:15
Nintex, Microsoft, Salesforce's Agentforce, or ServiceNow's agentic
17:20
capabilities. The market is converging on a platform model where the workflow
17:25
is defined up front in plain language. Agents are scoped to specific tasks with
17:30
explicit permissions. Guard rails are set before deployment rather than bolted
17:35
on after. Human oversight points are designed in rather than assumed, and the
17:40
full workflow is auditable and measurable as a system. The organizations that get
17:45
ahead of this transition will enter it with clear workflow maps and defined
17:49
accountability structures. The ones that don't will find themselves importing
17:54
their accumulated default choices into the new architecture and inheriting all
17:58
the governance gaps that came with them. Now the language and messaging lens.
18:03
Why does everyone say augment when the direction is replace? Because augment
18:08
gets funded, replace gets scrutinized, and the actual outcome is somewhere
18:13
neither word honestly describes. There is a phrase that appears in virtually
18:17
every vendor pitch, analyst briefing, and enterprise communication about AI
18:22
and automation: "We augment human capabilities. We don't replace them." It surfaces
18:28
in hiring tech. It surfaces in legal AI. It surfaces in financial
18:34
automation and customer service platforms. It is, at this point, essentially
18:39
obligatory in the category. The phrase is doing real work. It manages three
18:44
audiences simultaneously. Employees watching their job functions shift,
18:49
procurement committees answering to boards who want to see AI investment
18:52
justified, and regulators watching closely how AI is being deployed in
18:57
consequential decisions. Augment, not replace, threads all three needles
19:02
cleanly. It implies human oversight remains intact. Accountability structures
19:07
are unchanged, and the organization is being measured and responsible. But
19:13
walk the data back against that framing, and it doesn't hold up. Swimlane
19:17
projects AI will resolve or escalate over 90% of Tier 1 security alerts by
19:22
2026. Not assist with them. Resolve them. Gartner projects autonomous AI
19:29
handling 80% of customer service issues without human involvement by 2029.
19:35
The legal contract review tools marketing a 75 to 85% time reduction aren't
19:40
augmenting lawyers. They're doing the task and asking the lawyer to review the
19:45
output. The hiring platforms aren't helping recruiters screen faster. They're
19:50
screening and asking the recruiter to validate the ranking. When the AI
19:54
handles 80% of the task and the human handles exceptions after the fact,
19:59
that's not augmentation in any meaningful operational sense. That's
20:04
oversight of an autonomous system. The distinction has direct implications
20:09
for where accountability lives, what skills the organization needs to
20:12
maintain, and what happens when the output is wrong. I explored a version of
20:17
this tension from an unexpected angle on the Music Evolves podcast in a
20:21
conversation with Chandler Lawn, AI innovation and law fellow at the
20:26
University of Texas School of Law, Drew Thurlow, adjunct professor at Berklee
20:30
College of Music, and Puyia Parto-Navid, partner at Seyfarth Shaw. We were
20:36
talking about AI-generated music and who owns the output, but the
20:40
underlying question was the same one running through every enterprise workflow.
20:44
When the system produces the thing that used to require a human, what does
20:48
the human's role actually become? The music industry is a few years ahead of most
20:54
enterprise functions on this curve. Universal Music Group and Warner Music
20:58
Group both reached landmark settlements with AI music platforms in late
21:03
2025. The answers they're landing on involve drawing explicit lines around
21:08
what requires human creative judgment and what can be systematically produced.
21:12
Enterprise operations will need to draw the same kinds of lines, probably with
21:17
less drama, but with the same underlying logic. The language gap has a practical
21:23
consequence beyond messaging. When leadership describes every AI deployment
21:28
as augmentation, it becomes difficult to have honest internal conversations
21:32
about what the organization has actually delegated, where the accountability
21:36
gaps are, and what happens when a consequential decision turns out to be
21:40
wrong. That conversation is easier to have before the workflow is fully
21:45
assembled than after. Gartner's prediction that over 40% of
21:49
agentic AI projects will be canceled by the end of 2027 is worth reading through
21:54
this lens. It's not because the technology fails. It's because organizations
21:59
bought capability without building the governance, accountability structures,
22:03
and organizational clarity to run it responsibly. The language that got the
22:08
tool funded made those harder conversations easier to avoid at purchase time.
22:12
They don't stay avoided. At Black Hat USA 2025, Marco Ciappelli and I talked
22:19
after walking the floor about exactly this. When every vendor claims the same
22:23
positioning, the actual distinctions disappear from the buyer's view. In our post
22:29
show episode, we called it the marketing milkshake problem. Every vendor's
22:33
message going into the same promotional blender and coming out tasting the
22:37
same, regardless of what the underlying technology actually does. The
22:42
agent-washing problem isn't just a market integrity issue. It's a decision quality
22:46
issue for every organization trying to figure out what they're actually
22:50
acquiring and what decision authority they're actually transferring. And now the
22:55
fourth lens. When did you decide to hand over control? And who was in the room
22:59
when you did? Here is what I keep coming back to when I look at all three lenses
23:04
together. And it's the thing I find myself saying in conversations that rarely
23:08
makes it into polished conference presentations. We are already past the point
23:13
of no return. The human optional workflow is not the exception being cautiously
23:17
piloted. It is the operational default for hiring, contracting, finance, customer
23:24
service, and security operations in organizations that made five individually
23:28
rational procurement decisions and never looked at what those decisions
23:31
assembled. That's not naivety. I want to be clear about that. The organizations
23:38
deploying these tools are not confused about what they're buying. What they
23:42
haven't done and what the vendors selling to them have never required them to do
23:46
is map the cumulative shape of those decisions before committing to them. And I
23:51
don't think that's an accident. When I look at the language the vendor market has
23:55
built around this transition, "augment, not replace," "human in the loop," "AI
24:00
assisted," I don't read it as imprecise. I read it as precise in exactly the
24:05
right direction. These are companies staffed with product managers, lawyers, and
24:10
communications teams who understand exactly what their tools do when deployed at
24:14
scale. Augment, not replace, threads every needle it needs to thread: employee
24:20
relations, procurement approval, regulatory scrutiny, board optics. It's not a
24:25
description. It's a strategy, and it has worked because organizations bought the
24:30
framing along with the capability and now have workflows they couldn't honestly
24:34
describe as augmented with a straight face. So where does accountability land
24:40
when an AI-assembled workflow produces a bad outcome? Right now, nowhere. That is
24:46
not hyperbole. The procurement signer approved a task specific tool with its own
24:51
contained ROI case. The vendor sold a product that performs as specified. The
24:58
workflow that those tools assembled collectively, that sits in a gap between
25:02
contracts, between org chart lines, between the legal definitions anyone drafted
25:06
when they wrote the terms of service. Nobody owns the workflow. Everybody owns a
25:11
task. That should be alarming. Not because bad outcomes are inevitable, but
25:17
because the accountability structure that would catch and correct a bad outcome
25:20
before it becomes a crisis. That structure doesn't exist yet in most
25:24
organizations. The efficiency gains are real and already in the budget. The
25:30
accountability architecture is still theoretical. What I'm watching closely is
25:34
whether the agentic orchestration platform changes this dynamic or accelerates
25:39
it. My honest read, both depending on the organization. A small group of mature
25:45
deliberate organizations, the ones who were already doing workflow mapping
25:49
before procurement, who already had security and legal at the design table, will
25:55
use Nintex, Copilot Studio, and platforms like them to do exactly what those
25:59
platforms were designed to enable. Define the workflow first, configure the
26:04
agents inside it, set the guardrails before deployment, and maintain a complete
26:09
audit trail of what was delegated and why. For those organizations, the platform
26:15
genuinely forces the design conversation, because you cannot configure
26:19
guardrails without deciding what you're guarding. For everyone else, the platform will
26:23
make accumulation faster and cheaper. The same five individual decisions, each
26:28
locally rational, collectively unexamined, will just be easier to make in one
26:33
place. Here's the structural reality I think most organizations are not yet
26:38
reckoning with. The auditors haven't arrived yet. The regulatory frameworks that
26:43
will eventually require organizations to account for autonomous workflow
26:47
decisions, who authorized them, under what criteria, with what human oversight, and
26:52
how exceptions are handled, those frameworks are being drafted right now. GDPR
26:58
took years to land on AI. The EU AI Act is already in motion. The US regulatory
27:05
posture is slower but not absent. The window between "we accumulated this
27:10
workflow through procurement" and "we need to demonstrate we designed it with
27:15
intention" is open, but it is not going to stay open. The organizations that use
27:20
that window to map what they've built, establish where accountability sits, and
27:24
make explicit decisions about what requires human judgment, not because the AI
27:29
can't do it, but because the organization has determined that accountability
27:33
requires a person will be positioned to operate without disruption when the
27:37
frameworks arrive. The ones that don't will discover that the workflow they
27:42
built by default is not the workflow they would have chosen under scrutiny. The
27:46
vendors knew what they were building. The buyers in most cases didn't ask the
27:51
right questions. The auditors haven't arrived yet. That window is closing. If this
27:56
analysis is useful to you, the full article with all references, data points, and
28:01
links to every podcast conversation mentioned is at seanmartin.com. Search for
28:06
Lens 4, or find me directly at seanmartin.com. And if these are the kinds of
28:11
conversations you want more of, the Redefining CyberSecurity podcast is where I
28:16
explore them in depth every week with the people building, buying, and breaking
28:21
these systems. You can find it wherever you listen to podcasts or at
28:25
redefiningcybersecuritypodcast.com. Thanks for listening.