0:00
RSAC 2026 Conference, where insights drive action, solutions emerge, and community connections grow.
0:09
Be there March 23rd through March 26th. Secure your spot at rsaconference.com/
0:51
The workflows we're handing to AI, one decision at a time, by Sean Martin, Lens4 at seanmartin.com.
1:00
I look at the intersection of business, technology, and messaging regularly through three lenses.
1:06
How organizations operate and run their programs.
1:09
How innovation and market forces are reshaping what's possible and how the language and narrative
1:14
around technology shapes what gets funded, prioritized, and trusted.
1:19
This week, all three lenses are pointing at the same thing, and the picture is clearer
1:24
than most people are comfortable admitting.
1:27
Nobody decided to remove the human from the workflow.
1:29
That's the part worth sitting with.
1:32
In boardrooms, in budget reviews, in vendor evaluations, nobody stood up and said, let's
1:38
build a business process with no humans in the loop.
1:41
What happened instead was a series of smaller decisions, each of them reasonable, each of
1:46
them local, each of them defensible.
1:49
An HR team bought a screening tool to handle application volume. A legal department licensed
1:55
an AI drafting platform to reduce outside counsel spend.
1:59
A finance team deployed automated invoice processing to close faster.
2:04
None of those decisions on its own looks like giving up control, but map them together
2:10
task by task across a single workflow and something significant emerges.
2:16
The human is already optional across most of the process.
2:20
That's what I want to examine this week.
2:22
Not whether AI should take on more of the work.
2:25
That debate is largely settled in the data.
2:27
But whether organizations have consciously mapped what they've actually handed over and
2:32
what that means for how businesses operate, compete, and carry accountability going forward.
2:38
Let's start with the business operations lens.
2:41
Are we delegating efficiently or giving up control?
2:45
The honest answer is both.
2:47
And most organizations can't tell the difference yet.
2:50
Let me trace two workflows that most businesses run every week, not as edge cases, not as experiments,
2:57
but as normal operating processes with deployed tools and real outcomes.
3:02
The first is the hiring workflow. A job requisition opens.
3:07
What happens next used to require a recruiter's judgment at every step.
3:12
Here's what the same process looks like with current tools.
3:16
The first task is resume screening.
3:18
Unilever reported saving over 1 million pounds annually after deploying AI screening tools
3:24
across its hiring pipeline.
3:26
McDonald's rolled out Paradox's conversational AI, called Olivia, across thousands of locations
3:32
to handle applicant screening and scheduling.
3:35
Candidates move from application to interview without a human recruiter touching the file.
3:40
AI tools now rank and filter applicants in seconds against criteria a human set once,
3:46
and that now runs autonomously at scale.
3:49
The second task is interview scheduling. A large US financial services firm using GoodTime
3:55
reduced time-to-fill by weeks by automating calendar coordination alone.
4:01
The moment a candidate cleared screening, an interview invite went out within hours.
4:06
No recruiter coordination required.
4:08
The third task is first round interviewing and assessment.
4:11
HireVue, used by JP Morgan, Goldman Sachs, Amazon, Microsoft, Emirates Airlines,
4:18
and hundreds of others, conducts asynchronous first round interviews with no human present.
4:24
The candidate records answers to preset questions on their own schedule.
4:28
The AI analyzes speech, language, and behavioral indicators and returns a ranked score.
4:35
No recruiter watches the recording until after the AI has already filtered and ranked the candidates.
4:41
Emirates Airlines reduced its hiring cycle from 60 days to 7 using this approach.
4:47
The human interviewer enters at round 2, but by then, the AI has already determined
4:51
who gets that meeting.
4:54
The fourth task is offer generation and outreach.
4:58
Recruiting platforms including Lindy and Recruiterflow's agent mode draft, personalize, and
5:04
send offer communications and follow-up sequences autonomously.
5:08
The offer letter is written before a recruiter opens their inbox.
5:12
The fifth task is onboarding initiation.
5:16
End-to-end workflow automation, deployable today through platforms like n8n, covers the
5:22
full pipeline from CV submission through assessment, scheduling, and status tracking,
5:28
without human intervention at any step.
5:32
Five separate vendor decisions.
5:35
Each one made independently, each one with its own ROI story.
5:40
And together, a process where a candidate can move from application to offer without
5:44
a single human making an active decision along the way.
5:48
Now let's run the same analysis across a legal department's standard contracting process.
5:54
The first task is matter intake and triage.
5:57
Checkbox AI handles incoming legal requests through intelligent chatbots that capture context,
6:03
ask clarifying questions, and route matters to the right team automatically.
6:08
No paralegal spending the morning clearing an email queue.
6:12
The second task is legal research.
6:14
Harvey AI, now embedded in Am Law 100 firms, surfaces relevant case law, statutes, and
6:21
precedent across large document sets in minutes.
6:25
Lexis+ AI provides contextual legal reasoning on demand.
6:30
What used to be a junior associate's full day is now a prompt.
6:34
The third task is contract drafting.
6:37
Spellbook drafts contracts inside Microsoft Word.
6:40
LegalOn users report NDA reviews dropping from two hours to 30 minutes.
6:46
One managing partner reported a 40% increase in billing capacity, not from doing better
6:50
work, but because AI wrote the first draft on every matter.
6:55
The fourth task is contract review and redlining.
6:58
Luminance identifies anomalies and flags deviations from playbooks across thousands of contracts.
7:06
Kira extracts specific clauses at scale.
7:10
Across the category, AI contract review tools are reducing review time by 75 to 85%.
7:18
A task that defined legal practice for decades is compressing toward minutes.
7:23
The fifth task is approval routing and post execution management.
7:28
ContractPodAi handles routing, obligation tracking, and compliance monitoring after signature.
7:34
Ironclad manages the full contract lifecycle: renewals, expirations, obligation triggers,
7:41
without a paralegal maintaining a spreadsheet.
7:43
Again, five tasks, five products, five separate procurement decisions, and end-to-end, a contracting
7:52
workflow where a contract can move from request to executed agreement without a lawyer
7:57
authoring a single original clause.
8:00
And this pattern runs across the business, not just in these two functions.
8:06
AI invoice processing platforms handle capture, validation, approval routing, and payment
8:12
scheduling end-to-end, with one hospital association reporting a reduction in batch processing
8:17
time from 10 hours to minutes.
8:20
Customer service has it.
8:22
Gartner projects that agentic AI will resolve 80% of common customer service issues without
8:28
human intervention by 2029, up from effectively zero in 2024.
8:34
Security operations has it.
8:36
And I've had direct conversations with the vendors building these tools.
8:41
Edward Wu, founder of Dropzone AI, told me ahead of Black Hat USA 2025 that nobody
8:47
wants to be a Tier 1 analyst forever.
8:51
Subo Guha of Stellar Cyber described a digital army of AI agents that filter 70 to 80%
8:56
of alerts before a human analyst sees them.
9:00
The pattern looks the same whether the workflow is closing a contract or closing a security alert.
9:06
The business question this raises isn't whether the tools work; most of them do.
9:10
The question is whether organizations have a clear, deliberate answer to which tasks require
9:16
a human decision and why, because right now many organizations are answering that question
9:21
by default, one purchase at a time rather than by design.
9:26
Gartner puts a number on the trajectory: at least 15% of day-to-day work decisions will
9:32
be made autonomously through agentic AI by 2028, up from essentially zero in 2024.
9:39
That isn't a distant forecast, it's a projection from a baseline that is already moving inside
9:44
most mid to large enterprises today.
9:47
Now the innovation and market shifts lens.
9:50
What is the market building and how fast is it moving?
9:54
The market knows exactly what it's building. It's not naming it directly, but the architecture is unmistakable.
10:01
Gartner predicts that 40% of enterprise applications will include integrated task-specific AI agents
10:06
by the end of 2026, up from less than 5% today.
10:11
Not AI assistants that help people do their jobs, but agents that do the job, within defined
10:17
parameters without waiting for a human to initiate each step.
10:21
By 2035, Gartner's best-case scenario has agentic AI driving approximately $450 billion
10:28
in enterprise software revenue, roughly 30% of the entire market.
10:33
Notice what the market is selling: task-specific. Not workflow-replacing, not role-eliminating.
10:43
One task at a time, each one rationalized locally, each one with a discrete budget line.
10:51
The cumulative effect, a workflow that no longer requires human participation, isn't what's
10:56
being sold, it's what's being assembled.
10:59
This is where the business opportunity gets genuinely interesting, and where the strategic
11:04
gap between leading and lagging organizations is opening up.
11:07
The companies deploying these tools aggressively are not doing so because they ran an experiment.
11:13
They're doing it because the economics are compelling in ways that compound over time.
11:18
Recruiterflow data shows recruiters saving six or more hours per week, a 33% productivity
11:24
increase per person.
11:26
LegalOn users report reducing outside counsel dependency by thousands of dollars per contract.
11:32
Individually, those numbers are meaningful.
11:36
Across a workforce, across a fiscal year, they represent a structural cost advantage
11:40
that competitors without these tools cannot match.
11:44
But the more consequential shift isn't cost reduction.
11:47
It's speed and scale.
11:50
A hiring process that moves at machine speed doesn't just save money.
11:54
It changes who gets the best candidates.
11:57
A legal team that can review, red line, and execute contracts in minutes rather than
12:02
days doesn't just reduce billable hours.
12:05
It changes how fast the business can move on deals.
12:09
The organizations that figured this out early are already operating at a different tempo
12:13
than those still treating AI as a pilot program.
12:18
Forrester projects that distributed AI workflows will capture 45% of enterprise workload capacity.
12:25
It's not adoption at the margin.
12:27
That's a structural shift in how work gets done.
12:30
The complication, and it's a real one, is that the vendor market is significantly ahead
12:35
of organizational readiness for what these tools actually do.
12:39
Gartner estimates that only around 130 of the thousands of companies now claiming to
12:44
offer agentic AI are delivering genuine agentic capability.
12:49
The rest are rebranding existing automation and RPA under a new label.
12:54
A genuine agentic system reasons across tasks, adjusts based on outcomes, and handles
13:00
exceptions without a human rewriting the playbook.
13:03
A rebranded chatbot executes a fixed sequence and breaks at the edge case.
13:08
Buying the latter under the belief it's the former is how organizations end up with
13:12
expensive tools that create new workflow fragility instead of removing old bottlenecks.
13:18
The cybersecurity sector is already several steps ahead of most enterprise functions on
13:23
this curve, and the outcomes are real, not experimental.
13:28
As I wrote in the first Lens4 article, "The 72-Minute Gap," organizations deploying agentic
13:34
SOC automation are realizing documented, measurable budget savings.
13:40
Dropzone AI's Edward Wu described it plainly when we spoke at Black Hat USA 2025.
13:47
At roughly $36,000 per year, their platform ran 4,000 automated alert investigations,
13:54
a number that simply cannot be staffed at comparable cost.
13:58
Subo Guha of Stellar Cyber in two separate conversations with me at RSAC 2025 and Black
14:06
Hat 2025 described their digital army of AI agents filtering 70 to 80% of incoming alerts,
14:13
allowing analysts to focus on the fraction that require human judgment.
14:18
Both companies are emphatic that the value isn't hypothetical.
14:22
The savings are already in the operating budget.
14:25
The market is also generating the next layer of infrastructure, which is itself a leading
14:30
indicator of how far adoption has already gone.
14:34
When AI agent identity governance becomes a funded product category, and it has, it
14:39
means organizations have already deployed enough autonomous agents into production that
14:44
they've discovered they can't see what those agents are doing or control what systems they touch.
14:49
Token Security, named a finalist in the RSAC 2026 Innovation Sandbox, was built entirely
14:56
around this problem, governing AI agent identities with the same rigor applied to human users.
15:03
Continuous discovery, intent-aware access controls, lifecycle management from deployment
15:09
through decommissioning.
15:11
Moderna has already scaled from 750 to more than 3,000 internal AI agents in a single year.
15:18
The governance market doesn't emerge until the adoption that requires governance is already in place.
15:23
That tells you where the actual baseline is.
15:26
Here is the structural shift worth watching closely.
15:29
Right now, organizations are assembling workflows task by task through separate vendor decisions.
15:36
The next phase of the market eliminates that friction entirely, and the infrastructure
15:41
for it is already being built.
15:44
What's emerging is the agentic orchestration platform.
15:47
A single governed environment where workflows can be defined in plain language, purpose-built
15:52
agents can be selected, configured, guardrailed, and monitored, and the cumulative workflow
15:58
is visible as a designed whole rather than discovered after the fact as an accumulated
16:03
pile of vendor contracts.
16:06
Nintex, which serves more than 7,000 organizations across 100 countries, announced its agentic
16:12
business orchestration platform in September 2025, explicitly positioning it as a single
16:18
governed layer, unifying legacy systems, manual processes, and AI agents.
16:24
Their upcoming Agent Designer feature enables IT leaders and business technologists to build,
16:30
evaluate, and orchestrate specialized agents in a low-code environment without writing code
16:35
and without leaving the governance framework.
16:38
IDC's Maureen Fleming framed it directly.
16:42
Agentic business orchestration represents a shift toward coordinating people, systems,
16:47
and AI agents in governed ways that ensure automation and AI deliver measurable results.
16:54
Microsoft is moving in the same direction at enterprise scale.
16:58
Copilot Studio, already connected to more than 1,400 systems, allows agents to be built
17:04
in natural language, configured, monitored, and governed from a single interface.
17:10
Every agent now gets a Microsoft Entra Agent ID, an identity credential that enables governance.
17:18
Microsoft's own framing for 2026 is pointed.
17:21
The transition is from AI that helps people do work faster to AI that handles work on
17:26
behalf of the organization, with humans escalating into exceptions rather than executing by default.
17:33
The pattern is the same whether you're watching Nintex, Microsoft, Salesforce's Agentforce,
17:39
or ServiceNow's agentic capabilities.
17:42
The market is converging on a platform model where the workflow is defined up front
17:46
in plain language.
17:48
Agents are scoped to specific tasks with explicit permissions.
17:52
Guard rails are set before deployment rather than bolted on after.
17:57
Human oversight points are designed in rather than assumed, and the full workflow is auditable
18:01
and measurable as a system.
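To make that platform pattern concrete, here is a minimal sketch of "agents scoped to specific tasks with explicit permissions and pre-set guardrails." Every name in it is hypothetical; the vendors mentioned above each have their own interfaces, so this is an illustration of the design idea, not any product's actual API.

```python
from dataclasses import dataclass, field

# Hypothetical model of the orchestration pattern described above: each
# agent is scoped to one task, permissions and guardrails are declared
# before deployment, and every delegation is logged so the assembled
# workflow stays auditable as a system.

@dataclass
class Agent:
    name: str
    task: str
    permissions: set = field(default_factory=set)  # actions the agent MAY take
    guardrail: object = None                       # pre-set check on outputs

@dataclass
class Workflow:
    name: str
    agents: list = field(default_factory=list)
    audit_log: list = field(default_factory=list)

    def run_task(self, agent, action, payload):
        # Permission check happens before the agent acts, not after.
        if action not in agent.permissions:
            self.audit_log.append((agent.name, action, "ESCALATED_TO_HUMAN"))
            return None
        result = payload  # stand-in for the agent's real work
        if agent.guardrail and not agent.guardrail(result):
            self.audit_log.append((agent.name, action, "BLOCKED_BY_GUARDRAIL"))
            return None
        self.audit_log.append((agent.name, action, "EXECUTED"))
        return result

# Example: a screening agent allowed to rank candidates but not to reject them.
screener = Agent(
    name="resume-screener",
    task="screening",
    permissions={"rank_candidates"},
    guardrail=lambda result: len(result) > 0,  # never accept an empty ranking
)
hiring = Workflow(name="hiring", agents=[screener])
hiring.run_task(screener, "rank_candidates", ["alice", "bob"])  # executed and logged
hiring.run_task(screener, "reject_candidate", "carol")          # escalated to a human
```

The point of the sketch is the ordering: scope and guardrails are defined before the agent runs, and the audit log captures what was delegated, which is exactly the design conversation the accumulated-procurement path skips.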
18:04
The organizations that get ahead of this transition will enter it with clear workflow maps and
18:08
defined accountability structures.
18:12
The ones that don't will find themselves importing their accumulated default choices into
18:17
the new architecture and inheriting all the governance gaps that came with them.
18:21
Now the language and messaging lens.
18:24
Why does everyone say augment when the direction is replace?
18:28
Because augment gets funded, replace gets scrutinized, and the actual outcome is somewhere
18:33
neither word honestly describes.
18:36
There is a phrase that appears in virtually every vendor pitch, analyst briefing, and
18:40
enterprise communication about AI and automation.
18:44
We augment human capabilities.
18:46
We don't replace them.
18:48
It surfaces in hiring tech.
18:50
It surfaces in legal AI.
18:53
It surfaces in financial automation and customer service platforms.
18:57
It is, at this point, essentially obligatory in the category.
19:02
The phrase is doing real work.
19:04
It manages three audiences simultaneously.
19:07
Employees watching their job functions shift, procurement committees answering to boards
19:11
who want to see AI investment justified, and regulators watching closely how AI is being
19:16
deployed in consequential decisions.
19:20
Augment, not replace, threads all three needles cleanly.
19:24
It implies human oversight remains intact,
19:27
accountability structures are unchanged, and the organization is being measured and responsible.
19:33
But walk the data back against that framing, and it doesn't hold up.
19:37
Swimlane projects AI will resolve or escalate over 90% of Tier 1 security alerts by 2026.
19:44
Not assist with them.
19:47
Gartner projects autonomous AI handling 80% of customer service issues without human involvement by 2029.
19:55
The legal contract review tools marketing 75% to 85% time reductions aren't augmenting review.
20:02
They're doing the task and asking the lawyer to review the output.
20:06
The hiring platforms aren't helping recruiters screen faster.
20:10
They're screening and asking the recruiter to validate the ranking.
20:14
When the AI handles 80% of the task and the human handles exceptions after the fact, that's
20:20
not augmentation in any meaningful operational sense.
20:24
That's oversight of an autonomous system.
20:27
The distinction has direct implications for where accountability lives, what skills the
20:31
organization needs to maintain, and what happens when the output is wrong.
20:36
I explored a version of this tension from an unexpected angle on the Music Evolves Podcast
20:41
in a conversation with Chandler Lawn, AI Innovation and Law Fellow at the University of Texas
20:47
School of Law, Drew Thurlow, adjunct professor at Berklee College of Music, and Pooja
20:52
Parto-Navid, partner at Seyfarth Shaw.
20:56
We were talking about AI-generated music and who owns the output, but the underlying
21:01
question was the same one running through every enterprise workflow.
21:05
When the system produces the thing that used to require a human, what does the human's
21:09
role actually become?
21:12
The music industry is a few years ahead of most enterprise functions on this curve.
21:17
Universal Music Group and Warner Music Group both reached landmark settlements with AI
21:21
music platforms in late 2025.
21:25
The answers they're landing on involve drawing explicit lines around what requires human
21:29
creative judgment and what can be systematically produced.
21:34
Enterprise operations will need to draw the same kinds of lines, probably with less drama,
21:38
but with the same underlying logic.
21:41
The language gap has a practical consequence beyond messaging.
21:45
When leadership describes every AI deployment as augmentation, it becomes difficult to
21:50
have honest internal conversations about what the organization has actually delegated,
21:56
where the accountability gaps are, and what happens when a consequential decision turns out to be wrong.
22:02
That conversation is easier to have before the workflow is fully assembled than after.
22:07
Gartner's prediction that over 40% of agentic AI projects will be canceled by end of
22:12
2027 is worth reading through this lens.
22:16
It's not because the technology fails.
22:18
It's because organizations bought capability without building the governance, accountability
22:23
structures, and organizational clarity to run it responsibly.
22:27
The language that got the tool funded made those harder conversations easier to avoid.
22:33
They don't stay avoided.
22:34
At Black Hat USA 2025, Marco Ciappelli and I talked after walking the floor about exactly this.
22:42
When every vendor claims the same positioning, the actual distinctions disappear from the conversation.
22:48
In our post-show episode, we called it the marketing milkshake problem.
22:53
Every vendor's message going into the same promotional blender and coming out tasting
22:57
the same, regardless of what the underlying technology actually does.
23:02
The agent-washing problem isn't just a market integrity issue.
23:05
It's a decision-quality issue for every organization trying to figure out what they're actually
23:10
acquiring and what decision authority they're actually transferring.
23:14
And now, the fourth lens.
23:16
When did you decide to hand over control and who was in the room when you did?
23:21
Here is what I keep coming back to when I look at all three lenses together.
23:25
And it's the thing I find myself saying in conversations that rarely makes it into polished
23:29
conference presentations.
23:31
We are already past the point of no return.
23:34
The human optional workflow is not the exception being cautiously piloted.
23:39
It is the operational default for hiring, contracting, finance, customer service, and security
23:45
operations in organizations that made five individually rational procurement decisions
23:50
and never looked at what those decisions assembled.
23:53
That's not naivety.
23:55
I want to be clear about that.
23:57
The organizations deploying these tools are not confused about what they're buying.
24:02
What they haven't done, and what the vendors selling to them have never required them to do,
24:07
is map the cumulative shape of those decisions before committing to them.
24:11
And I don't think that's an accident.
24:13
When I look at the language the vendor market has built around this transition, augment,
24:18
not replace, human in the loop, AI assisted, I don't read it as imprecise.
24:24
I read it as precise in exactly the right direction.
24:27
These are companies staffed with product managers, lawyers, and communications teams who understand
24:32
exactly what their tools do when deployed at scale.
24:36
Augment, not replace, threads every needle it needs to thread: employee relations, procurement
24:42
approval, regulatory scrutiny, board optics.
24:45
It's not a description, it's a strategy, and it has worked because organizations bought
24:50
the framing along with the capability and now have workflows they couldn't honestly
24:54
describe as augmented with a straight face.
24:59
So where does accountability land when an AI-assembled workflow produces a bad outcome?
25:06
That is not a hypothetical question.
25:08
The procurement signer approved a task-specific tool with its own contained ROI case.
25:14
The vendor sold a product that performs as specified.
25:18
The workflow that those tools assembled collectively, that sits in a gap: between contracts, between
25:23
org chart lines, between the legal definitions anyone drafted when they wrote the terms of service.
25:29
Nobody owns the workflow, everybody owns a task.
25:33
That should be alarming.
25:35
Not because bad outcomes are inevitable, but because the accountability structure that
25:39
would catch and correct a bad outcome before it becomes a crisis.
25:43
That structure doesn't exist yet in most organizations.
25:46
The efficiency gains are real and already in the budget.
25:50
The accountability architecture is still theoretical.
25:53
What I'm watching closely is whether the agentic orchestration platform changes this dynamic or accelerates it.
26:00
My honest read: both, depending on the organization.
26:04
A small group of mature, deliberate organizations, the ones who were already doing workflow mapping
26:10
before procurement, who already had security and legal at the design table, will use Nintex,
26:16
Copilot Studio, and platforms like them to do exactly what those platforms were designed to do.
26:23
Define the workflow first, configure the agents inside it, set the guard rails before deployment,
26:29
and maintain a complete audit trail of what was delegated and why.
26:33
For those organizations, the platform genuinely forces the design conversation, because you
26:38
cannot configure guard rails without deciding what you're guarding.
26:42
For everyone else, the platform will make accumulation faster and cheaper.
26:46
The same five individual decisions, each locally rational, collectively unexamined, will
26:51
just be easier to make in one place.
26:55
Here's the structural reality I think most organizations are not yet reckoning with.
27:00
The auditors haven't arrived yet.
27:02
The regulatory frameworks that will eventually require organizations to account for autonomous
27:07
workflow decisions, who authorized them, under what criteria, with what human oversight, and
27:13
how exceptions are handled, those frameworks are being drafted right now.
27:17
GDPR took years to land on AI.
27:21
The EU AI Act is already in motion.
27:24
The US regulatory posture is slower but not absent.
27:28
The window between we accumulated this workflow through procurement, and we need to demonstrate
27:34
we designed it with intention is open, but it is not going to stay open.
27:39
The organizations that use that window to map what they've built, establish where accountability
27:44
sits, and make explicit decisions about what requires human judgment, not because the
27:49
AI can't do it, but because the organization has determined that accountability requires
27:54
a person, will be positioned to operate without disruption when the frameworks arrive.
27:59
The ones that don't will discover that the workflow they built by default is not the workflow
28:04
they would have chosen under scrutiny.
28:07
The vendors knew what they were building. The buyers, in most cases, didn't ask the right
28:11
questions. The auditors haven't arrived yet.
28:15
That window is closing.
28:17
If this analysis is useful to you, the full article with all references, data points,
28:21
and links to every podcast conversation mentioned, is at seanmartin.com.
28:26
Search for Lens4, or find me directly at seanmartin.com.
28:31
And if these are the kinds of conversations you want more of, the redefining cybersecurity
28:35
podcast is where I explore them in depth every week with the people building, buying, and
28:41
breaking these systems.
28:42
You can find it wherever you listen to podcasts, or at redefiningcybersecuritypodcast.com.
28:49
Thanks for listening.