
New surveys from PwC, Workday, and Section are being read as evidence that AI is overhyped, but the real story is simpler: companies that deeply integrate AI into core workflows are nearly three times more likely to see real financial gains, while everyone else stalls. This is not a story about AI capability—it’s a story about leadership, integration, and execution. In the headlines: Apple explores a new AI device form factor, Meta previews internally trained models, and Congress moves to tighten oversight of advanced chip exports.
Brought to you by:
KPMG – Discover how AI is transforming possibility into reality. Tune into the new KPMG 'You Can with AI' podcast and unlock insights that will inform smarter decisions inside your enterprise. Listen now and start shaping your future with every episode. https://www.kpmg.us/AIpodcasts
Zencoder - From vibe coding to AI-first engineering - http://zencoder.ai/zenflow
Optimizely Opal - The agent orchestration platform built for marketers - https://www.optimizely.com/theaidailybrief
AssemblyAI - The best way to build Voice AI apps - https://www.assemblyai.com/brief
Section - Build an AI workforce at scale - https://www.sectionai.com/
LandfallIP - AI to Navigate the Patent Process - https://landfallip.com/
Robots & Pencils - Cloud-native AI solutions that power results https://robotsandpencils.com/
The Agent Readiness Audit from Superintelligent - Go to https://besuper.ai/ to request your company's agent readiness score.
The AI Daily Brief helps you understand the most important news and discussions in AI. Subscribe to the podcast version of The AI Daily Brief wherever you listen: https://pod.link/1680633614
Interested in sponsoring the show? [email protected]
Today on the AI Daily Brief, a set of new studies that show the widening gap between enterprise
AI leaders and enterprise AI laggards, and before that in the headlines, Apple is reportedly
developing an AI wearable pin.
The AI Daily Brief is a daily podcast and video about the most important news and discussions
in AI.
Alright friends, quick announcements before we dive in.
First of all, thank you to today's sponsors, Optimizely, Zencoder, AssemblyAI, and Super
Intelligent.
To get an ad-free version of the show, go to patreon.com/AIDailyBrief, or you can of course
subscribe on Apple Podcasts.
In either case, ad-free is just going to be $3 a month, and if you are interested in sponsoring
the show, send us a note at [email protected].
Welcome back to the AI Daily Brief headlines edition, all the daily AI news you need in around
5 minutes.
A couple of years ago, when Humane announced their AI Pin, no one could mistake the
self-conscious references to Apple all over that company.
Some of the founders were ex-Apple, the aesthetic was very Jobs-ian, and the design of the
device was clearly striving to hit some of that simplicity.
Now we all know how that story ended, with a bang not a whimper, as YouTube reviewer
Marques Brownlee called it the worst product he ever reviewed.
Apparently Apple has now decided that they want a bite at the apple, as it were.
The Information reports that Apple's new AI wearable pin will contain a pair of cameras
and three microphones.
The design is described as a thin, flat, circular device, with an aluminum and glass shell, around
the same size as an AirTag, only slightly thicker.
The Information noted that it isn't clear whether this is a standalone device or something
designed to be bundled with smart glasses or other devices.
The report states that Apple may attempt to accelerate development of the product to
compete with the OpenAI device; according to The Information, the pin could be released
next year with a production run of 20 million units at launch.
The takes weren't great, showing the skepticism that has brewed around Apple's AI strategy
over the last couple of years.
Naveen on X writes, Apple developing a dedicated AI wearable is an admission of failure.
They already own the two best wearables on Earth, the watch and AirPods.
If they need a new plastic bubble to make AI useful, it means they can't make Siri work
on the devices we already own.
Prediction, it will be a $300 accessory that still requires an iPhone to function.
Akash Gupta compared them to Meta and said Apple just told you they're two years behind
the one form factor that actually works.
Meta shipped 4 million AI glasses in 2025 and owns 80% of the market.
Sales tripled year over year.
The Ray-Ban Display version sold out in 48 hours.
Meanwhile Apple is prototyping a pin.
The last company that tried this was Humane.
They raised $240 million, launched at $700, got called the worst product I've ever reviewed,
and sold to HP for $116 million less than a year later.
Apple watched all of this happen and decided to build the same thing.
Now Akash argues the form factor war is over and glasses won.
People already wear glasses; no behavior change required.
But I think Naveen is also right when he says that the two best wearables on Earth are
the watch and AirPods.
As I've said numerous times on this show before, I think AirPods in particular have a potentially
unique role to play.
But then again, I'm also a boomer who isn't fully on board with everyone's seeming critiques
of the terrors of the phone.
So who knows.
Now speaking of Siri, according to Bloomberg's Apple insider Mark Gurman, the company is
planning to turn Siri into a ChatGPT-style chatbot.
Writes Gurman:
The chatbot, codenamed Kampos, will be embedded deeply into the iPhone, iPad and Mac operating
systems and replace the current Siri interface.
Users will be able to summon the new service the same way they open Siri now, by speaking
the Siri command or holding down the side button of their iPhone or iPad.
Siri will accept both speech and text inputs, mimicking the user experience of rival chatbots.
Now on the one hand, this feels completely inevitable and yet at the same time, it is
something of a notable pivot.
One of the outgoing AI leaders, Craig Federighi, had been adamant that he didn't want Siri
to be a chatbot.
Gurman also reports that Siri will be driven by a custom version of Gemini under Apple's
new partnership with Google.
His sources said that the custom build will allow Siri to, quote, significantly surpass
its personalization features.
That includes integration with Apple's core apps and the ability to use open windows
and on-screen data as inputs.
Apple intends for Siri to have the ability to control the device, including accessing
the file system, placing phone calls and using the camera.
The headline suggests that this was a move to fend off OpenAI.
All of which brings up, to me, the sort of obvious thing: perhaps, rather than thinking
of this new superpowered Siri solely as a competitor to ChatGPT, the better reference point,
given the deep integration with the operating system, might be Claude Code.
Still, whatever it ends up being, there are so many people who just want Siri to actually
be able to do what it seems like it should have been able to do for the last five years,
that I think that when it comes, people will suspend their skepticism for a little while
just to be able to keep using the devices they already own.
Now moving over to another company that's got a lot to prove in 2026, new model training
has apparently been achieved internally at Meta as their new AI team ships a preview.
At a press briefing in Davos, CTO Andrew Bosworth said the superintelligence team delivered
their first AI models earlier this month for internal testing.
Bosworth said they're basically six months into the work and that the models are very
good.
Now back in December, we learned that Meta was developing two models, Avocado, a language
model that reportedly excels at coding tasks, and Mango, a visual model with image and
video capabilities.
Bosworth didn't confirm whether these were the models delivered, but did comment on how much
work had happened since the summer.
He said there's a tremendous amount of work to do post training to actually deliver
the model in a way that's usable internally and by consumers.
Overall Bosworth said that Meta felt like they were seeing returns from the big moves
that were made in 2025.
Bosworth acknowledged that it was a tremendously chaotic year, but remember Google had one
of those in 2024 and we've seen how that worked out.
Consumer AI certainly seems to be Bosworth's north star for Meta's product.
In a discussion of the AI bubble at Davos, he said, I think consumers and societies are
ultimately the beneficiaries of this tremendous land grab of power, data centers and GPU capacity.
Speaking of GPUs, one of my predictions for this year was Congress potentially trying
to wrest control back from the White House when it came to chip export policy, and that
certainly seems to be happening.
The House Oversight Committee has advanced a bill to seize power on chip export controls.
On Wednesday, the committee voted overwhelmingly in favor of the AI Overwatch Act.
The bill would grant Congress the power to review and block chip export licenses granted
by the Commerce Department.
That power would be vested in both the House Foreign Affairs Committee and the Senate
Banking Committee, giving both chambers a veto over advanced chip exports.
Essentially, the power mimics congressional oversight for arms deals.
The bill also includes a two-year ban on the export of NVIDIA's top-of-the-line Blackwell
chips, which of course have been considered for export as part of recent changes.
In a bipartisan vote, the bill gathered 42 votes in favor with only two opposed.
It will still need approval in the Senate Banking Committee and a full vote across both
chambers, but it seems like it has strong momentum from both parties.
Whether the president will sign a bill that limits his own power is another question.
It's beyond the scope of this show, but there is a lot of interesting intrigue when it
comes to divisions in the GOP around this.
Republican Brian Mast, the chief sponsor of the bill, commented:
If we were just talking about war games on Xbox, then Jensen Huang could sell as many
chips as he wants to anybody that he wants.
But this is not about kids playing Halo on their television.
This is about the future of military warfare.
I believe that we all agree that we are in an AI arms race.
So why wouldn't we want to know what the AI arms dealers want to sell to our adversaries?
Lastly, today, a quick update or signal story that we've been following for the past week
or so.
OpenAI has announced a leadership shake-up around returning staffer Barret Zoph.
According to The Information, CEO of Applications Fiji Simo has announced that Zoph will now
lead the enterprise division.
COO Brad Lightcap will hand over responsibility for product and engineering for the enterprise
to focus on what they call commercial functions.
In a separate move, CTO of Applications and former Meta engineering lead Vijaye Raji
will lead OpenAI's advertising push.
Simo said the moves were designed to bring research, product and engineering teams into
better alignment.
Zoph was, of course, the locus of controversy earlier this month after his shock departure
from Thinking Machines Lab, where he was listed as one of the co-founders.
The story devolved into he-said-she-said coverage as sources speculated on the true reason
for his departure.
Ultimately, though, more interesting than that is the fact that OpenAI has decided to name
him head of enterprise when that is a very important and contested area.
And indeed, the area of focus for our main episode.
Most marketing teams aren't short on ideas, but what they are short on is time, and that's
exactly what Optimizely Opal gives you back, with AI agents that handle real marketing
workflows, you know, like creating content and checking compliance, generating experiment
variations, personalizing user experiences, analyzing pages for GEO, even tasks like approvals
and reporting.
It's your AI agent orchestration platform for marketing and digital teams, plugging seamlessly
into the tools you already use, handling the boring busy work and keeping everything on
brand.
That leaves you marketers with more time to do your actual job.
See what Opal can automate for your team by signing up for a free enterprise agentic
AI workshop with Optimizely.
Learn more at optimizely.com/theaidailybrief.
If you're using AI to code, ask yourself, are you building software or are you just playing
prompt roulette?
We know that unstructured prompting works at first, but eventually it leads to AI slop
and technical debt.
Enter ZenFlow.
ZenFlow takes you from vibe coding to AI first engineering.
It's the first AI orchestration layer that brings discipline to the chaos.
It transforms freeform prompting into spec-driven workflows and multi-agent verification, where
agents actually cross-check each other to prevent drift.
You can even command a fleet of parallel agents to implement features and fix bugs simultaneously.
Teams using ZenFlow accelerate delivery 2x to 10x.
Stop gambling with prompts, start orchestrating your AI, and turn raw speed into reliable,
production-grade output at zencoder.ai/zenflow.
If you're building anything with voice AI, you need to know about AssemblyAI.
They've built the best speech-to-text and speech understanding models in the industry,
the quiet infrastructure behind products like Granola, Dovetail, Ashby, and Cluely.
Now, as I've said before, voice is one of the most important modalities of AI.
It's the most natural human interface, and I think it's a key part of where the next
wave of innovation is going to happen.
AssemblyAI's models lead the field in accuracy and quality so you can actually trust the
data your product is built on.
And their speech understanding models help you go beyond transcription, uncovering insights,
identifying speakers, and surfacing key moments automatically.
It's developer-first: no contracts, pay only for what you use, and it scales effortlessly.
Go to assemblyai.com/brief, grab $50 in free credits, and start building your
voice AI product today.
Today's episode is brought to you by Super Intelligent.
Super Intelligent is a platform that very simply put is all about helping your company
figure out how to use AI better.
We deploy voice agents to interview people across your company, combine that with proprietary
intelligence about what's working for other companies, and give you a set of recommendations
around use cases and change management initiatives that add up to an AI roadmap that can help
you get value out of AI for your company.
But now we want to empower the folks inside your team who are responsible for that transformation
with an even more direct platform.
Our forthcoming AI Strategy Compass tool is ready to start being tested.
This is a power tool for anyone who is responsible for AI adoption or AI transformation inside
their companies.
It's going to allow you to do a lot of the things that we do at Super Intelligent, but in
a much more automated self-managed way and with a totally different cost structure.
If you are interested in checking it out, go to aidailybrief.ai/compass, fill out
the form, and we will be in touch soon.
Welcome back to the AI Daily Brief.
Today we are talking about a trio of new surveys that help tell the contemporary story of
Enterprise AI as it is deployed.
The surveys come from PwC, Workday, and AI training consultancy Section, and
part of the reason that I wanted to do this episode is not only that there is rich, interesting
information in that data, but that the initial reporting around it has a very distinct slant,
which, while I don't think is wrong, I do believe is misleading in a way that could be dangerous.
In short, the mainstream reporting around these surveys leads to a suggestion of AI
underperformance.
It contributes to a sensibility that AI is overhyped.
The story that I think the data is actually telling is yet more evidence of the widening
gap between leaders and laggards when it comes to AI adoption.
And the implications of those two stories are very, very different.
So let's talk about how the media, specifically the Wall Street Journal, summed up the story
with their piece, CEOs say AI is making work more efficient, employees tell a different story.
And again, to be clear, none of the data that the Wall Street Journal here is focusing
on is incorrect or even insignificant.
Their first graph shows a statistic from the Section survey; for full disclosure, Section is
a sponsor of the show right now, but they actually didn't even share the survey with me; I only
found it when I saw this Wall Street Journal article.
In any case, that survey was conducted around the same time as our AI ROI survey for AIDB
and surveyed 5,000 white collar workers from companies with a thousand people or more
in the US, UK and Canada, concentrated around October of last year.
When asked how much time they think they personally are saving each week by using AI, the
C-suite was saving a ton of time, 33% were saving four to eight hours, a quarter were
saving eight to 12 hours, and almost a fifth were saving more than 12 hours a week.
Meanwhile, among workers, only 2% were saving more than 12 hours per week, more than a quarter
were saving less than two hours, and the largest category by far, 40%, said they were saving
no time.
A representative quote comes from North Carolina user experience designer Steve McGarvey,
who said, executives automatically assume AI is going to be the savior.
I can't count the number of times that I've sought a solution for a problem asked to
an LLM, and it gave me a solution to an accessibility problem that was completely wrong, which
brings us to the workday research.
The headline stat from that survey was that 37% of the time saved through AI is being
offset by rework.
In the executive summary they write, employees report spending significant time correcting,
clarifying or rewriting low quality AI generated content, essentially creating an AI tax on productivity.
For every 10 hours of efficiency gained through AI, nearly 4 hours are lost to fixing its output.
In other words, they say one and a half weeks a year is being lost to fixing AI outputs
per highly engaged employee.
Another area of divergence between workers and executives had to do with anxiety around
AI.
For workers, the percentage who said they were anxious or overwhelmed versus excited, was
nearly a 70-30 split to the side of anxious.
On the C-suite, the split went the other way, with more than 70% being excited, while
less than 30% were anxious or overwhelmed.
All of this leads to what can only be described as underperformance in terms of actual financial
impact.
They highlighted that in a new PwC survey of CEOs that was released to coincide with
the WEF in Davos this week, just 12% of CEOs said AI had delivered both cost and revenue
benefits, while 56%, more than half of the nearly 4,500 CEOs polled, said they had seen no
significant financial benefit so far.
So like I said at the beginning, none of this is incorrect information, and all of it
is interesting and important signal.
My concern is that the way that it's being presented contributes to a sensibility that
AI itself is underperforming and that AI itself is overhyped.
The reason that matters is that it has the potential to change the way that individuals
and companies think about AI adoption.
Put simply, some number of people are going to see this, and feel like it perhaps takes
them off the hook a bit.
That in other words, they were right to be skeptical, and that maybe they don't have
to figure out where to carve out the time to learn how to use these new tools, because
they're not all that good anyway.
On an individual to say nothing of a company level, this is not a winning strategy for
adapting to the new world that has in fact already arrived.
As I said at the beginning, my interpretation is a little bit different.
I think that all of these studies are actually adding up to a story of a widening gap between
leaders and laggards.
Let's hone in and focus on the vanguard of companies, those 12% who are seeing both an
increase in revenue and a decrease in cost from AI, rather than focusing on the 56% of
CEOs who haven't seen a change in either revenue or cost.
What makes these 12% different?
Are they doing things that are different?
The short answer is absolutely yes.
Those Vanguard companies, that top 12% who are actually seeing double financial gains in
terms of increased revenue and reduced cost, are 2.6 times more likely to have embedded
AI into their core processes.
44% of those companies say that they are deploying AI to a large extent, versus just 17%.
PwC writes, foundations matter as much as scale.
CEOs whose organizations have established strong AI foundations, such as responsible AI
frameworks and technology environments that enable enterprise-wide integration are
three times more likely to report meaningful financial returns.
In other words, deeply integrating AI triples your likelihood of positive outcomes.
This might be old hat for many of you who are here, but the glaring point that stands
out from that is that this is a story of enterprise environment more than a story of AI capability.
Now, let's take a look at the Section report, which is explicitly about AI proficiency.
And once again, the headline stat here is not encouraging.
By Section's metrics, just 3% of employees are using AI proficiently, as opposed to 97%
who are either AI novices or AI experimenters.
40% said that they'd be fine never using AI again.
But the story that actually emerges is that employees aren't being given the tools to succeed.
85% of the knowledge workers from the Section survey either had no work-related AI use cases
or only beginner-level ones.
59% of the reported AI use cases were basic task assistance, things like replacing Google
search, drafting, editing, and summarizing documents.
Only 2% of respondents have built any sort of automation, and only 3% of respondents
said their most valuable use case was data analysis or code generation.
Indeed, only 2% overall of use cases were judged to be advanced.
In short, companies are not giving their employees tools to go beyond the most basic
of use cases.
They are instead dropping LLMs on top of their heads, in many cases LLMs that, based on
enterprise deployments, are a generation or two behind, and telling them to make it work.
And we do see that organizations providing their employees tools have more proficient
employees.
Compared to a baseline employee across the whole study, employees that have access to tools
have 1.5 times the proficiency of the baseline, while for employees who report having a
coherent company AI strategy, the multiplier is 1.6.
And by far the biggest multiplier comes from employees whose managers explicitly expect
AI usage, who are 2.6 times more AI proficient than the baseline employee in the study.
Leadership expectation is the strongest catalyst because it signals that AI is now core work,
and, although it's not captured in the study, my strong guess is that providing time for people
to actually learn and experiment with the tools outside the bounds of their normal
work, rather than expecting them to just figure out when to go do that exploration for
themselves, is part of what leads to nearly 3x more proficiency than the baseline.
Now one of the things that shows up in the Section report is also just this catastrophic
divergence in perception between the C-suite and individual contributors.
This reminds me of a study from Writer back in December of 2024 that found, among other
things, a 30-point gap between the percentage of C-suite executives who said that their company
had a coherent AI strategy and the percentage of employees who thought so.
That gap is actually even larger in this study.
81% of C-suite officers surveyed said that their company had a clear AI policy, compared
to just 28% of individual contributors, a 53-point gap.
The encouragement to experiment was 51% for C-suite versus just 20% for employees.
Tool access was 80% for the C-suite versus 32% for employees.
Training received was the widest divergence, with 81% of the C-suite reporting that they
had been trained, and only 27% of individual contributors.
What this means is that these challenges will not self-correct.
The C-suite by and large, at least from these studies, is not recognizing the problem.
And on the same theme of the problem not fixing itself, we see reinvestment failure becoming
a major bottleneck.
This comes from that Workday study, which found that nearly 40% of AI time savings are
lost to fixing AI output.
When asked about how they reinvest their savings from AI into the organization, 39% goes
into tech infrastructure versus just 30% into workforce development.
And importantly, the numbers are even more dramatic for time savings allocation.
53% of reinvestment of the time saved goes into systems versus just 29% for people
in workforce development.
And to be clear, this doesn't appear to be a strategic determination that investment
into systems is better than investment into people.
59% of leaders say that skills development is their priority, while just 30% of employees
are experiencing that, a 29-point gap.
Zooming out, the Workday study starts to dramatize the leader-laggard gap when it comes
to individual employees.
They divide employee personas into four groups: the observers, who stand on the sidelines,
not wasting time fixing but not generating value either; the misaligned middle, who
are struggling to make tools work and find that the effort required to clean up output
outweighs the benefits; the low-return optimists, who have high AI activity but also high
rework; and the augmented strategists, who are seeing the highest net productivity gains.
There are huge differences in the profiles of the augmented strategists compared to everyone
else.
93% of the augmented strategists treat AI as a radar to spot patterns, rather than a
crutch.
71% are experienced professionals aged 35 to 44; 57% report that their organizations have
increased investment in team connection, which is way higher than the other categories;
and the augmented strategists are two times as likely to have received substantial skills
training.
By contrast, the low-return optimists, who have high enthusiasm but also a high rework
burden, report only 37% increased access to skills training, which is actually the lowest
of any group.
All of this reflects things that we found in our AI ROI benchmarking study as well.
We found there to be statistically significant correlations between the diversity of AI
use cases and how much ROI benefit companies were reporting.
In other words, when companies had AI use cases that were not only for time or cost savings
but also for increasing output, increasing the quality of strategic decision making, unlocking
new capabilities, basically the larger the number of categories of ROI that use cases
led to, the more overall benefit companies were seeing.
We also saw that use cases that focused on strategic outcomes rather than just efficiency
outcomes had higher net reports of ROI benefit.
Ultimately, the story that these surveys tell is that companies who are investing deeply
in putting proper AI foundations into place are seeing two to three times the benefit of
everyone else.
And because of the nature of these tools, those benefits are compounding.
The more proficient with AI you get, the more likely to continue to get further ahead
you are.
The lens, then, to read these studies through is not "AI is overhyped," but instead the very
real burden of infrastructure for AI adoption and the significant gains that come from it.
Now I know that many of you listeners are the folks inside your companies who are responsible
for AI strategy and who are advocating for the sort of policies that we're very clearly
seeing lead to better outcomes.
Hopefully some of these studies then can provide fodder for you to win more internal arguments.
That is what I wish for you, but for now, that's going to do it for today's AI Daily
Brief.
Appreciate you listening or watching, as always, and until next time, peace.

The AI Daily Brief: Artificial Intelligence News and Analysis