
Claude’s latest updates aren’t just incremental—they fundamentally change how you interact with AI, shifting from tool to always-on execution layer. This episode breaks down the biggest new capabilities across Claude Code and Claude Cowork, including remote control, dispatch, channels, scheduled tasks, and full computer use, with a focus on what they actually enable and how to start using them in practice.
Claude updates checklist: https://play.aidailybrief.ai/episodes/how-to-use-claudes-massive-new-upgrades/
Brought to you by:
KPMG – Agentic AI is powering a potential $3 trillion productivity shift, and KPMG’s new paper, Agentic AI Untangled, gives leaders a clear framework to decide whether to build, buy, or borrow—download it at www.kpmg.us/Navigate
Mercury - Modern banking for business and now personal accounts. Learn more at https://mercury.com/personal-banking
Recall - The API for meeting recording. Get started today with $100 in free credits at https://www.recall.ai/aidb
AIUC-1 - Get your agents certified to communicate trust to enterprise buyers - https://www.aiuc-1.com/
Blitzy - Want to accelerate enterprise software development velocity by 5x? https://blitzy.com/
AssemblyAI - The best way to build Voice AI apps - https://www.assemblyai.com/brief
Robots & Pencils - Cloud-native AI solutions that power results https://robotsandpencils.com/
The Agent Readiness Audit from Superintelligent - Go to https://besuper.ai/ to request your company's agent readiness score.
The AI Daily Brief helps you understand the most important news and discussions in AI. Subscribe to the podcast version of The AI Daily Brief wherever you listen: https://pod.link/1680633614
Our Newsletter is BACK: https://aidailybrief.beehiiv.com/
Interested in sponsoring the show? [email protected]
Today on the AI Daily Brief, how to use all of Claude's massive new upgrades.
The AI Daily Brief is a daily podcast and video about the most important news and discussions in AI.
Alright friends, quick announcements before we dive in.
First of all, thank you to today's sponsors, Recall.ai, AssemblyAI, PromptQL, and Blitzy.
To get an ad-free version of the show, go to patreon.com slash AI Daily Brief, or you can subscribe on Apple Podcasts. To learn about sponsoring the show, send us a note at sponsors at aidailybrief.ai. We are getting very close to full with Q2 spots, so if you're promoting, for example, a launch or anything in the near future, it is definitely a good time to reach out.
Now one more note on today's show, yesterday was kind of a big politics and society episode.
Today we're running all the way in the other direction with something a lot more practical.
There have been so many new things that have come out for Claude Code and Claude Cowork that it was time to go through them all, and it ended up running quite long, so we will end up not doing the headlines today; we will be back with a normal headlines episode tomorrow.
Two more things to quickly flag before we get into that. First Agent Madness is live,
the round of 64 is going, hundreds of you have voted, and voting will close for this first round
at the end of the day on Thursday, March 26th, so go to agentmadness.ai to check that out.
And finally, I'm noticing that I'm doing a lot more little companion experiences with these
episodes, and so as of today we're launching play.aidailybrief.ai, which is where all of those
fun little experiences live. So for example, today you can find the checklist of things to try with
Claude right now, and again, that'll live at play.aidailybrief.ai. With all that out of the way,
let's talk about all the cool new things you can do with Claude. Every other day or so for the
last month, there has been some part of the headlines where Claude launched some new feature or
upgrade for Claude Code or Cowork, or some other part of the Claude ecosystem.
And at this point, it has officially gotten to the point where we needed to do a full-on retrospective
of everything that has launched over the last month or so to help you guys map out and figure
out how to use all of these big new upgrades. Now of course, this comes in a specific lineage of
updates. Step one was the models. The Opus 4.5 and GPT-5.2 generation of models that came about at the end of last year catapulted us into a new capability era that Opus 4.6, GPT-5.3 Codex, and GPT-5.4 have
continued. Step two of course was the unlock that OpenClaw represented. It was a harness that
brought with it a whole bunch of concepts and user interaction patterns and behavior sets that made
building your own agents and agent teams all of a sudden more viable and more realistic.
Step three has been the absolute race ever since OpenClaw blew up to bring those types of features
to all the other AI products. This clawification trend has been one of the big themes ever since
OpenClaw launched. Now when OpenClaw's founder Peter Steinberger was hired by OpenAI,
some people jumped in to say that Anthropic had made a big goof by not bringing him over themselves. Others' response was a little bit closer to, I don't know man, let's let him cook and see what
happens. And certainly you have to think that the people saying let them cook are feeling pretty
vindicated right now. The clawification of Claude, maybe we'll call it the claudification,
kicked off at the end of February with remote control. Remote control was a way to bring
Claude Code specifically to your mobile experience. Both the mobile capabilities of OpenClaw, as well as its ability to bridge between different types of devices, were some of the parts of that system that people were most excited about, and so it wasn't all that surprising to see Claude Code jump on that first. What's more, even outside of OpenClaw, this is such a natural extension of the product that you've got to imagine that this would have happened anyway.
The way that remote control works is that you start your task in your Claude Code terminal session,
and then you can pick it up and continue working from your phone. Now there's nothing
happening in the cloud here. Basically when you've started that remote control session on your
machine, Claude is going to continue to run locally that entire time. That gives remote control
the ability to use your full environment, including your file system, MCP servers, tools, and project configurations, and you can go back and forth interchangeably. In the docs they write, unlike Claude Code on the web, which runs on cloud infrastructure, remote control sessions run
directly on your machine and interact with your local file system. The web and mobile interfaces
are just a window into that local session. There are three ways to start a remote control session. First, you can start a dedicated remote control server by navigating to the specific project directory you want to work on and running the Claude remote control command. From there, Claude will display a session URL that you can use to connect from another device, and you can also press your spacebar to get a QR code that you can scan from your phone. Second, there is an interactive session, which basically means that you have the option of going back and forth between using the terminal as you normally would with Claude Code and using the remote session. So basically, the difference is that in server mode you're just using your mobile device to control Claude Code, whereas in interactive mode you can go back and forth. Finally, if you are already in a Claude Code session and you want to move to remote, you can use the slash remote-control command, or slash RC, which will once again display either a URL or QR code that you can use to connect from another device.
First impressions were positive. Prominent solopreneur Pieter Levels wrote, Claude remote control is extremely nice, can edit on macOS or iOS in the Claude app on my production server from anywhere. He basically
compared it favorably to an SSH session, which would be another more technically complex way to
log into your local device to control it while on the go. Roman Mirzoyan writes,
Yesterday I fixed two bugs and then released an app update to the app store without touching my
laptop while having a walk for half a day. Now as time has gone on, people have started to realize that this was a bigger shift than they might have originally thought. Gagan Saludia writes, Claude Code remote control just clicked for me. You kick off a task in the terminal, then pick it up from your phone
on a walk. That's not a productivity feature. That's a relationship shift. You stop thinking of it
as a tool you operate and start thinking of it as something you delegate to and check in with.
Different mental model entirely. I think that that's right and I think that most people are still
just slowly coming to that realization because it's one you kind of have to live not just hear about.
Next up, a couple weeks later we got a different way to interact with Claude from afar. This feature
was for Claude Cowork and is called dispatch. Anthropic's Felix Rieseberg writes, we're shipping a new feature in Claude Cowork as a research preview that I'm excited about.
Dispatch, one persistent conversation with Claude that runs on your computer, message it from your
phone, come back to finished work. Felix then went on to explain a little bit further. Because it's
Cowork, he writes, Claude runs code in a sandbox on your machine. Your files stay local, you approve
what Claude touches before it acts. It feels pretty magical to give Claude a mission on my computer
and get occasional updates like creating reports from internal dashboards or finding me a better
seat on my next flight. Everything Claude can do on your computer, files, browsers, tools, is reachable from wherever you are. Now, one constraint, Felix writes, is that your desktop has to be running.
Claude Code PM Noah Zweben also talked about dispatch. The coolest abilities, he writes: one, send files from your local machine so you can work on PowerPoints on the go; two, spawn sub-sessions on desktop that you can drill down on; three, chat about any local Cowork session. In the docs they
explain a little bit more about how this works. Anthropic writes, instead of starting a new session
for each task, you have a single persistent thread with Claude. This thread doesn't reset.
Claude retains context from previous tasks so you can pick up where you left off,
message Claude from your phone on the way to work, then follow up from your desktop when you sit
down. It's the same conversation, same context wherever you reach it. When you assign a task,
Claude figures out what kind of work is needed and spins up the right session. Development tasks
run in Claude Code, knowledge work runs in Cowork. These sessions appear in their respective
sidebars. You can click into any session for details or wait for the result in the thread.
Claude messages you the outcome, a spreadsheet, a memo, a comparison table, a pull request,
rather than showing you every step of the process. You'll get a push notification on your phone when a task is done or when Claude needs your go-ahead.
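None of that routing is exposed as a public API, but as a mental model, the behavior those docs describe looks roughly like the sketch below. Everything in it, the task type, the keyword classifier, the session spawning, is an illustrative assumption rather than Anthropic's actual implementation.

```typescript
// Illustrative mental model of dispatch routing -- not Anthropic's
// actual implementation. One persistent thread; each task is classified,
// routed to the right kind of session, and the outcome is posted back.
type SessionKind = "claude-code" | "cowork";

interface Task { description: string }
interface ThreadMessage { from: SessionKind; summary: string }

// Hypothetical classifier: development work goes to Claude Code,
// knowledge work (memos, spreadsheets, research) goes to Cowork.
function classify(task: Task): SessionKind {
  const dev = /\b(bug|pull request|refactor|test|ci|deploy)\b/i;
  return dev.test(task.description) ? "claude-code" : "cowork";
}

// The thread never resets: context accumulates across tasks.
const thread: ThreadMessage[] = [];

function dispatch(task: Task): void {
  const kind = classify(task);
  // Placeholder for actually spawning a desktop session of that kind.
  thread.push({ from: kind, summary: `finished: ${task.description}` });
}

dispatch({ description: "Draft the sponsor collaboration memo" }); // -> cowork
dispatch({ description: "Fix the login bug and open a pull request" }); // -> claude-code
```

The point of the sketch is the shape: one thread that never resets, with sessions fanning out on the desktop and outcomes folding back into the same conversation.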
Now, like with remote control, the power users who really started to dig into dispatch found that it was not just a shift in
scale, but a shift in kind. Pavel Heron writes, Dispatch didn't fill my dead time. It changed
how I structured my day. I went to the jump arena with my kid because I could direct work
async from the sidelines. The model isn't grind during the gaps; it's design your day differently, because the work runs without you sitting in front of it. He also wrote an article after 48 hours
of experimenting. And one thing that he points out is that Dispatch is not Claude chat on your phone.
Dispatch, he writes, is an orchestrator. From a single conversation on your phone, you spawn and manage multiple Cowork task sessions running simultaneously on your desktop.
Each session runs independently. Its own context, its own file access, its own connectors.
Your phone is the command chair. Your desktop does the heavy lifting. Think of the difference
between texting someone a request and sitting in a control room with multiple screens.
Each screen is a task session running on your desktop. Your phone directs them all from one
conversation thread. So what are the types of tasks he did? During morning coffee at home,
Pavel started with: pull the latest competitor updates and summarize changes since last week, as well as draft the sponsor collaboration page using the Notion database. While he was walking
the dog, he checked task one, the competitor summary. He followed up with: add a comparison table against our current roadmap. The redirect, he points out, took 10 seconds, one-handed.
While in the passenger seat with his wife driving, he reviewed the new sponsor notion page.
Too formal, he said. Pull the engagement metrics from the last campaign and make the value
proposition sharper. He also started task three, gap analysis on the article draft.
While at the jump arena with his kids, he started working on infographic iterations.
Move the icons left, change the color of the third section. Finally, back at his desk, everything was waiting, and he reviewed, adjusted, and shipped.
Ultimately, he writes, the actual direction time across all of these gaps was maybe 25 minutes total. The Claude execution running in parallel was three-plus hours of work.
Ethan Mollick also had a positive experience with dispatch. He writes, after using it a bit, Claude Cowork dispatch covers 90% of what I was trying to use OpenClaw for, but feels far less likely to upload my entire drive to a malware site. What I like better, he writes: easy, much more stable and safe; existing connectors mean better integration with Gmail, browsers, etc.; very good tool use. What's missing for me? The ability to invite Claude to any channel, the heartbeat and proactivity, and the multiple sessions.
Why is there always a meeting bot in your Zoom call?
Blame Recall.ai. Recall.ai powers the meeting bots and desktop recording apps behind
products like Cluely, HubSpot, and ClickUp. They handle the hard infrastructure work,
capturing clean recordings, transcripts and metadata across Zoom,
Google Meet, Microsoft Teams, in-person meetings and more, so developers don't have to build
it themselves. If you're building a meeting notetaker or anything involving conversational data,
Recall.ai is the API for meeting recording.
Get started today with $100 in free credits at Recall.ai slash AIDB. That's Recall.ai slash AIDB.
If you're building anything with voice AI, you need to know about AssemblyAI.
They've built the best speech to text and speech understanding models in the industry,
the quiet infrastructure behind products like Granola, Dovetail, Ashby, and Cluely.
Now, as I've said before, voice is one of the most important modalities of AI.
It's the most natural human interface, and I think it's a key part of where the next wave
of innovation is going to happen. AssemblyAI's models lead the field in accuracy and
quality so you can actually trust the data your product is built on.
And their speech understanding models help you go beyond transcription, uncovering insights,
identifying speakers, and surfacing key moments automatically.
It's developer-first, with no contracts; pay only for what you use, and it scales effortlessly.
Go to assemblyai.com slash brief, grab $50 in free credits, and start building your voice AI
product today. If you're an operator, your day is a non-stop stream of decisions,
and most of them require you to look at the data. You don't need another dashboard.
You need answers you can trust fast, but the bottleneck is always the same. The data isn't ready,
it's scattered, it's messy, definitions aren't clear. You're waiting on your data team,
or waiting on domain experts for clarification and confirmation.
That's the bottleneck today's sponsor, PromptQL, is built to break. PromptQL is a trusted AI
analyst for high-frequency decision-making. It connects across warehouses, databases,
SaaS, and internal APIs. No massive data prep or centralization required.
It's built for multiplayer input. Teammates can jump into a thread, correct assumptions,
add nuance, and flag edge cases. PromptQL turns everyday conversations into shared context.
If something is ambiguous, it doesn't guess. It escalates to the right expert,
captures the correct logic, and gets it right next time. That's how it delivers trust
and accuracy. Over time, PromptQL specializes to your business, like that veteran employee who
just knows things. From simple what is questions to complex what if scenarios,
you can model impact and stress test decisions before you commit, all through a simple natural
language prompt. PromptQL, the trusted AI analyst for teams with shared context and messy data.
You've tried in-IDE copilots. They're fast, but they only see local silos of your code.
Leverage these tools across a large enterprise codebase and they quickly become less effective.
The fundamental constraint? Context. Blitzy solves this with infinite code context.
Understanding your codebase down to line-level dependencies across millions of lines of code.
While co-pilots help developers write code faster, Blitzy orchestrates thousands of agents that
reason across your full codebase. Allow Blitzy to do the heavy lifting, delivering over 80 percent
of every sprint autonomously with rigorously validated code. Blitzy provides a granular list of
the remaining work for humans to complete with their co-pilots. Tackle feature additions,
large-scale refactors, legacy modernization, greenfield initiatives, all 5x faster.
See the Blitzy difference at Blitzy.com. That's B-L-I-T-Z-Y.com.
Now if that was all, that would already be a huge amount of new interaction modalities to ingest and change how we were interacting with Claude Cowork and Claude Code. But we are not done yet. Next up, we got Claude Code channels, which is basically Claude Code's answer to the OpenClaw interaction of talking to it via Telegram. Anthropic's Thariq writes, we just released Claude Code channels, which allows you to control your Claude Code session through select MCPs, starting with Telegram and Discord. In the docs they write,
a channel is an MCP server that pushes events into your running Claude Code session, so Claude can react to things that happen while you're not at the terminal. Channels can be two-way. Claude reads the event and replies back through the same channel like a chat bridge. Unlike integrations that spawn a fresh Claude session or wait to be polled, the event arrives in the session you already have open. When Larry Vales asked,
how is this better than dispatch, or is it just an alternative to dispatch? Thariq writes, we want to give you a lot of different options on how you talk to Claude remotely.
Channels is more focused on devs who want something hackable. Now, just to add a little bit of clarity here: we see Telegram and we just think about ourselves chatting with Claude Code. But the whole idea of channels is that it's not just chats from you. Yes, that is one type of interaction, but you can also connect other services, for example Sentry monitoring, directly to Claude Code via those channels in Telegram or Discord, so that Claude can react even while you're not there. Damien Galarza writes, instead of Claude pulling data via tool calls, channels push events into the session from the outside: CI failures, webhook payloads, monitoring alerts, chat messages from Telegram or Discord. Anything that can send a POST can now reach your running session.
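To make that concrete, here's a minimal sketch of the kind of glue being described: a tiny webhook receiver that takes whatever POSTs at it, a CI failure or a Sentry alert, say, and forwards it into a Telegram chat that a Claude Code channel could be bridged to. The port and message format are illustrative assumptions; the Telegram Bot API sendMessage call is real, but the bot token and chat ID are placeholders you'd supply yourself.

```typescript
// Hypothetical relay: forward any webhook POST (CI failure, Sentry alert,
// etc.) into a Telegram chat that a Claude Code channel is listening to.
// Assumes Node 18+ for the global fetch; token and chat ID are placeholders.
import { createServer } from "node:http";

const BOT_TOKEN = process.env.TELEGRAM_BOT_TOKEN ?? "<your-bot-token>";
const CHAT_ID = process.env.TELEGRAM_CHAT_ID ?? "<your-chat-id>";

createServer((req, res) => {
  if (req.method !== "POST") {
    res.writeHead(405).end();
    return;
  }
  let body = "";
  req.on("data", (chunk) => (body += chunk));
  req.on("end", async () => {
    // sendMessage is a real Telegram Bot API method; messages are capped
    // at 4096 characters, so truncate the raw payload to fit.
    await fetch(`https://api.telegram.org/bot${BOT_TOKEN}/sendMessage`, {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({
        chat_id: CHAT_ID,
        text: `Event received:\n${body}`.slice(0, 4096),
      }),
    });
    res.writeHead(200).end("ok");
  });
}).listen(8787, () => console.log("relay listening on :8787"));
```

Point a CI webhook or monitoring alert at the relay, and the event lands in the same chat your running session is listening to.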
And it definitely seems like, of these features, channels is on the far end of the technical spectrum. Dario on Twitter writes, if you missed it, channels are essentially MCP servers that push events into a Claude Code session, letting Claude react to the outside world
beyond the terminal. This was the exact missing piece I needed for an idea I'd been brewing.
My goal was twofold: build a custom orchestration system to spawn Claude Code sessions anywhere, Docker, VMs, pods, or locally on my MacBook, and then create a custom app across macOS, iOS,
and iPadOS to control these agents on the go. Fast forward to today, just four days after
channels dropped, and the iOS app is already up and running. I have so many more ideas on how to tweak
and expand this to fit my exact workflow. Now at this point, you'd be forgiven for getting a
little confused, because we're talking about all these new ways to use Claude Cowork and Claude Code remotely, all with slight variations on the theme. This is kind of why you're going to want
to go hack at all of these to understand how they fit your own use cases. But I think one big
takeaway is, in aggregate, a massive shift in how Anthropic is imagining you're going to be working with Claude in the future. Effectively, the idea is an always-on, context-maintaining, persistent, interactive orchestration experience where work is happening all the time, even when you're not doing it, with the right way to interact with that work from wherever you are, whatever your use case is. In addition to those features, we also saw Anthropic go after parity with
OpenClaw when it came to scheduled tasks. At the end of February, Cowork got scheduled tasks: think a morning briefing, weekly spreadsheet updates, or Friday team presentations. Then about a week and a half later, we got local scheduled tasks in Claude Code desktop. Thariq said that his favorite use case is to ask it to check error logs every few hours and create
PRs for any actionable errors. Now because these tasks are local, they run as long as your computer
is awake. But then a couple of weeks after that, we got recurring cloud-based tasks. Noah Zweben again writes: set a repo or repos, a schedule, and a prompt. Claude runs it via cloud infra on your schedule so you don't need to keep Claude Code running on your local machine. He said that so far, the Claude Code team had found it useful for things like sweeping through open PRs,
building features from approved issues, analyzing CI failures overnight, and syncing docs based on
newly merged PRs. And if you're wondering if this had to do with OpenClaw, in another tweet,
Noah said, some might say I built this because I couldn't figure out how to set up my Mac mini.
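For a feel of what the local version of this amounts to, here's a minimal sketch: a scheduler wrapped around a headless prompt. It assumes the claude CLI is installed and that its non-interactive print mode, claude -p, is available; the prompt and the three-hour interval mirror Thariq's error-log use case.

```typescript
// Minimal local scheduler sketch. Assumes the `claude` CLI is installed
// and that its non-interactive print mode (`claude -p "<prompt>"`) is
// available. Like Anthropic's local scheduled tasks, this only runs
// while the machine is awake.
import { execFile } from "node:child_process";

const PROMPT =
  "Check the error logs from the last few hours and open a PR for any actionable errors.";
const EVERY_MS = 3 * 60 * 60 * 1000; // every three hours

function runOnce(): void {
  execFile("claude", ["-p", PROMPT], (err, stdout, stderr) => {
    if (err) {
      console.error("scheduled run failed:", stderr || err.message);
      return;
    }
    console.log("scheduled run output:\n", stdout);
  });
}

runOnce();
setInterval(runOnce, EVERY_MS);
```

The cloud-based version removes exactly that constraint: the schedule runs on Anthropic's infrastructure, so your laptop can sleep.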
And yet, in terms of sheer consumer excitement, all of these were just
prelude to what was announced on Monday night. The official Claude account on Twitter writes, you can now enable Claude to use your computer to complete tasks. It opens your apps,
navigates your browser, fills in spreadsheets, anything you do sitting at your desk.
About 16 hours after that tweet went live, 40 million people had viewed it, 62,000 people had bookmarked it, and the chatter is overwhelming. Anthropic's Felix Rieseberg writes, today we're releasing a feature that allows Claude to control your computer, mouse, keyboard, and screen, giving it the ability to use any app. Now, he also points out that these
features shouldn't be viewed in isolation. Felix writes, I believe this is especially useful if
used with dispatch, which allows you to remotely control Claude on your computer while you're away.
In their announcement blog, Anthropic writes, when Claude doesn't have access to the tools it needs, it will point, click, and navigate what's on your screen to perform the task itself.
It can open files, use the browser, and run dev tools automatically with no setup required.
When using your computer, they say, Claude will reach for the most precise tool first, starting with existing connectors to services like Slack or Google Calendar. However, when Claude finds that there is no connector set up, it can control the browser, mouse, keyboard, and screen to complete different tasks. Now as you might be thinking right now,
this kind of supercharges features like dispatch, and Anthropic agrees, writing, with dispatch you can tell Claude to automatically check your emails every morning, or post some metrics every week, or spin up a Claude Cowork or Claude Code session for a report or a pull request. Claude's new computer use capability makes dispatch even more helpful. Now Claude can use your
computer on your behalf while you're away. For example, to create a morning briefing while you're
on the train, make changes in your IDE, run tests, and put up a PR or keep your 3D printing project
moving according to your initial plan. A lot of the initial response was fairly breathless.
One Twitter user writes, Claude can now control your entire computer with one prompt, forever. You tell it scan my email every morning once, and it just does it forever. It can also open apps, edit spreadsheets, move files, batch process 150 photos in Photoshop, export PDFs, all of it. You can also start a task from your phone, go to dinner, come back to finished work like nothing happened. Anthropic is giving us the full desktop agent that uses the actual screen, mouse, and keyboard. Not a sandbox, not a simulation, your real Jarvis. This is absolutely insane. The LLM era is over. Gagan, who we heard from earlier, writes,
Been testing it in Cowork mode. The thing that gets me isn't the automation. It's that it figures out what you actually meant when you said handle this for me. That's the shift. Bilawal writes, Claude is no longer just a tool that uses the computer. It operates inside it as a
true execution layer. It can replicate everything a human does. Mouse movements, keyboard input,
screen interaction. It can open any application, navigate through it and produce the exact
output you want. This goes beyond APIs. This is full real-world system control, which means from
anywhere you can control your devices and automate workflows end to end. In terms of examples,
a lot of people started fairly simply. Gavin Purcell writes, okay, this is pretty cool. Just had
it grab a photo off my desktop from afar. Dumb and simple, yes, but done in natural language with
no weird setup. Daniel Sohn writes, I don't like posting about anything I haven't tried myself,
so I saw the announcement that Claude dispatch now controls your computer, and I simply asked it to open Twitter and like a Claude post. Done. This is getting more interesting and more dangerous by the minute. Hitesh Sheth writes, this is the first time a mainstream AI product actually behaves
like a junior ops hire that lives on your laptop. And Peter Gustave points out that this could be
especially valuable for corporates stuck in legacy software that isn't going to port into AI
ecosystems natively. He writes, this is another domino to fall, computer use of arbitrary apps,
not just your browser. This is a big deal for a lot of corporates who have custom crappy apps
from 20 or 30 years ago. And summing it all up, Box's Aaron Levie argues that this is, to put it simply, a big deal. In a long post on Twitter, he writes, computer use and the ability to write
and run code on the fly are the ultimate primitives for agents to be able to take on more and more
tasks and knowledge work. Most work requires hopping between multiple applications and working with
broad sets of data in a workflow. And agents will need to be able to traverse these systems to be
able to effectively automate any real work in the enterprise. Now, we will have agents that are
the equivalent of having an expert programmer or any number of them that can write code or use any
API to automate whatever work you're doing. Agents will have access to either a user's computer
and resources or their own sandbox to operate in and be able to pull together the tools necessary
to perform the task at hand. This opens up the broadest set of agentic use cases. To be sure,
there are going to be various hurdles around security, permissions and access control,
identity challenges and more. For instance, should the agent always act on behalf of the user,
or should they have their own identity and limited set of access rights? How do you triage security
events when volume of activity on a system is no longer the reliable signal of a security issue that it historically was? How do you ensure the agent isn't going rogue or getting prompt injected to do something risky?
All problems that need to get figured out. Then there's also lots of work needed to ensure
software is set up to enable agents to operate with their tools in a headless fashion.
This will be an uncomfortable reality for some incumbents, and equally a welcome one for tools
that historically have operated seamlessly via APIs and have business models to support this.
Lots of change coming in the world of work agents and it's going to get pretty wild.
Now we're already going pretty long, but I want to point out that this is not the end of what
has been pushed over the last month. Running quickly through some of the second tier and
quality of life type of announcements, one big complaint that lots of folks have had about
Cowork is that there wasn't a conception of a project. Now there is, which will make a huge
difference for day-to-day functionality. Also in March, the Claude Code team launched code review, which, as they put it, dispatches a team of agents to hunt for bugs. This past month,
the 1 million token context window actually became generally available for both Claude Opus and Claude Sonnet. Y Combinator's Garry Tan wrote, I underestimated how powerful Opus 4.6 with 1
million tokens is. Even last year we were absolutely hitting context limit problems constantly.
1 million tokens mean you can do much more complex analysis entirely in context.
Claude Code is so much better. Meanwhile, over in the main Claude app, Claude can now build interactive charts and diagrams, and we also got upgrades for Claude for Excel and Claude for PowerPoint, including the ability to integrate them together. They write, when you've got more than one file open, Claude shares the full context of your conversation between them. Pull data from spreadsheets, build out tables, and update a deck without re-explaining every step. Skills are also now available inside the Excel and PowerPoint add-ins,
meaning that when your team has some standard workflow, like running a variance analysis or
building a client deck, you can save that as a skill. Other people in your organization can
then run it in one click from their own sidebar. For my free users out there, memory and connectors
are now both available on the free plan, and we also got a whole plug-in marketplace for enterprise
customers. One implication of this is that it's pushing a lot of people to think differently about
just how productive a small team can be. Ethan Mollick again writes,
the ability of the Claude team to learn from things like OpenClaw and implement features like
this on a daily basis is a very strong argument that, for AI-powered coding teams, a very different
software development process is possible with large strategic implications. I think for most of us,
though, it's just a big old checklist of things we need to try. So to quickly sum up, we've got
computer use, which allows Claude to control your computer, doing things even in non-native apps; dispatch, which allows you to initiate sessions and orchestrate tasks in Claude Cowork from your phone; remote control, which allows you to interact with Claude Code while on the go; channels, which allow you to pipe events impacting your software automatically into Claude Code via services like Telegram and Discord; and scheduled tasks, which allow you to have things happening in Claude or Claude Code on a scheduled basis, either locally or in the cloud. Now for those
of you who want to keep track of this simply, you can go find this checklist with links to
information about all these features at play.aidailybrief.ai. So many episodes now have these little companion experiences that I'm going to just be putting them there. Again, that's play.aidailybrief.ai. For now, however, that is going to do it for this episode of the AI Daily Brief. I appreciate you listening or watching, as always, and go have some fun with Claude. Peace!
