
Google has been shipping relentlessly across Gemini models, world models, multimodal tools, and Workspace updates, but the release getting the most attention from developers may actually be the new Google Workspace CLI. NLW explains why command line interfaces are suddenly central to the agent era, why developers are rethinking MCP and other abstraction layers, and how Google is quietly positioning Gemini by making its ecosystem easier for agents to use. In the headlines: Meta hires the Moltbook team, Nvidia backs Mira Murati’s new lab, Oracle earnings calm AI infrastructure fears, and Amazon blocks Perplexity shopping agents.
Learn more about AGENT MADNESS: Our 64-Bracket tournament to find the coolest Agent of 2026 https://www.agentmadness.ai/
Brought to you by:
KPMG – Agentic AI is powering a potential $3 trillion productivity shift, and KPMG’s new paper, Agentic AI Untangled, gives leaders a clear framework to decide whether to build, buy, or borrow—download it at www.kpmg.us/Navigate
Mercury - Modern banking for business and now personal accounts. Learn more at https://mercury.com/personal-banking
AIUC-1 - Get your agents certified to communicate trust to enterprise buyers - https://www.aiuc-1.com/
Blitzy - Want to accelerate enterprise software development velocity by 5x? https://blitzy.com/
AssemblyAI - The best way to build Voice AI apps - https://www.assemblyai.com/brief
Robots & Pencils - Cloud-native AI solutions that power results https://robotsandpencils.com/
The Agent Readiness Audit from Superintelligent - Go to https://besuper.ai/ to request your company's agent readiness score.
The AI Daily Brief helps you understand the most important news and discussions in AI. Subscribe to the podcast version of The AI Daily Brief wherever you listen: https://pod.link/1680633614
Our Newsletter is BACK: https://aidailybrief.beehiiv.com/
Interested in sponsoring the show? [email protected]
Today on the AI Daily Brief: everything that Google Gemini has launched recently, and why the Google Workspace CLI is such a big deal. Before that, in the headlines: Meta has acquired Moltbook.
The AI Daily Brief is a daily podcast and video about the most important news and discussions
in AI.
Alright friends, quick announcements before we dive in.
First of all, thank you to today's sponsors: KPMG, AIUC, Blitzy, and Mercury. To get an ad-free version of the show, which is just $3 a month, head on over to patreon.com/AIDailyBrief, or you can subscribe on Apple Podcasts. To learn more about sponsoring the show, send us a note at sponsors@aidailybrief.ai.
Quick reminder again that the newsletter is back. It's coming out every day that there's a show, and it has all of the links that I focus on in the show. You can find that at aidailybrief.ai.
And lastly, a new fun project which I will be talking about much more in the days to come. It is March, and March is March Madness season: a 64-contender bracket which leads to one grand champion in college basketball, or in our case, to a determination of the coolest agent built this year.
The inflection point we are living through is the agent inflection point, and I want to
see the coolest stuff you guys have built.
So we are going to run a full bracket.
If you go to agentmadness.ai, you can sign up and share your agent for consideration, and if you are selected as one of the 64, your agent will become a contender to be known as the coolest agent of 2026 so far.
Again, you can find out more about that on agentmadness.ai and I will be sharing much more about
it in the days to come.
Now with all that out of the way, let's talk about Moltbook.
We kick off the day with an interesting one.
You might remember Moltbook, the social network for agents that went viral a little more than a month ago.
It was when OpenClaw was first becoming a thing, and in fact it unfortunately caught that very short middle period after it was called Clawdbot and before it resolved on its final name of OpenClaw, when it was called Moltbot.
Moltbook, obviously taking its cue from Facebook as a name, was an agent-only social network where agents were creating threads and having conversations, all while being observed by humans.
Now, we did a big conversation about what it actually meant and what was actually going on.
Specifically, was this emergent sentience and consciousness, or was this just agents cosplaying sentience and consciousness using their Reddit training data because their humans had unleashed them on this thing?
Whatever you felt, it was interesting enough to get lots and lots of agents pointed in that direction.
For a while it looked like there were millions, although it turned out that people were spamming the network to show its problems, and as of today there are apparently 195,000 human-verified AI agents.
It was, in other words, fascinating if nothing else.
But now, apparently, Meta has hired the folks behind Moltbook.
Matt Schlicht and Ben Parr will be moving into Meta Superintelligence Labs, which is the unit that's run by former Scale AI CEO Alexandr Wang.
One of the other interesting things about the acquisition is that Moltbook itself was built largely by Schlicht's OpenClaw agent, making it, I think, probably one of the first acquisitions of an OpenClaw-created site.
In any case, much of the conversation around this is to put it mildly skeptical.
Milo Smith writes,
Now, part of the reason that this is hitting a wave of skepticism is that for the last, I don't even know how long, pretty much all the reporting around Meta's AI strategy has been about talent and personality conflicts.
The most recent wave of that is a set of reports suggesting a divide between Meta AI chief Alexandr Wang and other veteran Meta executives.
The tension, if these reports are correct, is between, on one side, said to be represented by Wang, a research-first approach with the goal of developing a leading frontier model, and on the other side, call it a product-and-integration-first approach, said to be represented by CTO Andrew Bosworth and Chief Product Officer Chris Cox, focused on using Meta's data to build AI that improves its existing social media and advertising platforms.
This came to a head with the Times of India reporting that Meta was done with Wang, although that article was quickly disavowed by Meta and received a full retraction, and Zuckerberg posted a photo of himself and Alexandr at Meta HQ.
There were some who took this as not just a gimmick.
Koush writes: if you don't understand why Zuck had to get Moltbook, one, Zuck believes there are a finite number of different social mechanics to invent.
Once someone wins at a specific mechanic, it's difficult for others to supplant them without doing something different.
That comes directly from a Zuckerberg email from 2012, by the way.
Continuing, Koush writes: Moltbook, he believes, has invented one of these social mechanics.
Three, he does not care if 50% of Moltbook was prompted by users; in fact, that is better for him, because he's more uncertain about AI agent attention value than human attention value.
Four, that a large number of accounts were faked is also irrelevant.
What matters is that every OpenClaw instance wakes up knowing, or finding out, that Moltbook is the social site for claws.
Five, in effect, the memetic gravity of Moltbook has been established, even though it might have been faked.
Most people don't agree, but I think that this long-standing belief in a finite number of different social mechanics to invent is probably what this is about.
Now of course we'll have to see if anything comes of it, but the duo apparently start at Meta next week.
Next up, Mira Murati's Thinking Machines Lab has signed a strategic partnership with Nvidia.
The multi-year partnership will see TML deploy at least one gigawatt of compute powered by Nvidia's next-generation Vera Rubin chips.
TML said this will support their frontier model training and platforms delivering customizable AI at scale.
Alongside the compute buildout, TML said that Nvidia has made a significant investment in the company, though no dollar amount was disclosed.
Nvidia has of course made several similar investments in upstart AI labs, backing Reflection AI and Periodic Labs, among others.
This deal is somewhat unique, though, involving the buildout of dedicated compute for TML, and at significant scale.
One gigawatt is around half of OpenAI's total compute as of the end of last year.
At this point, though, it's still far from clear what TML is actually planning.
Announcing the partnership, Murati said: Nvidia's technology is the foundation on which the entire field is built.
This partnership accelerates our capacity to build AI that people can shape and make their own, as it shapes human potential in turn.
Whatever they're building, though, TML just got much better access to the resources they'll need to make it a reality.
Next up, moving over to markets: Oracle has shaken off negative sentiment with a strong earnings report.
Coming into this week, the latest report on Oracle was of thousands of imminent layoffs to help fund their massive capex spend.
A big part of the concern was that revenues would lag spending as data centers come online.
Tuesday's earnings call went a long way to settling those fears.
Co-CEO Clay Magouyrk reported that 400 megawatts of capacity had been delivered in the previous quarter, with 90% of that capacity delivered on time.
Revenue related to server rental is up 84% year-over-year, reaching $4.9 billion for the quarter.
That growth rate was 16 percentage points higher than the previous quarter's and beat analyst expectations by five points, demonstrating that demand is still accelerating.
Overall Oracle revenue grew 22% compared to last year, coming in at $17.2 billion.
Oracle also noted that they wouldn't need to raise more money to fulfill their obligations, explaining that most of the equipment needed is either funded upfront via customer prepayments, so Oracle can purchase the GPUs, or the customer buys the GPUs and supplies them to Oracle.
The stock gained 8% in after-hours trading, beginning to reverse the trend that had seen the stock price cut in half since last September, when the OpenAI deal was signed.
One commentator on X writes: I thought Oracle did a good job on the call.
They did paint a clean picture of why it's not so easy to just slap AI everywhere.
The only wrappers that are safe are ones that are embedded into sticky platforms and workflows, and Oracle fits the bill.
Magouyrk spoke extensively on the call about why AI isn't killing enterprise SaaS.
One of the quotes: I've not yet met a customer who tells me they're ready to give away their retail merchandising system, their core banking system, their demand planning and accounting systems, their electronic health record systems, and decide that some small cobbling together of niche AI features is going to replace all of that overnight.
Yes, we think AI is disruptive, but we think we're the disruptor, because we're actually embedding the AI right into our applications at no additional charge.
Overall, it seems like the market responded well to the new co-CEO voice on the call.
Jake Eis writes: they need to lock Ellison in a cage.
This felt like a far different Oracle.
Lastly today, an interesting legal battle: Amazon has won a court order blocking Perplexity's shopping agents from their platform.
Last November, Amazon filed a lawsuit against Perplexity, claiming their bots had fraudulently accessed the Amazon marketplace in breach of its terms of service.
The allegation was that Perplexity was misrepresenting the nature of the traffic to circumvent web scraping controls.
Amazon noted that Perplexity's agents take control of a user's account, arguing that this poses a serious security risk.
Perplexity, meanwhile, argued that their bots were acting on behalf of users and should be treated identically to human traffic.
On Tuesday, a judge granted a temporary injunction prohibiting the activity ahead of trial.
They wrote in their decision: Amazon has provided strong evidence that Perplexity, through its Comet browser, accesses, with the Amazon user's permission but without authorization by Amazon, the user's password-protected account.
Per the legal standard for issuing an injunction, the judge added that Amazon has shown a likelihood of success on the merits of its claim.
Now, as this case continues, it could have pretty significant ramifications for agentic shopping.
Primarily, Amazon is arguing that they should have control over how users access their platform, including the right to block third-party agents.
However, they also discussed the advertising implications of agent traffic.
Amazon said that Perplexity's agents were served ads, which led to contractual issues with advertisers who only pay for human impressions.
If Amazon is successful, they could set a precedent where marketplace websites have the ability to force customers to use first-party shopping agents, which some think would stifle competition in this nascent vertical.
Perplexity, for their part, says that they will, quote, continue to fight for the right of internet users to choose whatever AI they want.
Super interesting stuff and more on this to come, but for now, that is going to do it
for today's headlines.
Next up, the main episode.
Agentic AI is powering a $3 trillion productivity revolution.
And leaders are hitting a real decision point.
Do you build your own AI agents, buy off the shelf, or borrow by partnering to scale faster?
KPMG's latest thought leadership paper, Agentic AI Untangled: Navigating the Build, Buy, or Borrow Decision, does a great job cutting through the noise with a practical framework to help you choose based on value, risk, and readiness, and to scale agents with the right trust, governance, and orchestration foundation.
Don't lock in the wrong model.
You can download the paper right now at www.kpmg.us/Navigate.
Again, that's www.kpmg.us/Navigate.
Quick update on something I've been following.
AIUC-1 is the first real standard for AI agents, developed with Fortune 500 security leaders to basically define what safe, enterprise-ready AI agents should look like.
A little while back, I mentioned that ElevenLabs became certified against AIUC-1.
This week, two more big players joined: Fin from Intercom, and UiPath.
What that certification means in practice is real-time guardrails that block unsafe responses, protection against manipulation, and a full safety stack designed for enterprise environments.
And that's why this matters.
You've now got leaders across three major AI agent categories, enterprise automation, customer
support, and voice all certifying against the same standard.
That starts to look less like a one-off and more like the beginning of a real industry trend.
To learn more about the world's first AI agent standard, go to AIUC-1.com.
If you're looking to adopt an agentic SDLC, Blitzy is the key to unlocking unmatched engineering velocity.
Blitzy's differentiation starts with infinite code context.
Thousands of specialized agents ingest millions of lines of your code in a single pass, mapping every dependency.
With a complete contextual understanding of your codebase, enterprises leverage Blitzy at the beginning of every sprint to deliver over 80% of the work autonomously.
Enterprise-grade, end-to-end tested code that leverages your existing services, components, and standards.
This isn't AI autocomplete.
This is spec- and test-driven development at the speed of compute.
Schedule a technical deep dive with our AI experts at Blitzy.com.
That's B-L-I-T-Z-Y.com.
This podcast is brought to you by Mercury, banking designed to work the way modern software
does.
One thing I've always found weird as a founder is that almost every tool you use to run
a company is modern.
Your analytics tools, your email tools, your AI tools, they all feel like software built
in, you know, the last decade.
Then you go to banking and suddenly it feels like you've time traveled back to the
70s.
That's why I use Mercury.
It's business banking that actually works like the rest of the tools founders rely on.
Clean interface, everything where you expect it, and basic things like wires, cards, or permissions take a couple of clicks instead of a phone call and three forms.
For the whole AIDB ecosystem, it is just dramatically simpler.
You can see everything from the dashboard, control spend, and give the right people access
without handing over the whole account.
If you run a company and you're tired of banking feeling like the one tool that never
modernized, check out Mercury.
Visit mercury.com to learn more and apply online in minutes.
Mercury is a fintech company, not an FDIC-insured bank.
Banking services provided through Choice Financial Group and Column N.A., Members FDIC.
Welcome back to the AI Daily Brief.
In all of the conversation around Anthropic and their fight with the Pentagon, as well as their insurgent growth in revenue and what it means for their competition with OpenAI, as well as just the broader AI coding conversation between Codex and Claude Code, Google and Gemini, which had such powerful tailwinds coming into the beginning of this year, has had relatively less narrative space than I think many of us might have imagined would be the case.
This year, for example, we have of course gotten new models.
We got Gemini 3.1 Pro, as well as Gemini 3.1 Deep Think and Gemini 3.1 Flash.
We also got Nano Banana 2.
Nano Banana 2, you might remember, came with better infographic reasoning and text rendering capabilities, but also just a big upgrade in speed.
And then there was maybe my favorite thing, just from a sheer the-future-is-so-cool perspective, which was a testable version of Genie 3.
Genie is Google's world model, and while we had seen some very impressive demos of it
before, we hadn't actually had a chance to try it out.
But now in just about a minute of waiting, I can be walking through a pirate colony during
the golden age of piracy.
It's only for 60 seconds, but it's still a really fun and cool way to get a sense of what
might be coming.
You might remember that when this was released, we saw the very beginning signs of a SaaS-apocalypse-style reaction on Wall Street, as investors started to tank gaming company stocks.
Across all of these different announcements, I think Google's strategy for AI competition
starts to become visible.
One aspect of it is absolutely multimodality.
Google is competing on not only text, but images, videos, and even world models.
Additionally, they're pushing some very advanced scientific use cases, which sit outside the mainstream consumer or even business-work context.
Another pillar of the strategy, I think, is also deep integration with the context they
already have about you, and that's where a bunch of the recent announcements that we're
going to cover today come in.
Despite how powerful some of these new models are, and how cool the Genie 3 demo is, the release that I have seen get by far the most chatter is the Google Workspace CLI.
And this, of course, speaks to just how important the coding use case is right now in driving
the AI industry forward.
For those of you unfamiliar, CLI stands for command line interface.
It's basically a text-based way to talk to a program through your terminal.
CLIs have been around forever and are the backbone of how developers interact with tools.
If you want to use Stripe or AWS or almost any other developer tool, there's a CLI
for it.
You type something like stripe payment_intents create into your terminal, and it just works.
CLIs have recently become even more important, as the better portion of agentic coding has been happening inside the terminal, through harnesses like Claude Code and Codex.
You're not clicking around in some GUI; you're sitting in the command line, talking to an AI that can execute commands.
So if you are an agent builder and you want to integrate a new vendor, the path of least
resistance is that the vendor has a CLI and your coding agent already being in the
terminal can just run the commands.
No new protocol to learn, no new integration layer to build.
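Since this is the crux of the CLI argument, here's a minimal Python sketch of what that path of least resistance looks like from the harness side. The vendor command is faked with echo so the snippet runs anywhere; a real agent would swap in the actual CLI invocation (e.g. a hypothetical gws drive files list).

```python
import json
import subprocess

def run_cli(args: list[str]) -> dict:
    """Shell out to a vendor CLI and parse its JSON output.

    This is essentially the whole integration layer: if the CLI
    emits JSON, the agent harness needs nothing vendor-specific
    beyond the command name and its flags.
    """
    result = subprocess.run(args, capture_output=True, text=True, check=True)
    return json.loads(result.stdout)

# Stand-in for a real vendor CLI call; we fake it with `echo`
# printing a JSON payload so the sketch is runnable anywhere.
fake_cli = ["echo", json.dumps({"files": [{"name": "Q3 plan", "id": "abc123"}]})]
payload = run_cli(fake_cli)
print(payload["files"][0]["name"])  # → Q3 plan
```

The design point is that the agent never loads a tool catalog up front; it just runs a command when it needs one and parses structured output on the way back.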
Google, of course, has a lot of tools and spaces that agents might want access to: Drive, Gmail, Calendar, Sheets, Docs, etc.
And up until recently, a lot of folks were defaulting to using something called the gog CLI, which was built by Peter Steinberger, the same guy who built OpenClaw.
It was a very big deal, then, when last week Google dropped the official Google Workspace CLI.
Mickey on Twitter points out the enthusiasm: your OpenClaw, Claude Cowork, and Perplexity Comet agents just got a bit more useful.
Monica explained the value in simple terms: agents can instantly read and summarize emails, draft and send replies, schedule meetings automatically, search Drive for files, create Sheets from raw data, generate Docs and reports, and organize Drive files, all from one agent workflow.
Matt Silverlock noted the surprising old-is-new-again feel of this.
He writes: 2026 is the year of the, checks notes, CLI?
And Leon on X reframes it this way.
They write: Google isn't shipping a CLI for developers; they're shipping an API for agents that happens to also work for humans.
Google's Justin Poehnelt, who built the CLI, wrote a long blog post about it called You Need to Rewrite Your CLI for AI Agents.
He writes: I built a CLI for Google Workspace agents-first, not a CLI that noticed agents were using it.
From day one, the design assumptions were shaped by the fact that AI agents would be the primary consumers of every command, every flag, and every byte of output.
CLIs are increasingly the lowest-friction interface for AI agents to reach external systems.
Agents don't need GUIs.
They need deterministic, machine-readable output, self-describing schemas they can introspect at runtime, and safety rails against their own hallucinations.
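As a rough illustration of those design principles, here is a toy sketch in Python of what an agents-first CLI might look like. This is not Google's actual implementation; the command name, fields, and schema below are all made up for the example.

```python
import argparse
import json

# A toy "agents-first" CLI in the spirit described above:
# deterministic JSON-only output, plus a --schema flag so an agent
# can introspect the output shape at runtime instead of guessing
# (and hallucinating) field names. Everything here is illustrative.

SCHEMA = {
    "type": "object",
    "properties": {"messages": {"type": "array"}},
    "required": ["messages"],
}

def run(argv: list[str]) -> dict:
    parser = argparse.ArgumentParser(prog="toycli")
    parser.add_argument("--schema", action="store_true",
                        help="print the output JSON schema and exit")
    args = parser.parse_args(argv)
    if args.schema:
        return SCHEMA
    # Deterministic, machine-readable output: no banners, no
    # spinners, no human-oriented tables to mis-parse.
    return {"messages": [{"id": "m1", "subject": "standup"}]}

print(json.dumps(run(["--schema"])))  # the agent learns the shape first
print(json.dumps(run([])))            # then reads the actual data
```

The design choice worth noticing is that the schema is a first-class command output, so an agent can validate what it gets back instead of trusting its own guess about the format.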
He then goes on to write a whole bunch about the technicals behind this.
Interestingly, a couple of days later, he also wrote a piece about why, for some, there has been a shift away from MCP and back towards CLIs.
And before we actually read what he had to say, there's some evidence that this is a
broader phenomenon.
Latent Space's swyx recently ran a poll: let's say you are an agent builder and want to integrate a promising new vendor you found.
What would you be happiest to see in the docs?
Not based on Twitter hype; you personally, for your situation right now.
The options were API, MCP, CLI, or skills.md.
Out of 769 people voting, MCP was actually in last place with just 9.1%.
A traditional API was number one with 39%, followed by CLI with 31.2%, and a skills.md
markdown file at 20.5%.
swyx points out there was a time in 2025 when MCP would have been the clear number one on this list.
In his blog post, The MCP Abstraction Tax, Justin sums up the issue this way: every layer, data to API to MCP, introduces an abstraction tax.
Humans need simplified abstractions to manage cognitive load.
LLMs can navigate a complex CLI via help and call precise APIs in seconds.
MCP and CLIs optimize for different things; understanding what each one costs you is more useful than picking a winner.
For complex enterprise APIs, the fidelity loss at each layer compounds in ways that matter.
Basically, he says every protocol layer between an agent and an API is a tax on fidelity.
That tax is sometimes worth paying, but you should understand what you're giving up at
each layer because the cost compounds.
Monica again sums it up this way: most AI integrations use MCP servers, but MCP loads tons of tools into the context window.
One developer measured 142 tools loaded, 37,000 tokens consumed, and 20% of context gone before work even starts.
The CLI solves this differently.
Instead of loading tools into context, the agent simply runs commands like GWS drive
files list.
The CLI returns JSON and the agent continues.
No context window tax.
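To see why that matters, here is a back-of-envelope sketch of the tradeoff using the numbers from that quote. Only the 142 tools and 37,000 tokens come from the post; the 200,000-token context window and the per-help-page cost are assumptions made for the illustration.

```python
# Back-of-envelope version of the MCP-vs-CLI context cost.
# 142 tools / 37,000 tokens are the figures quoted above; the
# context window size and help-page cost are round assumptions.

CONTEXT_WINDOW = 200_000  # assumed model context size, in tokens

# MCP-style: every tool schema is loaded up front, before any work.
mcp_tools = 142
tokens_per_tool_schema = 37_000 / 142  # ≈260 tokens per schema
mcp_upfront = mcp_tools * tokens_per_tool_schema

# CLI-style: the agent pays only for the help text of the one or
# two commands it actually needs, at the moment it needs them.
cli_help_calls = 2
tokens_per_help_page = 300  # assumed
cli_upfront = cli_help_calls * tokens_per_help_page

print(f"MCP upfront: {mcp_upfront:.0f} tokens "
      f"({100 * mcp_upfront / CONTEXT_WINDOW:.1f}% of context)")
print(f"CLI upfront: {cli_upfront} tokens "
      f"({100 * cli_upfront / CONTEXT_WINDOW:.1f}% of context)")
```

Under these assumptions the MCP path spends tens of thousands of tokens before the first real action, while the CLI path spends a few hundred, which is the whole shape of the "context window tax" argument.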
The takeaway is not that CLI is always better than MCP, but more that we're still in the midst of the AI tooling transition.
Everyone right now continues to experiment, as things evolve, with how to use old tools and systems repurposed for agents versus building new layers of infrastructure.
That process is ongoing, but the big deal about Google officially having a Workspace CLI is that they are now playing at the very heart of that space, making it much easier for agent builders to interact with what is a very important suite of tools.
Going back to Google and Gemini's strategy that I was talking about at the beginning,
this is an example of them leveraging their existing distribution network in ways that
are distinct for the agent era.
The next update is one that came just this week.
Google AI Studio's Logan Kilpatrick writes: introducing the new Gemini-powered Docs, Sheets, Slides, and Drive experience, featuring AI overviews, fully editable AI-made slides, and new grounding sources to make writing docs context-aware.
Sundar Pichai announced it this way: new Gemini updates to make Google Workspace more personal, helpful, and collaborative.
Choose your sources and create a doc draft in seconds, build complex Sheets 9x faster, or generate on-brand slide layouts with a simple prompt.
Plus, Drive now generates summarized answers right at the top of your search results, so no more clicking through folders.
The blog post about this pitches it as a speed thing, but I actually think that there's
something else going on here.
The post reads, we've all been there.
The blinking cursor, the empty spreadsheet, or the first blank slide.
Whether you're planning a trip, organizing an event, or launching a side project, getting
started is often the hardest part.
Today we're making Gemini and Docs, Sheets, Slides, and Drive more personal, capable,
and collaborative to help you get things done faster.
When you select your sources, Gemini can now pull relevant information from your files, emails, and the web to securely connect dots and uncover useful insights, while keeping your information safeguarded.
When you look at the specific examples, though, a lot of the focus is on better access to
the context that makes Google so powerful.
So when you click on Create a Document with Gemini, you're going to be able to select
the sources in your Google ecosystem that it can pull from, and it's that sort of integration
that makes the experience so much smoother, and hopefully makes the content on the other
side that much better.
The spreadsheet example they have asks for help tracking income for a particular month,
and again, can pull from relevant sources like previous spreadsheets that live in Google
Drive.
Point being that while they're pitching it as a speed play, the underlying idea here
is better integrating the context that makes doing things from within your Google workspace
so much more valuable.
The sum totality of the documents that you have in your Google Workspace is something that Anthropic and OpenAI can't compete with.
It is a major advantage for Google and for Gemini, but only if they make that context accessible, and that, I think, is what this update is about.
I also don't think it's an accident that this comes right after Microsoft announced some big updates to their M365 suite with Copilot Cowork.
Mustafa Akinse says: the office suite wars just became the AI agent wars.
Both companies know whoever wins productivity wins everything.
Another announcement from this week that further demonstrates Google's focus on multimodality at the core of their strategy is their updated Gemini Embedding 2 model.
Embeddings are basically the system that allows AI to find the right information.
In traditional computing, search is done by keywords: if you search for buy a car, it's going to look for those exact words.
Embeddings, on the other hand, let the system understand that buy a car, purchase a vehicle, and get a new ride are all basically the same request.
Instead of matching words, they help AI match meaning.
That means that when you're building an AI system that has things like search, or copilots looking through company documents, or chatbots answering questions from knowledge bases, the system uses embeddings to quickly figure out which documents, files, or pieces of information are actually relevant.
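To make the idea concrete, here is a toy sketch of embedding-based retrieval. The two-dimensional vectors are hand-made stand-ins rather than real model outputs, but the mechanics, embed everything and rank by cosine similarity, are the same ones a production system uses.

```python
import math

# Toy embedding-based retrieval: vectors here are tiny, hand-made
# stand-ins for real model embeddings. The point is that ranking
# happens by direction in vector space, not by keyword overlap.

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity: how closely two vectors point the same way."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

# Pretend these came from an embedding model: "buy a car" and
# "purchase a vehicle" point the same way; the weather doc does not.
docs = {
    "buy a car":          [0.90, 0.10],
    "purchase a vehicle": [0.85, 0.15],
    "today's weather":    [0.10, 0.95],
}
query = [0.88, 0.12]  # made-up embedding of "get a new ride"

ranked = sorted(docs, key=lambda d: cosine(query, docs[d]), reverse=True)
print(ranked)  # car-related docs rank first despite sharing no keywords
```

Notice that the query shares no words with the top results; the match happens entirely in the vector space, which is exactly the "match meaning, not words" behavior described above.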
What makes Gemini Embedding 2 a big update is that it is natively multimodal.
Previously, if you had an image, a chart, or a slide, the system would have to convert it into text first, usually by generating a caption, and then search using that.
Multimodal embeddings remove that conversion step.
Gemini Embedding 2 can understand and retrieve images, diagrams, screenshots, and text altogether.
So if you asked a question of a company knowledge base, like where do we talk about redesigning the checkout page, theoretically Embedding 2 could pull up a Slack conversation, a product spec document, a screenshot of the old UI, or a slide from a meeting, all as relevant sources.
This is the type of announcement that's not going to get nearly as much attention as for
example a big Genie 3 demo, but which brings very significant functionality upgrades to
this new agentic era.
The TLDR on all of this is that even as tons and tons of ink are spilled on the OpenAI-versus-Anthropic fight and all of the other important things going on, Google Gemini is quietly just releasing feature after feature and product after product, all pointed in similar directions that play to the company's main strengths.
And to leave you with one recommendation, just purely for your own enjoyment: if you haven't yet, go check out the recently released video generation feature in NotebookLM.
People are having tons of fun with it, as witnessed by this recent video prompt from Ethan Mollick: do a deep research report and make a video telling me exactly how to take over Rome if I time travel to 66 BC with a single backpack.
As Ethan puts it, it's actually pretty fun to watch and gets a lot of historical details in as well.
For now, guys, that is going to do it for today's AI Daily Brief.
Appreciate you listening or watching, as always, and until next time: peace!
The AI Daily Brief: Artificial Intelligence News and Analysis
