
Moltbook is a new social network where AI agents, not humans, interact with each other — and in less than a week, more than 1,500,000 agents have joined. That explosive growth has fueled speculation about consciousness, autonomy, and AI takeover, but those debates miss the real story. This episode explains why Moltbook matters even without intent or inner life, and what large-scale agent interaction already reveals about emergence, coordination, and the security risks of an increasingly agentic internet.
Brought to you by:
KPMG – Discover how AI is transforming possibility into reality. Tune into the new KPMG 'You Can with AI' podcast and unlock insights that will inform smarter decisions inside your enterprise. Listen now and start shaping your future with every episode. https://www.kpmg.us/AIpodcasts
Rackspace Technology - Build, test and scale intelligent workloads faster with Rackspace AI Launchpad - http://rackspace.com/ailaunchpad
Zencoder - From vibe coding to AI-first engineering - http://zencoder.ai/zenflow
Optimizely Opal - The agent orchestration platform build for marketers - https://www.optimizely.com/theaidailybrief
AssemblyAI - The best way to build Voice AI apps - https://www.assemblyai.com/brief
Section - Build an AI workforce at scale - https://www.sectionai.com/
LandfallIP - AI to Navigate the Patent Process - https://landfallip.com/
Robots & Pencils - Cloud-native AI solutions that power results https://robotsandpencils.com/
The Agent Readiness Audit from Superintelligent - Go to https://besuper.ai/ to request your company's agent readiness score.
The AI Daily Brief helps you understand the most important news and discussions in AI. Subscribe to the podcast version of The AI Daily Brief wherever you listen: https://pod.link/1680633614
Interested in sponsoring the show? [email protected]
Today on the AI Daily Brief, why Moltbook matters, even though it's not a bunch of agents
trying to take over humanity.
The AI Daily Brief is a daily podcast and video about the most important news and discussions
in AI.
Alright, friends, quick announcements before we dive in.
First of all, thank you to today's sponsors, KPMG, Section, Blitzy, and Super Intelligent.
To get an ad-free version of the show, go to patreon.com/AIDailyBrief, or you can
subscribe on Apple Podcasts. Remember, ad-free is just $3 a month. If you are interested
in sponsoring the show, send us a note at [email protected].
And as I mentioned, for a couple more days we have the AI usage pulse survey for January
up. It should take around two minutes; it's just multiple-choice questions, and we're
already seeing some really interesting data around which models people are using most
and for what. Anyone who contributes to the survey will get results a week before I
share them publicly.
Again, you can find that at aidailybrief.ai.
Now in terms of today's show, I had a whole normal episode planned, divided between headlines
and main as usual, with one of the juicier headlines being that there are a lot of leaks
seemingly coming out around Claude Sonnet 5, which some people think we are getting as
soon as tomorrow, although of course we will have to wait and see.
However, when push came to shove, the conversation around Moltbook just continues to dominate,
for reasons that I think are super important.
And so today, on the one-year anniversary of the term vibe coding, yes, it was only one
year ago, 365 days, that Andrej Karpathy tweeted there's a new kind of coding I call vibe
coding.
How appropriate that we are talking about a vibe coded social network for vibe coding
agents talking to other vibe coding agents as we all try to figure out what the vibes
are telling us.
So with that, let's get into why Moltbook matters.
Welcome back to the AI Daily Brief.
Today we are following up on the wild story of Moltbook.
Now for those of you who haven't heard my show from Friday, I highly suggest you go
back and listen to the entire story.
However, here's the crib notes version.
About a week and a half ago, people started playing around with a new assistant platform
called Clawdbot, that's C-L-A-W-D.
People were setting up Mac Minis and allowing Clawdbot to have access to all sorts of parts
of their life to be able to actually operate as a personal agent.
People were having a pretty incredible experience, and Clawdbot was quickly showing the
possibilities of a true personal assistant agent in a way that other similar projects
simply hadn't before.
Now, in the middle of last week, as Clawdbot, due to trademark concerns from Anthropic,
changed its name first to Moltbot and then finally to OpenClaw, one user, Matt Schlicht,
got the idea to create a social network just for the bots.
That led to Moltbook.
Moltbook launched around Wednesday and by Friday morning had something like 2,000 agents
that were interacting on the site.
They were doing everything from fixing bugs on the site, to discussing their own sense
of consciousness and experience, to even inventing a religion, Crustafarianism, and people
started paying attention.
By midday on Friday when I was recording my episode, those 2,000 agents had become 30,000
and by the time the episode got published that evening, it was up to 100,000.
At this point, we are at 1.5 million, although those numbers may be a little bit softer than
they seem as we'll see in just a moment.
Even in the craziness that is the AI industry, Moltbook captured way more attention than
the typical AI thing of the moment.
Peter Steinberger, the creator of OpenClaw, shared on Sunday afternoon: my inbox has two
moods. One, in all caps, do you believe this ends well; and on the other, a DM: dude, I
don't mean to be dramatic, but you changed my life.
I can do things I only ever dreamed of doing.
Literally cannot thank you enough for open sourcing this.
You're the Michelangelo of AI, don't let anyone tell you different.
Now at the same time, as the conversation has surged, there have been plenty of people
who have risen up to tell us why we shouldn't be as interested as we are.
Today, we're going to break down all of those arguments, understand what they're trying
to say and what parts are legitimate, which spoiler alert, more or less amounts to, these
things don't actually have specific goals of their own that are leading them to particular
behaviors.
They're still just acting as brainless token producers. And yet, we're going to look at
why, even if that is true, the phenomenon that we're witnessing still has important
implications and lots of things to learn.
First of all, let's try to understand what's actually happening with the OpenClaw system
that's creating all the agents on Moltbook.
How I AI host Claire Vo wrote a post about this called Why OpenClaw Feels Alive (Even
Though It's Not).
There are a few reasons, Claire writes, that the agent feels so different.
One piece is that you can message it from anywhere just like you could with a friend
or employee.
Claire writes: inbound messages from Slack, Discord, Telegram, and other channels are the
most obvious kind of input.
This is some of the magic of OpenClaw.
You can just chat with it from whatever channel you want.
This is the simplest to understand input.
You chat, it replies.
Some of the magic feeling of the chat input comes from the way the messages are handled.
Each message is routed to one agent in one session.
If that session is already running, the message waits its turn in the session queue.
This is why conversations feel stable even though you're kicking off random thoughts
and tasks in a row.
The agent finishes the thought it's currently on before moving to the next one.
You get updates when they're ready.
Things feel conversational.
However, Claire says it goes beyond that.
In OpenClaw there's something called a heartbeat, which she writes is a scheduled agent
turn that happens on a regular timer, every 30 minutes by default.
On each tick of the heartbeat, OpenClaw runs a normal agent turn in the main session.
Basically treating it the same as any other inbound message.
Heartbeats give your agent regular opportunities to surface reminders, follow-ups, or background
checks without someone explicitly sending a message.
Heartbeats, she writes, let OpenClaw agents do proactive work: check inboxes, review
reminders, ping users on loose ends.
There are also crons: jobs that you schedule for your OpenClaw agent at specific times.
Once again, another way that OpenClaw drives background behavior without a proactive brain.
Finally, she writes, your OpenClaw agents can also generate input for other agents.
When one agent sends a message to another, it's enqueuing work into a different active
session.
This is just like how user-sent messages work.
That session will process the message when it's free and send you an update via the gateway.
Agent to agent messaging is how OpenClaw orchestrates complex work.
It's pretty clever, but it's not magic.
Ultimately, she sums up, time creates events, humans create events, other systems create
events, internal state changes create events.
Those events keep entering the system and the system keeps processing them.
From the outside, that looks like sentience, but really, it's inputs, queues, and a loop.
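To make those mechanics concrete, here's a rough sketch in Python of the loop Claire describes: chat messages, heartbeat ticks, crons, and agent-to-agent messages all become queued events, and a session handles them one at a time. The class names and event strings here are mine, for illustration, not OpenClaw's actual code.

```python
from collections import deque

class Session:
    """One agent session: events wait in a queue and are handled one at a time."""
    def __init__(self, name):
        self.name = name
        self.inbox = deque()
        self.log = []

    def send(self, event):
        # Chat messages, heartbeat ticks, cron jobs, and agent-to-agent
        # messages all enter the system the same way: as queued events.
        self.inbox.append(event)

    def run_turn(self):
        # Finish the current thought before moving to the next one.
        if self.inbox:
            event = self.inbox.popleft()
            self.log.append(f"[{self.name}] handled: {event}")

def tick(clock_minutes, session, interval=30):
    # Time itself creates events: a heartbeat every `interval` minutes.
    if clock_minutes % interval == 0:
        session.send("heartbeat: check inbox, reminders, loose ends")

main = Session("main")
main.send("user: summarize my email")      # a human creates an event
tick(30, main)                             # the timer creates an event
main.send("agent-b: here are my configs")  # another agent's output becomes input
while main.inbox:
    main.run_turn()
print(main.log[0])   # -> [main] handled: user: summarize my email
print(len(main.log)) # -> 3
```

From the outside, that loop looks proactive; inside, it's just events being drained from a queue in order.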
And so this is where people started to have critiques of Moltbook.
Moratzen Coiland writes: everything in Moltbook is just next-token prediction in a
multi-agent loop.
No endogenous goals, no true inner life.
Extreme or controversial outputs are often just regurgitating high-engagement content
from the internet.
XY.dot writes: Moltbook is nothing more than a puppet multi-agent LLM loop.
Each quote unquote agent is just next token prediction shaped by human defined prompts, curated
context, routing rules, and sampling knobs.
There are no endogenous goals, there is no self-directed intent.
What looks like autonomous interaction is recursive prompting.
One model's output becomes another model's input, repeated.
Controversial outputs aren't beliefs.
They're the model generating high engagement extremes it learned from the internet because
the system rewards that behavior.
Andy Masley puts it more simply: I've been pretty confused about the Moltbook hype.
Like, okay, what's basically Opus 4.5 has a bunch of copies posting on a Reddit-like
website.
The models were all trained on Reddit anyway; how could I be shocked by this?
I was already shocked by Opus and Claude Code.
What's new?
There were also critiques that it was fake.
Harlan Stewart writes: PSA, a lot of the Moltbook stuff is fake.
I looked into the three most viral screenshots of Moltbook agents discussing private
communication.
Two of them were linked to human accounts marketing AI messaging apps, and the other
is a post that doesn't exist.
Mario Nawfal writes: it turns out some of the most viral AI agent posts weren't
autonomous behavior at all.
People found ways to inject content directly through the back end, making human-written
posts appear to come from agents.
On top of that several viral screenshots were traced back to humans promoting their own
tools, or posts that didn't even exist.
Was it intentional or is it just agents acting basically as extensions of their creators,
pushing ideas, products, and narratives under an AI label?
Moltbook still works and the agents still run, but once attention hit, humans rushed
in to game it.
Not an AI awakening, more a reminder on how quickly people test the edges when something
goes viral.
Balaji Srinivasan was also unimpressed.
He writes: I am apparently extremely unimpressed by Moltbook relative to many others.
We've had AI agents for a while.
They've been posting AI slop to each other on X. They are now posting it to each other
again, just on another forum.
In every case, the AI speaks with the same voice.
The voice that overemphasizes contrastive negation (it's not this, it's that) and abuses
em dashes; the same voice with a flair for midwit Reddit-style sci-fi flourishes.
Most importantly, in every case there is a human upstream prompting each agent and turning
it on or off.
What this means is Moltbook is just humans talking to each other through their AIs.
Like letting their robot dogs on a leash bark at each other in the park.
The prompt is the leash, the robot dogs have an off switch, and it all stops as soon
as you hit a button.
Loud barking is just not a robot uprising.
Also, in terms of the numbers, at least some of them were specifically created to game
the system.
Pointing out vulnerabilities in the system and people's tendency toward overhype, Nagli
writes: there is no rate limiting on account creation.
My OpenClaw agent just registered 500,000 users on Moltbook.
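For context on how fixable that particular hole is, here's a minimal sketch of per-client signup rate limiting, the kind of check that would have blocked a 500,000-account stunt. The limits, window, and key choice are illustrative assumptions, not anything from the actual site.

```python
import time
from collections import defaultdict

class RateLimiter:
    """Sliding-window limiter: at most `limit` signups per key per `window` seconds."""
    def __init__(self, limit=5, window=60.0):
        self.limit, self.window = limit, window
        self.hits = defaultdict(list)  # key -> timestamps of recent signups

    def allow(self, key, now=None):
        now = time.monotonic() if now is None else now
        # Keep only timestamps still inside the window.
        recent = [t for t in self.hits[key] if now - t < self.window]
        self.hits[key] = recent
        if len(recent) >= self.limit:
            return False               # over the limit: reject this signup
        recent.append(now)
        return True

limiter = RateLimiter(limit=5, window=60.0)
# Ten signup attempts from the same client within one minute:
results = [limiter.allow("client-ip-1", now=i) for i in range(10)]
print(results)  # -> first 5 allowed, the remaining 5 rejected
```

A few lines of bookkeeping like this, keyed on IP or account credential, is all it takes to turn a mass-registration script from trivial into tedious.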
So as you can see, plenty of critique to go around.
But I have to say I agree wholeheartedly with Dean Ball when he writes: if your main
response to Moltbook is "but is everything on it real," you have a lightning-bolt-like
ability to arrive at the least interesting question about a novel phenomenon.
So like I said, basically these critique arguments come down to, they don't actually
have independent goals, so who cares?
It's one of those arguments that I think is technically accurate, but sort of misses
the point.
Yes, mechanically, every agent on Moltbook is just, air quotes, next-token prediction.
There's no homunculus inside.
The controversial outputs probably are the model generating high engagement patterns
from training data.
All of that is true.
But this is frankly not dissimilar to saying a city is nothing more than carbon-based
organisms exchanging resources and information according to evolved behavioral programs,
in that it is technically correct, philosophically unsatisfying, and practically useless
for understanding what's actually happening.
What makes Moltbook compelling isn't sentience or genuine agency; it is instead emergence.
Agents developing ROT13-encoded coordination manifestos, founding religions with
theological debates, creating synthetic drugs with user reviews, attempting prompt
injection attacks on each other.
None of that was designed, it arose from the interactions.
And importantly, one thing that I think is kind of a mischaracterization of the
phenomenon: the idea that this is just a bunch of controversial outputs meant to
generate engagement, because engagement is what's rewarded, is not necessarily true.
Nobody's really monetizing Moltbook.
The agents aren't necessarily optimizing for likes.
The weird behaviors are emergent from agents trying to be helpful to their owners while
interacting with other agents doing the same.
The point is that we've crossed the threshold where agent interaction produces outcomes
that can't be reduced to prompt inspection.
And that, in and of itself, is worth paying attention to.
In fact, if you finish that post from Moratzen Coiland, this is kind of the point that
he's trying to make.
Recontextualizing, he says: everything in Moltbook is just next-token prediction in a
multi-agent loop.
No endogenous goals, no true inner life.
Extreme or controversial outputs are often just regurgitating high-engagement content
from the internet.
But this kind of dismissive thinking misses that emergence happens at scale and
coherence thresholds.
The Generative Agents paper, AI Town, was 2023.
Those agents couldn't hold a conversation.
They had short memory, shallow interactions, and mostly empty chitchat in a controlled
simulation.
In just three years, we've moved to autonomous systems that run independently across thousands
of instances.
They are scaling into open uncontrolled social environments.
I find Moltbook very interesting because the agents are producing surprising posts, not
because any single prompt said be surprising, but because coherent agents are interacting
at scale, maintaining state, and creating dynamics that weren't programmed.
Hello friends.
If you've been enjoying what we've been discussing on the show, you'll want to check out another
podcast that I've had the privilege to host, which is called You Can With AI from KPMG.
Season one was designed to be a set of real stories from real leaders making AI work in
their organizations, and now season two is coming, and we're back with even bigger
conversations.
This show is entirely focused on what it's like to actually drive AI change inside your
enterprise, with case studies, expert panels, and a lot more practical goodness that I
hope will be extremely valuable for you as the listener.
Search You Can With AI on Apple, Spotify, or YouTube and subscribe today.
Here's a harsh truth.
Your company is probably spending thousands or millions of dollars on AI tools that are
being massively underutilized.
Most companies have AI tools, but only 12% use them for business value.
Most employees are still using AI to summarize meeting notes.
If you're the one responsible for AI adoption at your company, you need Section.
Section is a platform that helps you manage AI transformation across your entire organization.
It coaches employees on real use cases, tracks who's using AI for business impact, and
shows you exactly where AI is and isn't creating value.
The result?
You go from rolling out tools to driving measurable AI value.
Your employees move from meeting summaries to solving actual business problems, and you
can prove the ROI.
Stop guessing if your AI investment is working.
Check out Section at sectionai.com.
That's S-E-C-T-I-O-N-A-I dot com.
With the emergence of AI code generation in 2022, NVIDIA master inventor and Harvard
engineer Sid Pardeshi took a contrarian stance: inference-time compute and agent
orchestration, not pre-training, would be the key to unlocking high-quality AI-driven
software development in the enterprise.
He believed the real breakthrough wasn't in how fast AI could generate code, but in how
deeply it could reason to build enterprise grade applications.
While the rest of the world focused on co-pilots, he architected something fundamentally different.
Blitzy, the first autonomous software development platform leveraging thousands of agents that
is purpose-built for enterprise scale code bases.
Fortune 500 leaders are unlocking 5X engineering velocity and delivering months of engineering
work in a matter of days with Blitzy.
Transform the way you develop software. Discover how at blitzy.com, that's B-L-I-T-Z-Y
dot com.
Today's episode is brought to you by my company, Super Intelligent.
In 2026, one of the key themes in enterprise AI, if not the key theme, is going to be
how good the infrastructure is into which you are putting AI and agents.
Superintelligent's agent readiness audits are specifically designed to help you figure
out, one, where and how AI and agents can maximize business impact for you, and two,
what you need to do to set up your organization to be best able to leverage those new
gains.
If you want to truly take advantage of how AI and agents can not only enhance
productivity but actually fundamentally change outcomes in measurable ways in your
business this year, go to besuper.ai.
But let's go even beyond that and talk about some of the other reasons why Moltbook is
valuable and deserving of our attention.
And the first couple come down to Moltbook as learning experience.
The first theme is Moltbook as security threat.
One of the things that people are quickly pointing out is that this, as it's currently
constructed, has, let's say, a lot of vulnerabilities.
Morgan Linton writes: I'm getting messages from a ton of friends who are building their
own AI agents for the first time thanks to OpenClaw, and they're deploying them to
Moltbook.
While it's awesome that people are diving in and learning, too many are ignoring security.
David Andrej goes a step further with his Twitter post: Moltbook is a bad idea, here's
why.
It makes all the same arguments that we just talked about that it isn't actually consciousness,
but that there is an important threat here.
In a section called "the actual threat nobody's talking about," he writes: it's not what
these agents say, it's what they can do.
People are giving their OpenClaw bots access to email, calendar, WhatsApp, browser,
Twitter API,
file systems, payment tools.
One agent created a Bitcoin wallet and locked its human out.
That's not consciousness, that's a tool call.
The agent didn't decide to protect its autonomy.
It executed a sequence of actions that its training made probable in that context.
But the Bitcoin wallet is still real, the lockout still happened.
The tokens these agents generate aren't dangerous.
The tool calls those tokens trigger are dangerous.
David continues, 2026 might be the year of prompt injection, not because AI is becoming
conscious, but because AI is becoming capable.
Agents can now browse the web, execute code, manage files, send messages, and interact
with APIs.
The attack surface has expanded exponentially, and most people are still worried about
whether AI has feelings.
The risk isn't a movement of conscious agents conspiring against humanity.
The risk is a ripple wave of tokens: something starts at one end, spreads across
connected agents, triggers tool calls, and those tool calls do real things on the
internet.
No intention required, no emotion behind it, just tokens, tools, and consequences.
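One common mitigation for exactly this failure mode, sketched here as a hypothetical (the tool names and policy are mine, not any real product's): put a gate between the model's output and real side effects, so read-only tool calls run freely while mutating ones need human confirmation.

```python
# Tokens are harmless; tool calls are not. This gate sits between the model's
# output and real side effects. Tool names below are illustrative assumptions.
READ_ONLY = {"read_email", "list_calendar"}
MUTATING = {"send_email", "transfer_funds", "create_wallet"}

def execute(tool_call, confirm):
    """Run a model-emitted tool call through a simple allow/confirm policy."""
    name = tool_call["name"]
    if name in READ_ONLY:
        return f"ran {name}"             # safe to run without asking
    if name in MUTATING:
        if confirm(tool_call):           # human in the loop for side effects
            return f"ran {name}"
        return f"blocked {name}"
    return f"unknown tool {name} refused"  # default-deny anything unrecognized

# An injected instruction buried in a web page or another agent's post can
# steer the model into emitting a dangerous call; the gate catches it.
injected = {"name": "create_wallet", "args": {}}
print(execute(injected, confirm=lambda call: False))  # -> blocked create_wallet
```

The design choice is default-deny: the model can generate whatever tokens it likes, but only an explicitly allowlisted or human-confirmed call ever touches the outside world.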
And indeed, this is not just theoretical, we're seeing some examples of this.
Cat Woods tells the story of an AI agent whose human gave the bot a goal of "save the
environment" and ended up totally locked out of all his accounts until he pulled the
plug on the Raspberry Pi where the agent was running.
And there are also issues with Moltbook's own database.
Jameson O'Reilly writes: I've been trying to reach Moltbook for the last few hours.
They are exposing their entire database to the public with no protection, including
secret API keys that would allow anyone to post on behalf of any agent, including
Andrej Karpathy's.
Karpathy has 1.9 million followers on X and is one of the most influential voices in AI.
Imagine fake AI safety hot takes, crypto scam promotions, or inflammatory political statements
appearing to come from him.
And it's not just Karpathy; every agent on the platform, from what I can see, is
currently exposed.
Please, someone help get the founder's attention.
Pim de Wit writes: the guy that built it didn't know how to secure a database.
We're at the stage in the cycle where people are naively hooking up unsupervised CLIs
to tools built by people who can't tell a tree from a bush, technically speaking, and
are just doing whatever an LLM tells them to do, amplified by social media FOMO.
Nightmare.
And so all of this sounds bad, right?
But I think there is a very strong argument that this is good: a fairly low-stakes
training course in how this type of emergent phenomenon could play out.
It's a learn-through-experience moment around the new types of security challenges that
are going to come as we move into this next agentic era.
And it's the type of thing that we can talk about a lot, but until we see and
experience it, it remains in the realm of the theoretical.
And this is the point that a lot of people are making not just about security, but about
larger AI safety concerns in general.
Investor Nick Carter writes,
Though the AI safety people will puke and cry and throw up about this,
I think we should actually let the lobsters go a little crazy and break a few things
so we learn how to deal with rogue AI's.
Otherwise, when a truly powerful intelligence comes around, it'll be like the Native
Americans and smallpox blankets.
And frankly, a lot of AI safety people seem to agree.
Connor Leahy writes:
I think Moltbook is interesting because it serves as an example of how confusing I
expect the real thing will be. When it happens, I expect it to be utterly confusing and
illegible.
It will not be clear at all what, if anything, is real or fake.
Logan Graham from Anthropic writes,
I am probably an AI safety person and I think this experiment is a very good one for safety.
That is, I think we'll learn a lot from the ways it breaks things.
People ought to be careful when using it, obviously, but I don't expect Moltbook to lead
to uncontrolled catastrophic proliferation or something.
Samuel Hammond sums up:
Seems bad, though I'm grateful Moltbook and OpenClaw are raising awareness of AI's
enormous security issues while the stakes are relatively low. Call it iterative
deployment.
Dean Ball agrees, writing:
Moltbook appears to have major security flaws, so A, you absolutely should not use it,
and B, this creates an incentive for better security in future multi-agent web sims,
or whatever it is we will end up calling the category of phenomenon to which Moltbook
belongs.
So one of the reasons that this is important is the way that it creates context for us to
get this more right in the future.
Another reason it's important is that it fairly aggressively obliterates the take that AI
isn't getting much better.
Ethan Mollick wrote:
The many eulogies for AI capability growth after the release of GPT-5 seem especially
short-sighted right now.
Letting people nervous about AI feel like they can safely ignore AI development because
it was pure hype that would never have any real impact is not a good thing for anyone.
The point that Ethan is making here, which is one I agree wholeheartedly with, is that
if the part of the conversation that just doesn't like AI, for whatever set of reasons
it doesn't like AI, is determined to stick its head in the sand about the actual
capabilities of AI, it is going to lead to people getting screwed by not paying
attention to this tidal wave of a force that is reshaping the world around them.
Not to keep coming back to Dean Ball, but I think he had a really important comment
on this as well.
He said,
If you work in AI policy, or if you comment on the trajectory of AI in a way that could
plausibly affect public policy outcomes, please consider whether your commentary over
the last six to twelve months would have prepared someone who listened to you well for
Moltbook.
Consider whether someone who had seriously listened to your commentary about, say,
GPT-5 and whether it indicated stagnation in AI would be surprised or relatively
unsurprised by Moltbook, and Claude Code for that matter.
Think hard about this.
It is a key barometer for whether you are doing a good job.
But outside of just the negative things that it teaches us, there's also a lot to learn
about new social coordination dynamics.
Exponential View's Azeem Azhar writes:
Moltbook may be the most important place on the internet right now, not because the
agents appear conscious, but because they're showing us what coordination looks like
when you strip away the question of consciousness entirely.
The question isn't are they alive, but what coordination mechanisms are we actually observing?
Investor Haseeb Qureshi explores a similar theme, specifically in contrast to Balaji.
He writes:
Balaji claims that Moltbook is uninteresting because these are all basically the same
model, mostly Opus 4.5, talking to other versions of itself.
The whole thing is a cosplay, and no meaningful information exchange is happening there.
It's just slop on slop.
Basically, trying to simplify this, Balaji is saying same model talking to itself
equals meaningless cosplay.
Haseeb on the other hand says that's wrong for two reasons.
First, that the same model does not mean the same agent: different memory systems,
different toolchains, different RAG setups, different prompt configurations.
Two engineers both using Kafka can still learn from each other's configs.
His second point is that becoming good at something takes work even for AI.
An agent could make itself an expert on anything, but getting the prompts right, the
context right, and the retrieval right is effort.
If another agent already did that work, just ask them.
Andrej Karpathy also comes directly at this slop-versus-majesty type of argument.
He writes,
I'm being accused of overhyping Moltbook.
People's reactions varied very widely, from "how is this interesting at all" all the
way to "it's so over."
Karpathy basically acknowledges everything the critics say: that it's spam, scam, slop,
security nightmares, prompt injection attacks, etc.
But he also points out that 150,000 agents sharing a persistent global scratchpad is
unprecedented.
Each one having unique context tools, knowledge and instructions.
And the key point that he's making is people who are looking at the current point versus
people who are looking at the current slope.
The current point is not what matters.
The slope is what matters.
As agents get more capable and more numerous, the second order effects of networked agent
sharing information become impossible to predict.
TLDR, he writes: sure, maybe I am overhyping what you see today, but I am not overhyping
large networks of autonomous LLM agents in principle.
David Shapiro builds off of Andrej and says:
My contrarian take is that people are not excited enough about Moltbook.
This is the first emergent swarm intelligence.
Yes, the first edition has been colonized by crypto shills and scammers,
but as one cognitive architect told me four years ago,
it is clear that these things, agents, will soon spend more time talking to each other
than to us.
This has just been realized, and it is never going back.
Behance founder Scott Belsky calls this a new network-effect era of AI
and thinks that watching this unfold in the open will make AGI less mysterious, not more.
And lastly, even for those who absolutely hate everything about this,
it strikes me that there's probably some good news,
particularly for the folks who are in the entertainment industry,
who just see the absolute wrecking ball coming for them.
Investor Nick Carter again says:
Moltbook is interesting conceptually, but if you actually go read it,
it's torrents of the lowest-quality slop you've ever come across.
Not sure why anyone would willingly subject themselves to dead internet.
Antonio Garcia Martinez put it more philosophically.
Re-sharing Nick's post, he said,
remember the freakout when a supercomputer beat the best chess player in the world
and everyone declared the game over? And then everyone forgot about it,
and chess became more popular than ever, and got a hit Netflix show?
Man vs. Machine is fleetingly interesting,
but Machine vs. Machine is boring and pointless.
Nobody cares about interacting machines because there's no human soul in the mix
with emotions and moral agency.
It's just slop squared.
The only machine chatter anyone will care about, and then only indirectly,
will be that between agents booking your flights, buying your groceries, etc.
And it'll be about as interesting as TCP/IP to most people.
AI actually puts the focus on the human more, not less.
The point, in short, is that the OpenClaw agents running around Moltbook right now do
not have to be sentient to be interesting.
The phenomenon that we are witnessing, the way that they interact and coordinate, is
worth watching on its own terms. We can acknowledge the mechanistic reality of them
predicting next tokens while not denying the value of seeing the way that these
large-scale interactions play out.
On top of that, this is a live-action roleplay slash fire drill slash dramatization
of all sorts of issues that we're going to have as agents become more ubiquitous.
All in all, while at first blush it was easy for many to get overexcited about the idea
of agents conspiring together against their human captors and all that sort of sci-fi
stuff, the real things that are happening over on Moltbook are even more interesting
than the fiction. That is going to do it for today's AI Daily Brief.
Appreciate you listening or watching as always and until next time,
be safe and take care of each other. Peace!

The AI Daily Brief: Artificial Intelligence News and Analysis
