
Two stories were too big to squeeze into the headlines, so this episode goes deep on both. First, the surprise merger of xAI and SpaceX and what Elon Musk’s vision of orbital data centers says about the future of AI compute, capital intensity, and sci-fi-scale ambition. Then, a close look at OpenAI’s new Codex desktop app and why it signals a real shift from models competing on raw capability to products competing on how humans actually orchestrate agents at scale.
Brought to you by:
KPMG – Discover how AI is transforming possibility into reality. Tune into the new KPMG 'You Can with AI' podcast and unlock insights that will inform smarter decisions inside your enterprise. Listen now and start shaping your future with every episode. https://www.kpmg.us/AIpodcasts
Rackspace Technology - Build, test and scale intelligent workloads faster with Rackspace AI Launchpad - http://rackspace.com/ailaunchpad
Zencoder - From vibe coding to AI-first engineering - http://zencoder.ai/zenflow
Optimizely Opal - The agent orchestration platform built for marketers - https://www.optimizely.com/theaidailybrief
AssemblyAI - The best way to build Voice AI apps - https://www.assemblyai.com/brief
Section - Build an AI workforce at scale - https://www.sectionai.com/
LandfallIP - AI to Navigate the Patent Process - https://landfallip.com/
Robots & Pencils - Cloud-native AI solutions that power results https://robotsandpencils.com/
The Agent Readiness Audit from Superintelligent - Go to https://besuper.ai/ to request your company's agent readiness score.
The AI Daily Brief helps you understand the most important news and discussions in AI. Subscribe to the podcast version of The AI Daily Brief wherever you listen: https://pod.link/1680633614
Interested in sponsoring the show? [email protected]
Today on the AI Daily Brief, not one but two main episodes crammed into one.
The first story is that xAI is being acquired by SpaceX, with the second main being all
about OpenAI's new Codex app and the shifting paradigm of how we work with agents.
The AI Daily Brief is a daily podcast and video about the most important news and discussions
in AI.
Alright, friends, quick announcements before we dive in.
First of all, thank you to today's sponsors, AssemblyAI, Robots & Pencils, Blitzy and
Superintelligent.
To get an ad-free version of the show, go to patreon.com/AIDailyBrief, or you
can subscribe directly on Apple Podcasts.
To learn about sponsoring the show, or pretty much anything else regarding the show, you
can go to AIDailyBrief.ai, that's my vibe-coded command center where I keep everything with
frequent changes, frankly.
Right now, the thing that I would like to point you to, which I have been pointing you to
all week, is the January AI Pulse Survey.
I'm going to leave it open for probably just a couple more days.
I basically want to give people better access to actual data around what people are doing
in aggregate.
It's things like which AI models are being used most, then for what use cases.
To give you a little preview right now: one model is cleaning up in the most-used-model
category.
In any case, anyone who fills this out will be thanked with access to the results a week
before I share it with anyone else.
Mostly though, you're just contributing to better overall knowledge of what's happening
in the world of AI.
Now, the last thing that I want to point out today is that we have the rarest of AI
Daily Brief episodes, which is the double main.
Simply put, there were two stories that were too big to shove into the headlines,
and I'm pretty convinced that we have some more big stories coming down the pipeline
later this week, so I didn't want to wait on them either.
The first is about xAI and SpaceX merging.
The second is about the Codex app, and with that said, let's dive in.
When I was doing my 2026 predictions, one of the things that we were looking at was
the competitive race between the big foundation labs.
In a lot of ways, it was kind of easy to see where all the different players were relative
to one another coming into this year: Anthropic ahead on coding, and starting to
use that to get ahead on enterprise; OpenAI still having the best consumer recognition
by far, and seeing pretty steady gains in how consumers were switching, for example, from
Google Search to ChatGPT Search, but at the same time losing out on some of the momentum
on the enterprise and coding side to Anthropic, and on the more general side to Google's
Gemini, which was coming into the year with the wind in its sails.
At the other end of the spectrum, there was Meta, who really needed 2025 to be a rebuilding
year, but at least they had the Ray-Bans.
And the biggest question mark in some ways was xAI and Grok.
How were they going to carve out and differentiate their place in this whole race?
Now, one of the possibilities that we discussed was that they could start to leapfrog
competitors because of the investments that they had put into major infrastructure development
in the form of the Colossus clusters, so that was one thing we were keeping an eye on.
The other thing that we were looking for, though, is whether we were going to see synergistic
alignment, let's say, between Elon Musk's various companies.
There were certainly indications that this was a strategy for xAI, given that it had already
combined with X, slash Twitter.
And now the next stage in that Muskian consolidation has come, as SpaceX and xAI are officially
merging.
The deal values the new entity at $1.25 trillion, with SpaceX valued at a trillion and
xAI at $250 billion.
The previous valuation on xAI was rumored to be $230 billion from their $20 billion round
that closed earlier this month, so this deal is more or less flat from that.
SpaceX wrote in its announcement post that it had acquired xAI to, quote, form the most
ambitious, vertically integrated innovation engine on and off Earth, with AI, rockets,
space-based internet, direct-to-mobile device communications, and the world's foremost
real-time information and free speech platform.
This marks not just the next chapter, but the next book in SpaceX and xAI's mission:
striving to understand the universe and extend the light of consciousness
to the stars.
Musk, for his part, said that the merger was an acceleration of sci-fi technologies that
live at the intersection of AI and space.
In the announcement note he wrote,
My estimate is that within two to three years, the lowest-cost way to generate AI compute
will be in space.
This cost-efficiency alone will enable innovative companies to forge ahead in training their
AI models and processing data at unprecedented speeds and scales, accelerating breakthroughs
in our understanding of physics and invention of technologies to benefit humanity.
Musk said that orbital data centers would unlock further advances, including self-growing
bases on the moon, an entire civilization on Mars, and ultimately expansion to the universe.
He said that in the long term, lunar manufacturing could allow SpaceX to put 500 to 1,000
terawatts of AI satellites into deep space orbit annually.
For reference, current data center capacity on Earth is around 200 gigawatts.
And for Elon, this is more than just talk.
The short-term focus certainly seems to be on getting the first orbital data centers
off the ground.
Earlier in the week, SpaceX filed an FCC application to launch a million AI satellites
into orbit.
Thus far, no one has launched more than a single satellite as a test case, meaning that
it's questionable whether the FCC will approve an orbital data center of this scope.
There are currently around 15,000 active satellites in Earth's orbit, so this network would increase
satellite density by two orders of magnitude.
I'm sure many of you are having visions of WALL-E, where the humans return to Earth to find
the orbit entirely covered in space junk.
At the same time, though, SpaceX has been known to begin negotiations with regulators
at a high level so they have room to compromise.
They were recently approved to double the number of Starlink satellites to 15,000 by 2031,
and their original ask had been 30,000 satellites.
The application itself is a dramatic statement of Musk's ambitions.
It suggests that the orbital data center network would be the, quote, first step towards
becoming a Kardashev level 2 civilization, meaning one that can harness the Sun's full
power.
SpaceX is also arguing that orbital data centers will be cheaper and more environmentally
friendly than Earthbound facilities.
Satellites will be able to vent heat into the depths of space, reducing the need for
active cooling and water use.
They can also harness solar energy at a much greater efficiency due to the lack of atmosphere.
In other words, if future AI technology really does require compute at the hundreds of
terawatt scale, orbital data centers might be the only politically viable solution.
So everyone is just accepting this explanation for the merger on face value, right?
Yeah, not so much.
As always, any news having to do with Elon Musk is as much a Rorschach test about what
people think about Elon Musk as it is an interpretation of the actual news.
And it's not just individuals, but many of the media outlets that think that this isn't
just about orbital data centers.
The Information wrote a piece called "What's Really Driving Musk's SpaceX-xAI Merger."
The piece noted that SpaceX didn't explain how xAI would actually contribute to the
ambition of orbital data centers.
They're obviously a customer, but it's not obvious how an AI company would help bring
that technology to fruition.
Instead, argues The Information's Martin Peers, there's no question the move is financially
motivated.
Musk may be the richest man in the world, but he's facing the same financial realities
the leaders of other AI startups face.
It's very difficult to compete in AI development with deep pocketed tech giants like Google
and Meta, which own cash machines in their advertising businesses.
Now he also noted that the merging of the two startups together won't necessarily solve
issues of profitability.
SpaceX investors said the company generated $15 billion in revenue last year with a profit
of $8 billion.
He noted, however, that this is EBITDA profit, meaning that it excludes taxes and depreciation.
While EBITDA is fairly well accepted as a non-standard accounting measure for tech
companies, it makes sense where depreciation is fairly low.
The reason that some have an issue with it here is that when your main business is rocketry
and satellites, depreciation is likely a much larger factor.
xAI remains a cash incinerator.
Bloomberg reported this month that xAI recorded a net loss of $1.46 billion for the September
quarter on $107 million in revenue.
Revenue did double quarter over quarter, but it will need to double four more times to
catch up with their burn rate.
Reports state that xAI spent $7.8 billion over the first three quarters of last year.
All of that means that the combined entity is valued somewhere north of an 80x revenue
multiple.
That would obviously be on the upper end for public markets, although it certainly isn't
the most extreme valuation in private markets. But you've got to think that now that these
companies are entering the hinterland between private and public, those numbers might start
to make some investors nervous.
At the same time, as Peers points out, the numbers don't really matter.
What matters is the strength of Musk's aura.
So, let's talk about some of the themes in the conversation.
Investor Ross Gerber summed up one frequently shared skeptical take.
He tweeted: X was out of money, merged with xAI.
xAI out of money, merged with SpaceX.
SpaceX out of money, merged with Tesla.
When they are all out of money, dot, dot, dot.
Compound248 asked how SpaceX investors are going to feel about this.
They wrote, an 80/20 split for SpaceX and xAI seems like a horrible deal for SpaceX.
Giving the number-five-ish AI company with limited real revenue, plus Twitter, ownership
of 20% of a truly world-changing rocket company is ridiculously bad for SpaceX shareholders.
SpaceX doesn't need xAI; xAI does need SpaceX.
On the inverse side, there was a lot of memeing about how well Twitter employees have done
in all of this.
Nate McGregor posted the mom-how-did-we-get-so-rich meme,
with the answer: your dad worked at Twitter, which got acquired by X, which got acquired
by xAI, which got acquired by SpaceX.
But what about people's interest in the big technology vision that Elon's presenting?
Steve Howe of Bloomberg tweets, Elon says that within two to three years, the lowest
cost way to generate AI compute will be in space.
What's the Elon to reality multiple on that estimate?
Nate Carter shared the portion of the post where Elon wrote, in the long term, space-based
AI is obviously the only way to scale.
To harness even a millionth of our sun's energy would be over a million times more energy
than our civilization currently uses.
Nick Gattid asked: does anyone believe this nonsense?
On the flip side are the optimists and the accelerationists.
Beff Jezos summed up that optimism in his post: SpaceX can turn fuel into solar energy.
Intelligence turns energy into economic value.
SpaceX and xAI is the ultimate way to turn rocket fuel into unlimited revenue.
Believe it or not, there are even some folks who are trying to navigate outside the Elon
Rorschach test to explore just how possible this is.
Andrew McCallup wrote a blog post called "Economics of Orbital Versus Terrestrial Data Centers."
His summary, in the subtitle: it might not be rational, but it might be physically possible.
It is way beyond the scope of this episode to explain, but I will include a link in the
show notes because it is a really interesting resource for exploring this more.
Now of course for many, all of this really comes down to a planned IPO.
The Kobeissi Letter writes, the biggest IPO in history just got even bigger.
The finance account, meanwhile, explored what it might mean for OpenAI.
They write: did Sam Altman commit the biggest theft in world history?
Elon Musk finally got revenge today by bringing SpaceX into xAI.
All investment liquidity will be redirected from OpenAI into xAI, leaving Sam with nothing.
A 5D chess move to destroy Sam, and Elon made the final blow.
I'm not convinced that that's exactly how it plays out.
Even if investors get xAI as a bonus, the excitement is going to be about the uniqueness of SpaceX as a company.
I think it's far more likely that whoever goes public first between Anthropic and OpenAI
has a bigger impact on the other than a SpaceX IPO has on either of the two pure labs.
Meanwhile, as all this happens, it does feel like xAI is on the verge of some big Grok updates.
Just a couple of days ago, they officially launched Grok Imagine 1.0.
The update brings 10-second video generation at 720p resolution
and, as they put it, dramatically better audio, and a lot of people are impressed.
Last week, on the same day that notifications started going off because Google launched Genie 3,
swyx and Latent Space argued that the bigger story was xAI launching a new state-of-the-art
image and video generation model.
We'll come back and look at that in more detail in the future,
but the point is that Elon's not just playing the financial engineering game.
It's very clear that Grok and xAI are still out to win on their own terms as well.
So how to feel about all of this?
I think on the one hand, it's incredibly easy to be cynical about something that seems
as sci-fi as data centers in space.
But Elon himself is, of course, a hugely polarizing figure right now,
which has certainly not been made less complex by the release of the Epstein files,
and yet I do think that there's a broader question here.
Do we want all of our AI efforts to be about better short-form video and more automated ad units?
Or can we still get ourselves excited about big, crazy, ambitious things
that seem so insane that our first instinct is to plump for ridicule?
I actually think Tom Nash's simple tweet sums it up,
SpaceX and the idea of an orbital data center.
When SpaceX talks about infrastructure, I listen even if it sounds extreme.
Big systems usually start as uncomfortable ideas.
And frankly, this isn't just theoretical.
Remember when Starlink started,
its economics did not make sense at all.
Since then, the cost to launch satellites has come down 20x
and made what many believed was a total pipe dream into something economically viable.
That is not an argument, a priori, that orbital data centers are going to work.
It is simply to remember, to slightly paraphrase Arthur C. Clarke,
that sufficiently advanced technology is indistinguishable
from what appears to be the batshit crazy ravings of an online lunatic.
If you're building anything with voice AI,
you need to know about assembly AI.
They've built the best speech to text and speech understanding models in the industry,
the quiet infrastructure behind products like Granola, Dovetail, Ashby, and Cluely.
Now, as I've said before, voice is one of the most important modalities of AI.
It's the most natural human interface,
and I think it's a key part of where the next wave of innovation is going to happen.
Assembly AI's models lead the field in accuracy and quality
so you can actually trust the data your product is built on.
And their speech understanding models help you go beyond transcription,
uncovering insights, identifying speakers,
and surfacing key moments automatically.
It's developer-first: no contracts,
pay only for what you use,
and it scales effortlessly.
Go to assemblyai.com/brief,
grab $50 in free credits,
and start building your voice AI product today.
Most companies don't struggle with ideas.
They struggle with turning them into real AI systems that deliver value.
Robots & Pencils is a company built to close that gap.
They design and deliver intelligent, cloud-native systems
powered by generative and agentic AI
with focus, speed, and clear outcomes.
Robots & Pencils works in small, high-impact pods:
Engineers, strategists, designers, and applied AI specialists
working together to move from idea to production without unnecessary friction.
Powered by RoboWorks, their agentic acceleration platform,
teams deliver meaningful results including initial launches
in as little as 45 days depending on scope.
If your organization is ready to move faster, reduce complexity,
and turn AI ambition into real results,
Robots & Pencils is built for that moment.
Start the conversation at robotsandpencils.com/aidailybrief.
That's robotsandpencils.com/aidailybrief.
Robots & Pencils: impact at velocity.
You've tried in-IDE copilots.
They're fast, but they only see local silos of your code.
Leverage these tools across a large enterprise code base
and they quickly become less effective.
The fundamental constraint?
Context.
Blitzy solves this with infinite code context.
Understanding your code base down to the line level dependency
across millions of lines of code.
While co-pilots help developers write code faster,
Blitzy orchestrates thousands of agents
that reason across your full code base.
Allow Blitzy to do the heavy lifting,
delivering over 80% of every sprint
autonomously with rigorously validated code.
Blitzy provides a granular list of the remaining work
for humans to complete with their co-pilots.
Tackle feature additions, large-scale refactors,
legacy modernization, greenfield initiatives,
all 5X faster.
See the Blitzy difference at Blitzy.com.
That's B-L-I-T-Z-Y.com.
Welcome back to the AI Daily Brief.
Coming into this year,
Anthropic and Claude have seemed like an unassailable juggernaut
when it comes to coding-related use cases.
It's been this way for quite some time,
going all the way back to Sonnet 3.5.
And yet it's been very clear,
ever since the launch of GPT-5, frankly,
that OpenAI was not going to give up without a fight.
Sam Alman even recently recognized
that part of the reason that 5.2 wasn't very good at writing
is that they just deprioritized that use case
versus everything relating to code.
We've also, for the last two models,
5.1 and 5.2, gotten specific versions
that were optimized for the coding use case.
And now OpenAI is moving the competition away
from simply the model and into the realm of the product.
One of the things that happened
with the launch of Claude Code about a year ago now
is that developer behavior shifted away
from graphical user interfaces and towards the terminal.
Now many people have become terminal-pilled.
They have gotten into the efficiency
of just using the command line to tell Claude Code
what they want and having it happen
without having to mess around with specific interfaces.
There is also, frankly, I think,
a feeling of coolness factor, especially as folks
go from non-technical to semi-technical
or whatever we want to call this in-between stage,
where you're commandeering technical agents
without necessarily being super-technical yourself.
For those folks, myself included,
it feels rad to have the terminal open
and be doing things in it.
But there are also obviously limitations to that,
and a lot of things that only GUIs give you access to.
Well, now OpenAI is making a bet once again
on graphical user interfaces with the release
of the Codex app for macOS.
They write, the Codex app is a powerful new interface
designed to effortlessly manage multiple agents at once,
run work in parallel, and collaborate
with agents over long-running tasks.
And the need they say comes from the natural progression
of how developers are working.
Since we launched Codex in April 2025, they write,
the way developers work with agents
has fundamentally changed.
Models are now capable of handling complex long-running tasks
end-to-end, and developers are now orchestrating
multiple agents across projects,
delegating work, running tasks in parallel,
and trusting agents to take on substantial projects
that can span hours, days, or weeks.
The core challenge has shifted from what agents can do
to how people can direct, supervise, and collaborate
with them at scale.
Existing IDEs and terminal-based tools
are not built to support this way of working.
This new way of building coupled with new model capabilities
demands a different kind of tool, which
is why we're introducing the Codex desktop app,
a command center for agents.
So right away, you can hear that this is not just Codex code
or ChatGPT code.
This is a bet on where the paradigm is heading.
Effectively, OpenAI is saying we've
moved from the autocomplete era to the IDE and command line
era to now the agentic and sub-agent era,
where quote unquote, developers are actually orchestrators
of agents doing a bunch of things all at once.
And with the Codex app they are arguing,
that behavior needs a new interface and a new set of tools.
Pretty much everything that they point to
is what you can do with the Codex app
is for that new approach to building.
Work with multiple agents in parallel;
seamlessly switch between tasks without losing context;
built-in support for worktrees,
so multiple agents can work on the same repo without conflicts.
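For the non-developers listening, the worktrees feature rests on a standard git capability: one repository can have several working directories checked out at once, each on its own branch, so parallel agents don't stomp on each other's files. A minimal sketch with plain git follows; the paths and branch names are illustrative, and nothing here is Codex-specific.

```shell
# Plain git worktrees: one shared repository, multiple simultaneous
# checkouts. Paths and branch names are illustrative, not Codex-specific.
rm -rf /tmp/wt-demo
mkdir -p /tmp/wt-demo/repo && cd /tmp/wt-demo/repo
git init -q
git checkout -q -b main
git -c user.email=demo@example.com -c user.name=demo \
    commit -q --allow-empty -m "initial commit"

# Give each "agent" its own working directory on its own branch;
# they share one object store but can edit files independently.
git worktree add -b agent-1 ../agent-1 >/dev/null
git worktree add -b agent-2 ../agent-2 >/dev/null

git worktree list   # shows the main checkout plus the two agent dirs
```

Each directory can then be handed to a separate agent session; when a task finishes, its branch merges back into main and the checkout is cleaned up with `git worktree remove`.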
Now, similar to Claude Code, the team behind Codex
at OpenAI shared that a lot of the code for the app
had come from Codex itself.
Andrea Brasino writes, today we're
introducing the Codex app, our flagship Codex experience.
Work on multiple things in parallel, extend Codex with skills,
and automate repetitive tasks.
The most exciting part for us has been using the app
to build itself.
Tibo on the Codex team writes, Codex now pretty much
builds itself with the help and supervision
of a great team.
The bottleneck has shifted to being
how fast we can help and supervise the outcome.
Not that you would expect any different,
but the team and leadership at OpenAI
definitely seems excited about this.
Sam Altman tweeted, Codex app is out for Mac.
I'm surprised by how much I love it.
It's a bigger step forward than I imagined.
President Greg Brockman wrote,
I've been a die-hard terminal and Emacs user for many years,
but since using the Codex app,
going back to the terminal has felt like going back in time.
Feels like an agent-native interface for building.
And other people agree. Prominent AI and coding YouTuber
and content creator Theo dropped a 22-minute video
about how much he loved the product,
first caveating all the reasons
why he was inclined to dislike it.
For him, the comparison is clearly not just
to Claude Code, but also to Cursor,
with him even calling it a Cursor killer.
Nick Farina writes,
yeah, the new Codex app is the best UI for AI-assisted coding
that I've used so far.
It's incredibly intuitive and manages to provide a ton
of features that reveal and unfold naturally
as you use the product.
swyx and the team at Latent Space said,
we almost did not give OpenAI the title story today.
xAI technically got acquired by SpaceX,
and after all, it's just, air quotes,
a desktop app UI for the already-existing CLI
and cloud app and VS Code extension,
and it's, quote-unquote, just OpenAI's version of Conductor
and Codex Monitor and Antigravity's inbox.
They pointed out that in December they had predicted
that the integrated development environment would die,
and here we are in 2026, right?
And OpenAI, which once offered $3 billion for Windsurf,
is out here shipping a coding agent UX
that is not a VS Code fork.
It bears some thought just how far coding models
have come that serious coding apps are shipping
without an IDE.
There was a time when an app that let you write
English and build without looking at code
was equivalent to vibe coding.
But these non-technical audiences are not the ICP for Codex.
This is very seriously marketed at developers
who historically love code and identify
strongly with handwriting every line of code.
Now, OpenAI is saying, looking at code is kind of optional.
They also say that Codex's reliance
on multitasking and worktrees is, quote, in hindsight,
the perfect natural UI response
to the increase in agent autonomy.
The team at Every did one of their classic vibe checks
and basically found Codex was good.
In perhaps understated fashion, their headline says,
OpenAI's Codex app gains ground on Claude Code.
But if you dig in more, there's definitely
a lot of behavior shift here.
Every founder Dan Shipper writes,
previously I was using Claude Code 80% of the time
and Codex 20% of the time.
Over the last few weeks in the app,
that percentage has become 50-50.
For large production apps, Codex is slower
but smarter and more reliable than Claude Code.
Opus 4.5 is still my daily driver for the rest of my work
and for programming tasks that require taste, empathy,
and speed, but the reversal is significant.
Indeed, most of the team at Every gave it a green,
or psyched-about-this-release, rating.
And in fact, it seems like the only reason
that they didn't mark it as more significant
is that they echoed what swyx and the team at Latent Space
said: that this is built for hardcore engineering.
And honestly, for the first time in a very long time,
you're starting to see some chinks in Anthropic's armor.
On Sunday, Yuchen Jin wrote:
the creator of Clawdbot slash Moltbot slash OpenClaw,
Peter Steinberger, pushes 144 commits per day on average.
Pre-AI, this was impossible.
He ships codes he never reads.
He's a conductor.
GPT and Claude are his orchestra,
five to 10 AI agents run in parallel under his command.
One person is now an army.
Peter bit back kind of harshly:
I don't let Claude code on my codebase.
It's all Codex; it would be too buggy with Opus.
Now, for Peter, this is not just a brand new opinion.
This is something he's thought for a while,
but more people are taking notice of it
in part because of the success of OpenClaw.
Daniel Hassan points out that even with just the early reactions,
it does seem like interface makes a difference.
It's interesting, he writes, how the Codex app
has increased people's usage of Codex.
The same model exposed under a different interface
can make it so much more useful.
Now, a couple more features that I think are worth noting.
First of all, they have something called automations,
which is basically a cron job or a scheduled task.
At OpenAI, they say we've been using automations
to handle the repetitive but important tasks
like daily issue triage, finding and summarizing CI failures,
generating daily release briefs, checking for bugs and more.
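An automation, as described, is just scheduling: the same shape as a Unix cron job. As a generic sketch of what a scheduled agent task reduces to on plain Unix, here is a stand-in script plus the equivalent crontab line; the script path, its contents, and the schedule are all hypothetical, and this is not Codex's actual configuration format.

```shell
# A scheduled task in its most generic Unix form: a script plus a cron line.
# Everything here is a hypothetical stand-in, not Codex's real config.
cat > /tmp/daily-triage.sh <<'EOF'
#!/bin/sh
# Stand-in for the kind of job an agent might run on a schedule,
# e.g. summarizing CI failures or triaging new issues.
echo "triage run at $(date -u +%Y-%m-%dT%H:%M:%SZ)"
EOF
chmod +x /tmp/daily-triage.sh

# Equivalent crontab entry: minute hour day-of-month month day-of-week,
# followed by the command (here: 09:00 every weekday).
echo '0 9 * * 1-5 /tmp/daily-triage.sh' > /tmp/demo-crontab

/tmp/daily-triage.sh   # run once by hand to sanity-check the script
```

The point of the sketch is only that "automations" layer a schedule on top of an agent task, the way cron layers a schedule on top of a shell command.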
There's also a big emphasis on skills,
and you begin to see that OpenAI is clearly not just
thinking about code generation,
but also the way that code generation
leads to everything else.
Or as someone recently put it, code AGI is functional AGI.
In any case, the blog announcement writes, Codex
is evolving from an agent that writes code into one
that uses code to get work done on your computer.
With skills, you can easily extend Codex
beyond code generation to tasks that require gathering
and synthesizing information, problem solving, writing
and more.
The anonymous I rule the world account on Twitter writes,
the pattern keeps repeating.
Cloud code dropped and everyone filed it
under DevTools for months.
Then people realized it was a general purpose computer agent
wearing a coding hat.
Codex just launched a proper desktop app, same energy.
It's not a coding tool, it's an agent command center.
Parallel threads across projects, browser automation,
skills library, 30 minute autonomous runs.
The coding angle is just the entry point.
If you're waiting for someone to explicitly announce,
this is the everything agent, you're going to keep being
late.
And when you go back and read the actual words that OpenAI
is saying with that in mind, there's certainly
a lot of evidence for that.
The first sentence of the introduction of the Codex app
calls it a powerful new interface designed
to effortlessly manage multiple agents at once,
run work in parallel, and collaborate
with agents over long running tasks.
The word code is implied, but not present in any part of that.
Many people think they're likely to see all of these
features shift over to everything very soon.
Cresinco writes, the features will easily
be copied to all other agent managers in weeks.
Gavin Purcell writes, curious how all the vibe code startups
are doing with the Cloud Code explosion
and now a very user-friendly codex app.
I know some people are cursor hard cores,
but overall, the big seem to be eating the entire space.
One thing that's for sure, whatever model people are using,
is that software engineers are settling
into a very different reality.
Also, yesterday, Sam Altman tweeted,
I'm very excited about AI, but to go off script for a minute,
I built an app with Codex last week.
It was very fun, then I started asking it
for ideas for new features, and at least a couple of them
were better than I was thinking of.
I felt a little useless and it was sad.
I'm sure we will figure out much better
and more interesting ways to spend our time
and amazing new ways to be useful to each other,
but I'm feeling nostalgic for the present.
Signal reposted that, and Nick St. Pierre echoed something similar.
Big identity crisis in many engineering circles right now,
he writes, people who've historically considered themselves
builders, now realizing they aren't the ones
building anything anymore, AI is.
The moral superiority of I build things
you just talk mentality is irrelevant now
that the coding language is English
and anyone can build things by talking.
The skills that made them so economically valuable
are almost fully commoditized,
and they're being forced to adopt a new identity.
An identity most of them despise
and have mocked their entire careers.
To remain relevant, they must become the idea guy.
Signal meanwhile has put this consideration
on an even larger scale,
re-sharing that post from Sam, they write,
AI is going to sever the deepest identity loop in the West,
i.e., who you are is roughly equal to what you do for money.
For centuries, trade was dignity,
meaning social rank and even morality.
Capitalism welded cognition to occupation.
What do you do became shorthand for what are you worth?
Now machines are almost done encroaching on this
almost sacred layer for almost all white-collar work.
The next few years will be like watching God
being forced to retire in real time.
Little bit heady for this episode,
but interesting that this is the conversation.
I think the big takeaways right now
are, first, that OpenAI is making a bet
on a different type of interface being necessary
for a parallel agent-swarm type of future.
And second, for the first time in a long time,
they're actually winning back some conversational equity
and mindshare from Anthropic
when it comes to this set of use cases.
Alas, that may be very short lived.
All signs continue to point to Claude Sonnet 5 coming very soon.
In fact, perhaps even before you listen to this show,
in between when I finish recording and when it goes live,
so we might be once again in a whole different paradigm
of capabilities in just a few short hours.
For now, that is going to do it for this episode
of the AI Daily Brief.
Until next time, be safe and take care of each other.
Peace.

The AI Daily Brief: Artificial Intelligence News and Analysis
