0:00
Today on the AI Daily Brief, the skills we need to develop for the code AGI era.
0:05
The AI Daily Brief is a daily podcast and video about the most important news and discussions in AI.
0:15
First of all, today's episode is brought to you by Zencoder, Robots and Pencils, and Super Intelligent.
0:20
To get an ad-free version of the show, go to patreon.com/AIDailyBrief, or you can subscribe
0:25
If you are interested in sponsoring the show, you can get all of that information at aidailybrief.ai,
0:29
and of course, while you are at aidailybrief.ai, you can find out all about the other things
0:33
we have going on, including our new Operators community and the New Year's AI Resolution Program.
0:39
We've got some big announcements coming soon about that, and you can get all of that information
0:43
from aidailybrief.ai.
0:46
Now with that out of the way, let's dive in.
0:48
Today we are talking about the skills necessary for the new code AGI era.
0:54
Now if you've been following along, you'll know that my sense is that we have made a fundamental
0:57
shift recently, that the combination of the set of models that were released at the
1:01
end of last year, Gemini 3, GPT 5.2, and especially Opus 4.5, in combination with tools like
1:06
Claude Code and the vibe coding platforms like Replit and Lovable, have put us into a fundamentally
1:11
new place when it comes to AI.
1:13
Someone who's been thinking about this a lot is Nathan Lambert.
1:15
A couple of weeks ago, he wrote an essay called Claude Code Hits Different.
1:20
He writes having used coding agents extensively for the past six to nine months, there was
1:24
some meaningful jump over the last few weeks.
1:27
He points to a tweet from Sergei Karyev that in his estimation captured the shift.
1:32
Sergei tweeted, Claude Code with Opus 4.5 is a watershed moment, moving software creation
1:36
from an artisanal craftsman activity to a true industrial process.
1:40
It's the Gutenberg Press, the sewing machine, the photo camera.
1:44
Nathan for his part writes, the joy and excitement I feel when using this latest model in Claude
1:48
Code is so simple that it necessitates writing about it.
1:52
It feels right in line with trying ChatGPT for the first time, or realizing o3 could
1:56
find any information I was looking for, but in an entirely new direction.
2:00
This time, it is the commodification of building: I type, and outputs are constructed directly.
2:05
The fact that Claude Code makes people want to go back to it is going to create new ways
2:09
of working with these models, and software engineering is going to look very different by the end of 2026.
2:14
Right now, Claude and other models can replicate the most used software fairly easily. We're
2:18
in a weird spot where I'd guess they can add features to fairly complex applications
2:21
like Slack, but there are a lot of hoops to jump through in landing the feature.
2:25
The models are way easier to use when building from scratch than in production code bases.
2:29
This dynamic amplifies the transition and power shift of software, where countless people
2:32
who have never fully built something with code before can get more value out of it.
2:36
It will rebalance the software and tech industry to favor small organizations and startups,
2:40
like, Nathan says, his startup Interconnects, that have flexibility and can build from scratch
2:44
in new repositories designed for AI agents.
2:47
It's an era to be defined first by bespoke software, rather than a handful of mega-products
2:51
used across the world.
2:53
The list of what's commoditized is growing in scope and complexity fast.
2:57
Website front-ends, many applications on any platform, data analysis tools, all without
3:01
having to know how to write code.
3:03
I expect mental barriers people have about Claude's ability to handle complex code bases
3:07
to come crashing down throughout the year, as more and more Claude-pilled engineers just
3:11
tell their friends skill issue.
3:13
There are things Claude can't do well and will take longer to solve, but these are more
3:16
like corner cases, and for most people, immense value can be built around these blockers.
3:21
So that was his initial essay. However, he's gone back to the well to get at what I think
3:25
is an even more important question with his most recent essay, which he called Get Good at Agents.
3:31
Earlier this week, I did a presentation for one of the world's largest asset managers.
3:34
It's a company that has tens of thousands of employees, tens of billions of revenue,
3:38
and trillions in assets under management.
3:40
I called the presentation AGI Incorporated, and the theme of it was trying to articulate
3:45
and ground this change that Nathan was writing about and that we've all been experiencing.
3:49
The question that the leadership in the room had was what are the necessary skills for this new era.
3:55
How much is it technical and how much is it something else?
3:58
So what we're going to do with the rest of this episode is read Nathan's latest essay Get
4:02
Good at Agents and talk about the skills shift that I feel is coming right now.
4:07
Nathan is recognizing I think something that many people are feeling, which is that without
4:11
anyone asking, many of us are finding ourselves naturally trying to adapt to the capabilities
4:16
of agents rather than trying to adapt them to ourselves.
4:20
In his essay called Get Good at Agents, Nathan writes, two weeks ago, I wrote a review of
4:24
how Claude Code is taking the AI world by storm, saying that software engineering is going
4:28
to look very different by the end of 2026.
4:31
That article captured the power of Claude as a tool and a product, but it undersold the
4:34
changes that are coming in how we use these products in careers that interface with software.
4:38
The more personal angle was how I'd rather do my work if it fits the Claude form factor,
4:43
and soon I'll modify my approaches so that Claude will be able to help.
4:46
Since writing that, I'm struck with a growing sense that taking my approach to work from
4:49
the last few years and applying it to working with agents is fundamentally wrong.
4:53
Today's habits in the age of agents would limit the uplift I'd get by micromanaging them
4:57
too much, tiring myself out, and setting the agents on too small of tasks.
5:01
What would be better is more open-ended, more ambitious, and more asynchronous.
5:05
I don't know yet what to prescribe myself, but I know the direction to go, and I know
5:09
that searching is my job.
5:11
It seems like the direction will involve working less, spending more time cultivating
5:14
peace so the brain can do its best directing, and letting the agents do most of the hard work.
5:20
Since trying Claude Code with Opus 4.5, my work life has shifted closer to trying to adapt
5:24
to a new way of working with agents.
5:26
This new style of work feels like a larger shift than the era of learning to work with
5:30
chat-based AI assistants.
5:32
ChatGPT let me instantly get relevant information or a potential solution to the problems I was facing.
5:37
Claude Code has me considering what I should work on now that I know I can have AI independently
5:42
solve or implement many subcomponents.
5:44
Every engineer needs to learn how to design systems.
5:47
Every researcher needs to learn how to run a lab.
5:50
Agents push the humans up the org chart.
5:52
I feel like I have an advantage by being early to this wave, but no longer feel like just
5:55
working hard will be a lasting edge.
5:58
When I can have multiple agents working productively in parallel on my projects, my role is shifting
6:02
more to pointing the army rather than using the power tool.
6:06
Pointing the agents more effectively is far more useful than me spending a few more hours
6:09
grinding on a problem.
6:11
The feeling that I can't shake is a deep urgency to move my agents from working on toy software
6:15
to doing meaningful long-term tasks.
6:17
We know Claude can do hours, days, or weeks of work for us, but how do we stack these
6:21
bricks into coherent long-term projects?
6:23
This is the crucial skill for the next era of work.
6:27
There are no hints or guides on working with agents at the frontier.
6:30
The only way is to play with them. Instead of using them for cleanup, give them one
6:33
of your hardest tasks and see what it gets stuck on.
6:36
See what you can use it for.
6:38
Software is becoming free.
6:39
The decision-making in research, design, and product has never been so valuable.
6:44
Being good at using AI today is a better moat than working hard.
6:49
In Nathan's essay, we can clearly see him grappling with his own shift in how he works and
6:54
the new skill sets that feel proportionally more valuable.
6:57
But I wanted to expand this and make it more generalizable.
7:00
I think many of us, in fact basically everyone who's fully taking advantage of these tools,
7:05
is going to have to check ourselves against this new set of skills that's required, and
7:09
so what are the actual skills?
7:11
This is probably overly reductive, but let's break them into two categories.
7:15
The agent manager and the enterprise operator.
7:19
The agent manager is all about knowing how to work with agents effectively.
7:22
The enterprise operator is about knowing what to work on and why.
7:25
The superpower, of course, is going to be for people who have both of these.
7:29
Let's talk first about the side that Nathan was exploring, the agent manager.
7:33
The goal, of course, is to direct agents for maximum output.
7:36
Now in many ways, software engineers are ahead of the curve on thinking about this shift,
7:40
moving from executor to director, from wielding the tool to pointing the army.
7:45
It's more about systems, about defining the parameters, about getting leverage via direction.
7:49
Specifically, some of the skills, many of which showed up in Nathan's piece,
7:53
include systems design thinking,
7:55
i.e. thinking about how to architect coherent wholes,
7:58
rather than simply implementing individual components, task scoping,
8:02
and specifically ambitious task scoping,
8:04
how to give agents meaningful end-to-end work, not just small cleanup tasks.
8:13
If you're using AI to code, ask yourself,
8:16
are you building software or are you just playing prompt roulette?
8:19
We know that unstructured prompting works at first,
8:22
but eventually it leads to AI slop and technical debt.
8:25
Enter Zenflow. Zenflow takes you from vibe coding to AI first engineering.
8:30
It's the first AI orchestration layer that brings discipline to the chaos.
8:33
It transforms freeform prompting into spec-driven workflows and multi-agent verification,
8:38
where agents actually cross-check each other to prevent drift.
8:41
You can even command a fleet of parallel agents to implement features and fix bugs simultaneously.
8:46
We've seen teams accelerate delivery 2x to 10x.
8:50
Stop gambling with prompts.
8:51
Start orchestrating your AI.
8:53
Turn raw speed into reliable production grade output at zenflow.free.
9:00
Today's episode is brought to you by Robots and Pencils, a company that is growing fast.
9:05
Their work as a high-growth AWS and Databricks partner means that they're looking for elite talent
9:09
ready to create real impact at velocity.
9:12
Their teams are made up of AI native engineers, strategists and designers who love solving hard
9:17
problems and pushing how AI shows up in real products.
9:20
They move quickly using Robot Works, their agentic acceleration platform,
9:24
so teams can deliver meaningful outcomes in weeks, not months.
9:27
They don't build big teams, they build high-impact nimble ones.
9:30
The people there are wicked smart with patents, published research,
9:34
and work that's helped shape entire categories.
9:36
They work in velocity pods and studios that stay focused and move with intent.
9:40
If you're ready for career-defining work with peers who challenge you and have your back,
9:44
Robots and Pencils is the place.
9:46
Explore open roles at robotsandpencils.com/careers.
9:49
That's robotsandpencils.com/careers.
9:53
Today's episode is brought to you by Super Intelligent.
9:56
Super Intelligent is a platform that very simply put is all about helping your company figure out
10:00
how to use AI better. We deploy voice agents to interview people across your company,
10:04
combine that with proprietary intelligence about what's working for other companies,
10:07
and give you a set of recommendations around use cases, change management initiatives,
10:11
that add up to an AI roadmap that can help you get value out of AI for your company.
10:15
But now we want to empower the folks inside your team who are responsible for that transformation
10:19
with an even more direct platform.
10:21
Our forthcoming AI Strategy Compass tool is ready to start being tested.
10:25
This is a power tool for anyone who is responsible for AI adoption or AI transformation
10:30
inside their companies. It's going to allow you to do a lot of the things that we do at
10:33
Super Intelligent, but in a much more automated, self-managed way, and with a totally different
10:38
cost structure. If you're interested in checking it out, go to aidailybrief.ai/compass,
10:42
fill out the form and we will be in touch soon.
10:49
We haven't done a full show on it, but if you've been hearing about Ralph Wiggum as an AI
10:54
strategy, it's kind of all about this. It's about breaking a big task into a bunch of small
10:58
tasks in a way that agents can work for much longer when you're not there. And indeed,
11:02
that gets into some of these other key skills, long-horizon projects where you stack short-term
11:06
outputs into coherent, durable long-term projects, and asynchronous work management,
11:11
where you figure out how to orchestrate work that runs in the background without real-time monitoring.
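To make that pattern concrete, here is a minimal sketch in Python. This is purely illustrative and not from the episode or any real tool: `run_agent` is a hypothetical stub standing in for a call to an actual coding agent, and the task list is invented. The point is the shape of the loop, one small, tightly scoped task at a time, so work can keep progressing without anyone watching.

```python
# Minimal sketch of the "break a big task into small, restartable tasks" pattern.
# run_agent is a hypothetical stand-in for invoking a real coding agent.

from collections import deque

def run_agent(task: str) -> str:
    # Placeholder: a real implementation would hand this single, tightly
    # scoped task to a coding agent and return its result.
    return f"done: {task}"

def run_unattended(tasks: list[str]) -> list[str]:
    """Work through a queue of small tasks one at a time, recording each
    result, so the loop keeps making progress with no human present."""
    queue = deque(tasks)
    log = []
    while queue:
        task = queue.popleft()  # take the next small, self-contained task
        log.append(run_agent(task))
    return log

results = run_unattended([
    "write failing test for parser bug",
    "fix parser to pass test",
    "update changelog",
])
print(results)
```

The design choice the pattern relies on is that each task is small enough to succeed or fail independently, so a stall on one task doesn't sink the whole long-horizon project.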
11:15
One of the sentiments that you'll hear right now, which I personally feel kind of acutely,
11:19
is a particular type of anxiety of not having deployed agents to work on something in the
11:23
background while you were doing some other type of work. I just finished this presentation I
11:28
mentioned before, and if I had done a little bit more pre-work, I could have had agents building
11:32
something while I was talking to this group of leaders. There are also some other skills.
11:36
Prompt architecture is kind of a part of that task scoping and async work management,
11:40
validating output at scale without having to review every line manually is going to be a whole new
11:44
field and discipline. And of course, there's multi-model orchestration, where you need to know which AI
11:48
tool or model to deploy for specific types of tasks, but I think that really the big ones are
11:52
about async work management and systems design thinking so that you can effectively deploy not
11:56
an agent but an army of agents. This is, however, only half of the skills for the code AGI era.
12:03
The other we'll call the enterprise operator, and of course this doesn't have to mean large
12:07
enterprises, but it's about the business side. When the group asked me if I thought that the skills
12:12
for this new era were primarily technical or about something else like domain expertise,
12:17
I said that for many of them it is going to be about a re-application of some key operator skills
12:21
inside the enterprise right now. The core mindset shift from this enterprise operator perspective
12:27
is that execution used to be expensive. It is now cheap, it is now abundant,
12:33
anything that I think of I can build, and I can do it pretty darn quickly. That means selection
12:38
becomes the scarce resource, knowing what to execute is the key thing. Opportunity recognition,
12:44
strategic alignment, and outcome definitions become the core parts of the enterprise operator.
12:50
Let's expand the skillset a little bit. One area which I really don't think we should overlook
12:55
is domain expertise. If 2025 has shown anything, it's that the pejoratively named AI wrapper
13:01
startups actually understood something significant, which is that different industries and different
13:05
functions have particular attributes, which require modification from the core interface of the
13:11
chatbot. And even if you are using the same model, knowing what sort of processes AI is going to
13:16
intersect with, knowing what types of data sources it's going to need to have access to,
13:21
and building interfaces around that type of domain expertise can be extremely valuable.
13:26
One need only look at the valuation of a company like Harvey or OpenEvidence to understand that.
13:30
Domain expertise is, in other words, extremely valuable, even and especially in this world
13:36
of code AGI. Having knowledge of the way that work happens in a particular domain, be it a
13:42
function or an industry, understanding the problems and the constraints within that specific field,
13:47
which could be anything from governance to compliance regimes to data set challenges,
13:52
is going to be absolutely key. And even more key in some ways than before,
13:56
when you are having to think in systems terms, you need that wide ranging view that only domain
14:02
experts are going to have. Now, this actually brings up another challenge, which is one that could
14:07
get more apparent, especially in the medium term, which is that the more that current domain experts
14:12
use agents to do everything, the less time they spend distributing that domain expertise
14:17
to new people in the form of mentorship and junior roles, and the weaker the pipeline
14:21
for developing the next generation of domain experts. We can't take on every problem at once,
14:26
so we'll skip that one for now, but it is something that I think organizations will start to recognize.
14:31
Okay, so you've got domain expertise, but another key skill of the enterprise operator is problem
14:35
recognition, and problem recognition is not just understanding where there are challenges or
14:42
workflow frictions. It's being able to reinterpret those problems as solvable software problems.
14:48
This is, in and of itself, a major mindset shift. I started vibe coding at the beginning of last
14:53
year, as these tools all came out and we started calling it vibe coding. I dabbled with it since
14:57
the very beginning of ChatGPT, although it was a lot harder then, and yet it was only at the
15:01
very end of last year that I started finding myself actively asking when I came across any problem
15:06
or challenge, could I use software to solve this? That is going to be an entirely new muscle that
15:12
enterprise operators have to develop. And so problem recognition is actually a bunch of different
15:16
things at once. Enterprise operators also need to have AI possibility awareness. They need to
15:21
understand what is actually feasible to build with current agentic capabilities. This is an entire
15:27
discipline in and of itself and why we have companies that are exclusively focused on exactly this.
15:32
Related, of course, is problem-solution fit: being able to connect AI possibility awareness
15:36
with problem recognition. A really big skill for the enterprise operator is understanding unstated constraints.
15:43
Part of what makes applying AI to enterprises so challenging are these unstated constraints.
15:49
Think about institutional knowledge, compliance requirements, specific stakeholder dynamics.
15:54
These are things that aren't necessarily written down anywhere. Remember, people have been
15:58
exploring this new concept of the context graph, which is all about the why instead of the what.
16:02
The context graph is not about the CRM entry that shows that we gave a company a 20% discount,
16:07
but an explanation of why we gave it a 20% discount when the stated policy is to give no more than
16:12
a 10% discount. Unstated constraints are another missing set of information and missing set of
16:17
context that lives inside the enterprise operator. In parallel to the agent manager's output
16:23
verification, there is a version of that for enterprise operators as well, where these enterprise
16:27
operators need to be able to recognize whether AI output is actually correct within the context
16:31
of the particular domain. This is of course going to be extremely important if we want new processes
16:36
to replace the old, which by the way is yet one more key skill of the enterprise operator,
16:40
which is process redesign. One of the soapboxy things that you sometimes probably hear me talk
16:45
about on the show is why I think it's, and I'll generously call it, an intermediate
16:49
strategy to try to have AI agents watch what humans do and document that process so they can copy it.
16:55
It is quite clear I think that agents are going to find different and probably more efficient ways
16:59
to do things than their human counterparts, and a key skill of the enterprise operator is going
17:04
to be rethinking entire workflows from scratch and letting new workflows replace the old.
17:09
Now one thing that's on neither of these, but is maybe just an overarching mindset shift,
17:14
is moving from seeking perfection on the front side to iterating on the back side.
17:19
In other words, one of the implications of having the cost of execution come down is simply
17:23
that we can try more solutions, that puts a premium on iteration and adaptive learning as opposed
17:28
to preparation and planning. It's not a strict one-to-one shift; as you see, a lot of these skills
17:33
are about planning, but overall we're going to run processes and learn from our mistakes
17:37
much more quickly than we have in the past. We've talked a lot recently about the AI capability
17:42
overhang, the gap between what AI can do and what we're getting out of it. This gap is set to
17:47
absolutely explode in the code AGI era, and to bring adoption and capability closer together,
17:52
it is going to take not just agent management skills and not just enterprise operator skills,
17:56
but a combination of both. If you are an individual who can do both of these things,
18:01
you are, simply put, going to be the most in-demand individual in the world. But if you are thinking
18:06
about the system of your organization, it's about how you allow all of your people to operate more
18:11
in both of these ways. At some point we'll do a whole separate show about how I think organizations
18:15
should be thinking about upskilling in this particular era. But for now, hopefully this is
18:19
a bit of a blueprint for thinking about skills for the code AGI era in a different way.
18:24
That's going to do it for today's AI Daily Brief. Appreciate you listening or watching,
18:27
as always, and until next time, peace!