
Learning AI is no longer about tutorials, courses, or step-by-step guides. It’s about working with AI as a learning and building partner. In this AI Operators bonus episode, NLW breaks down the mindset shifts and practical tactics needed to learn faster by pairing directly with models—covering vision-first thinking, messy exploration, productive pushback, handoff documents, prompt chaining, and when to stop or reset a thread. The core idea is simple: high-agency learners can access frontier-level capabilities right now if they learn how to collaborate with AI effectively.
The AI Daily Brief helps you understand the most important news and discussions in AI. Subscribe to the podcast version of The AI Daily Brief wherever you listen: https://pod.link/1680633614
Interested in sponsoring the show? [email protected]
Today on this AI operators bonus episode of the AI Daily Brief,
we're talking about how to learn AI with AI.
The AI Daily Brief is a daily podcast in video
about the most important news and discussions in AI.
All right friends, we are back with another unplanned
AI operators bonus episode.
For those of you who are new around here,
these operator bonus episodes are not anywhere near our normal format.
They're not about the news, they're not about a discourse,
they're not about a big idea necessarily.
They are instead much more practical.
And specifically for people who are trying to figure out how to use AI.
I toyed with the idea of actually spinning out a separate AI operators podcast this year
and decided at least for now to drop these bonus episodes in the feed
sometimes when it made sense.
And so I'm always interested in hearing your feedback on whether these things are valuable,
whether you want more of them, whether you think they should be on their own feed or anything else.
And what we're trying to do here is talk about how to learn AI,
specifically we're talking about how to learn AI with AI.
But the genesis for this is that I think the way that learning is going to happen
has fundamentally shifted.
Instead of a paradigm of instructor-led tutorials, explainer videos, step-by-step guides,
basically that entire former paradigm of education and particularly online education,
instead now everything is going to be effectively the equivalent of
pair learning with an AI build partner.
AI in other words is going to be your companion for using AI to learn.
And it turns out there's a lot to figure out about how to do that well.
Now I want to give a little bit of specific context and why this is coming up right now.
First and most important is that just after OpenAI announced 5.3 Codex,
President Greg Brockman talked about how the company was endeavoring to work in a
fundamentally different way. He tweeted, by March 31st, we're aiming that for any technical task,
the tool of first resort for humans is interacting with an agent rather than using an editor
or terminal. In other words, agent first work by March 31st.
Well you might have noticed that has kind of a ring to it.
And something I've been thinking about a lot recently anyways is how to give people better
resources for self-directed learning around what I see as this shifted paradigm of AI.
Already we weren't doing such a good job of helping people learn how to use AI,
and that was before this CodeAGI moment that we've experienced over the last couple of months.
Now everything is shifting once again, and while the ceiling of what you can achieve
has heightened dramatically, so too has the difficulty of using the tools to get there.
Now I had already wanted to expand what we built for the AI DB New Year's resolution program
into a broader free self-directed learning platform, but this just sped up the timeline.
Okay, so we've got this idea of agent first work by March 31st,
but the other catalyst for this actually came from a discussion on a post by Tribe CEO Jaclyn
Rice Nelson. Now, to be clear, Jaclyn is great, Tribe does awesome work, and the broader point
of her post, which is that the UI and the products around agents need to improve dramatically for
them to be widely adopted, especially in a work or enterprise setting, I absolutely agree with.
Her post is about how a bunch of her non-technical team members, and she herself, had used
Claude Cowork to do things that were impossible for them just a few months ago.
But when they actually dug in, it was quite difficult, and in fact many of the team had actually
paired with engineers for hours to get the output that they eventually got.
The line that got me was this one: What Cowork really shows us is what many of us already knew.
Claude Code is incredible, we're seeing a glimpse of the future where these capabilities
will be available for everyone, but the future isn't here quite yet.
The part that got me to bristle was this part, the capabilities will be available for everyone.
My contention is that for anyone who is high agency enough to take the time to work through
these challenges, the capabilities are available right now. What's more, the people who take the time
to take advantage of these capabilities being available right now, difficult though they may be,
are going to be the people who shape the next generation of work in the economy.
And in my mind, the foundational mindset shift for being able to take advantage of those new
capabilities is to stop looking for tutorials or videos or explainers and fully embrace the idea
of AI itself as your learning and build partner. Now, I've been living fully in this reality
for a couple of months now. I have dozens of live projects on Lovable, a bunch of things that I'm
building in Claude Code, seven agents that are actively interacting with me via OpenClaw
that I built over the past week, and I don't know how to code. I am completely and utterly
non-technical. What I have is Claude to help me work through things step-by-step, figure them out,
and persevere even through challenges that might otherwise have stopped me.
But I realized that how to work with an AI learning partner like that is not self-evident.
And so as I was working on these projects today, I actually asked Claude to extract some of the
lessons that we had figured out as I had worked with it over the past couple of months.
And the rest of this episode is about those tips. So we, the Royal We, me and Claude,
have broken it into two categories. The first is mindset shifts, and the second is specific tactics.
I'm going to go through them kind of fast, but hopefully this provides a way to think about how to
dive in to using Claude or ChatGPT or Gemini or whatever your preferred LLM is as your learning
and build partner. Okay, so mindset tips first. Number one, you've got to start with the vision
of the task. The watchword for AI in 2026 is of course context. And when it comes to building like
this, the context that the AI needs is the big idea of what you're trying to achieve. That means
instead of saying, help me build a learning platform to help people launch their first agents,
you start with your goals and your perception of what does or doesn't exist out there
and what the challenges are. It might feel slow, but I guarantee it's going to save time on the
other end and get your AI partner way closer to what you're trying to actually achieve than just
trying to describe the outcome alone. Now in some cases you might even not know exactly what you're
trying to achieve or not fully, which brings us to tip two, which is thinking out loud even when
it's messy. One of the things that I realized earlier today is that I was actually building two
things at once. One was a set of self directed skills projects that people could combine in whatever
way that made sense. The second was a library of agent starter prompts that people could just
download. I think the exact line to Claude was okay not to be insane, but am I building two things
at once? Your AI partner has the capability to handle that sort of messiness. It doesn't need
perfectly formed thoughts to be useful. In fact, much of its utility is in helping you think through
half-formed thoughts. Number three, and this one can be hard to get used to for some: you've got
to push back hard and often. AI doesn't have feelings in the way that your employees or colleagues do.
To the extent that it wants anything, it wants to help you achieve whatever it is that you're setting
out to do. And one of the things that anyone who's used AI knows is that it says everything
pretty confidently, which means you have to push back. Now an inverse of this, which is a little
bit better with current models, but which is still a little bit of a challenge, is you also want
AI to push back on you. And sometimes that involves explicitly saying, I'm not sure about this,
I want you to critique it from first principles or something like that. The point is that the
conversation can't be the AI just accepting your ideas as good or you accepting the AI's ideas
as good. You've got to push back on each other to make progress. Number four, and honestly, this
probably could have gone with think out loud as sort of a subset of messy thinking out loud:
dump first, organize later. Once again, you don't need to have everything perfectly structured.
And in fact, a lot of what AI is good at is taking your messy, disorganized and unstructured
thoughts and structuring them in ways that can help you make progress. Number five, AI partner as
mirror. Sometimes you need the AI to generate a net new idea. A lot of times you need to speak
an idea to it and have it play it back for you to make sure it makes sense. And the lesson here
is that you know more than you think you do. You don't have to rely on the AI for all the new
ideas. A lot of its job is to help you work through your own. An example from our building earlier
today, we were trying to talk through what the categories would be for the agents on the agent
bench portal where you can download this starter template to start working through building your
own agents. And after the AI gave me a set of categories that it thought made sense, I fed it
back the seven that I had built with OpenClaw this week to see how they would fit together.
It ended up revealing a couple of gaps in the framework that I didn't consciously catch,
but which something else that I had built actually revealed. Number six, I would summarize as
get existential. Every once in a while, especially the deeper into the weeds you go, it's really
valuable to zoom all the way out and reground yourself in what you're actually trying to build.
I can't even tell you how many different versions of AIDB training have existed in my head.
And even as I've been designing this project, it's shifted. It is very, very easy to get lost in
the sauce and knee deep in the weeds. And to the extent that you can pull yourself out every once
in a while, it's going to help you and your AI build partner reground yourself in what you're
actually trying to accomplish. Number seven, this is another one that I think is
sneakily difficult for people because it's not how we're used to thinking about things.
I think for a lot of people, the first way they used AI was that they drafted stuff and then had the
AI comment. I think increasingly we're shifting in the other direction where the flow that makes
the most sense is to let the AI draft and then to react. Take advantage of that near infinite
output capacity to go wide first. Today, as I was thinking through what skills projects I would
want to have to start off, I asked the AI to write a slew of initial titles based on our categories,
and it came back with 110 in about 30 seconds. I was able to very quickly spot that there were
certain patterns and trends inside those that weren't really going to work and we went on from
there. A last really important mindset shift is to know when to stop a thread and move on.
The AI will walk with you down any rabbit hole, as far as you want to go, and pretty much the only
time that you hear from an AI something like, hey, do you think we should move on, is when it's coming
to the end of its context window, and that's its equivalent of telling you that it's tired.
In general, it will happily go as deep as you want on just about anything, which means it's your
job to manage the session and decide what matters now versus what matters later. You also are allowed
to temporarily diverge and then come back. I had a tangent that I knew was a tangent, that I just
wanted to think through in the form of a single question, and after I got the answer to that
question, I said, let's willfully ignore that for now, we've got enough things to think through.
Remember, at the end of the day, you are the project manager of the conversation. Your AI
partner is going to follow you wherever you lead. Now let's move from the mindset shifts and
thinking about how to interact with your AI learning slash building partner and shift instead
to some tactics. The first, and maybe the single most useful thing in anything that I'll say today,
is handoff documents. AI conversations have limits. Long sessions accumulate a lot of shared
understanding that exists only in the current conversation. If you don't capture it, you start
from zero next time. Yes, all of the platforms have some version of memory, but it's very nascent
and very unreliable. This is the type of thing that in three months, when someone's listening
to this, could be entirely irrelevant, but at least in this moment you have to explicitly capture
the context before you move to a new conversation.
What you will find is that when you start to get into a complex project, it will not take you as
long as you think to get to the end of the context window. Before you do, or especially as it's
starting to happen, when you start to see those telltale signs (the AI forgets a detail, or starts
to get lazier, or it just feels like you've been talking to it for a long time and the little
scroll bar on the side has gotten tiny because there's so much there), ask it to write a handoff
document that captures the key themes, the decisions, the open questions, and the current state
of the project, whatever type of project it is. You have to treat every working session like
a shift handoff: you document what was decided, and in many cases the process that got you there,
because that's really important context too, as well as what's still open and what comes next.
To additionally support that persistent context,
use whatever version of a project setup your LLM offers. So for example, you saw here that I'm
using Claude Projects. I have projects for major buckets of work. Each of them has its own set of
conversations as well as its own set of files, and you can see in this OpenClaw agent project
that a lot of the files are handoff plans, setup plans, architectural plans: basically the additional
context that future instances of that LLM are going to need to continue to help me without
losing too much in translation. All of the big LLMs have some version of this at this point,
so whatever you're using, put it into whatever that version of a project is.
This next one is kind of obvious, but for the sake of completeness, don't forget that all these
models can look at stuff in the form of screenshots. And while screenshots are obviously useful
if you're working on anything visual, like a design or a layout, you can also screenshot an error
message or a snippet of code. In short, remember that your AI learn slash build partner can read
stuff in an image as easily as it can when you copy-paste it in. Especially as I've been setting
up OpenClaw, my conversations with my Claude partner are basically nothing but screenshots of the
terminal, where I'm effectively saying, what the heck does this even mean? Related to that, get out
of the habit of paraphrasing. Just copy and paste stuff. When you're talking about error messages,
parts of a UI you don't like, a code snippet, a paragraph from a document, do not paraphrase it,
don't summarize it, especially if it's a technical problem. Your AI partner can work with exact
content far better than it can with your memory of it. As insane as that sounds, copy-paste is
a core skill of learning to learn with AI. Next up, one of the things that you will find as you dig in, especially
as you get more complex, is that you're going to be bouncing around between a lot of different
AIs. You've got your AI build slash learn partner who's helping you coordinate the whole thing.
You might have a different LLM that's helping you with some other parts: for example, if you're
using Claude, it doesn't have image generation, so you might be using Gemini's Nano Banana Pro.
And then of course, if you're using a build tool like Claude Code or Lovable or Replit or anything
else like that, you're going to be moving context and content around between a lot of different AIs.
Use your AI partner to write the prompts for your other AI partners. Get in the habit of explaining
to your AI partner what you want one of the other AIs to do, and have it write the spec or the
prompt or whatever it is that's needed to communicate that. Not only is it going to be more precise,
it's going to be a lot faster to have it do the writing. Now, the one caveat proviso addendum to this
is that as you do a lot of this, it can get very easy to just assume that what your AI learn partner
has written is correct and fully representative of what you're trying to communicate. Take the time
to click the file that it wrote, scroll through it, and make sure that it accurately says what you
want it to. You would not believe the number of times, for example, this week that it tried to switch
the models that I was using on me, when initially at least I just wanted everything in Opus.
Now, it probably knew better, and I probably could have been saving money using Sonnet 4.5, but
that's neither here nor there. Overall, the point is: use your AI partner to write the prompts for
your other AI partners. One more tactic that's still in the framework of the importance of context
is to avoid the instinct to start over. Sometimes when something isn't working, it feels like the
best idea would be to start a new conversation, and certainly in some cases that's right. But when
you do that, remember that you're throwing away a lot of accumulated context, not just in terms of
decisions, but things you've thought through and rejected, and ways of looking at the problem that
haven't worked out. That can be really valuable, so you really have to have a very high burden to just
start from scratch. Lastly, and I cannot recommend this enough: while there are some very specific times
that you want hyper precision and you might want to type something out, you will move so much faster
if you are talking, literally with your words, to the AI instead of just typing. Now, unfortunately,
you probably know that the native speech-to-text on your devices isn't very good. Luckily for all of us,
there are an increasing number of tools that are much better. I use Wispr Flow, and while it's not
perfect, I literally would be moving about a third as fast if I didn't have it. The single biggest
speed pickup that I can offer you, probably, is making the switch from typing to talking. So that's
the idea. In this new age, the way that we learned before is just out. There really isn't a
better path than just diving in, and while that may have always been the case, that practice beats
theory, the difference now is that you have this unbelievably powerful partner in a way we never
had before. All of the things that might have made you feel nervous about just starting, the volume
on those things is turned way, way down, because you can simply have AI as your partner for that
learning process. So that's going to do it for this bonus builders episode. Hopefully this was
useful, and if it isn't yet, maybe flag it and come back when it is. Appreciate you listening
or watching, as always, and until next time, peace.
