
The fastest way to get burned by AI is to treat it like a magic replacement for your brain. We bring Natalie Monbiot back to pressure-test a better approach: human agency first, automation second, and judgment always on the human side when the stakes are real.
We talk about what’s changed in AI over the past year, why AI agents feel so emancipating when they remove tedious work, and why trust is becoming a core differentiator between platforms. From job displacement fears to “vibe coding” and the shrinking need for white-collar mechanics, we zoom out on the future of work and then zoom back in to the only question that matters: once the machine can do more, what should we intentionally keep for ourselves?
A big chunk of our conversation is about judgment, meaning, and responsibility. AI can reason and recommend, but it doesn’t live with the consequences. That gap creates an “illusion of certainty” that makes people outsource decisions they later regret. We also get into AI parrots, work slop, and why authenticity in writing collapses when you don’t own the thesis. Then we explore digital twins inside companies and what changes when communication becomes low-risk and always available.
We close with "Artist and the Machine" and what AI is unlocking for artists, filmmakers, and writers, including faster production, new mediums, and surprising shifts in ownership. If you care about AI productivity, AI ethics, human-AI collaboration, and the practical future of creative work, this one is for you. Subscribe, share this with a friend who’s anxious about AI, and leave a review with the one task you’re ready to offload next.
John R. Boyd's Conceptual Spiral was originally titled No Way Out. In his own words:
“There is no way out unless we can eliminate the features just cited. Since we don’t know how to do this, we must continue the whirl of reorientation…”
A promotional message for Ember Health: safe and effective IV ketamine care for individuals seeking relief from depression. It describes Ember Health's evidence-based, partner-oriented, and patient-centered care model, which boasts an 84% treatment success rate, with 44% of patients reaching depression remission. It also mentions their extensive experience with over 40,000 infusions and treatment of more than 2,500 patients, including veterans, first responders, and individuals with anxiety and PTSD.
Stay connected with No Way Out and The Whirl Of ReOrientation
X: @NoWayOutcast · @PonchAGLX · @NoWayOutMoose
Substack: The Whirl Of ReOrientation - www.thewhirl.substack.com
What am I getting at? The underlying message is very simple: there is no way out
unless we can eliminate the features just cited. Well,
I don't know. We don't know how to do this.
There is no way out.
I love having people on our show that come from all over the world. Certainly,
one of our guests today returns for her second go-around with us.
She's on the all-accent team.
If I recall, you're from Nebraska?
Yeah, Nebraska. No. Natalie, welcome back.
Thanks very much. Great to be here.
So we had you on just over a year ago. I believe it was January of '25,
for episode number 99. We're up to approaching 160, I believe.
In the world of AI, which is your world, and why we had you on the first time to talk about it,
just in that 13 months since we last spoke, how much has happened? Actually, so much has happened
since we booked this recording. Why don't you give us a quick overview of what you think the
state of things is since we last talked, because we had a good chat last time.
I know. I mean, I'm going to sort of just try and define my lane, I guess, because there's a lot going
on when it comes to AI. And what continues as a nice constant and throughline through all my work,
really, is that I focus on the relationship between human beings and AI on a daily basis
in their work, in ways that enable humans to hold onto their agency, right? To hold
onto having direction, control, freedom in their lives, and how people can collaborate
with AI to increase that agency rather than give it away. So I'd say that's a constant throughline,
and I am particularly excited at the moment about really kind of researching and digging into
this space even further, as I and you and all of us experience the advances in the capabilities
themselves. And as that line of what AI can do for us keeps moving. It can do more and more and
more. What should we let it do? And what does that leave for us to do? And what do we need to be
very conscious of claiming and holding on to in order for this relationship to work?
I mean, that's the isn't that the conversation that many are having because on one hand,
everybody thinks that they're getting replaced by AI. But then I think when we speak with you,
and you know, when Ponch and I talk about this, we're talking about, hey, how does AI serve us and
help us and help us think better and help us orient to reality better? Well, first of all,
I think the fact that we can offload a lot of really tedious tasks to AI is liberating,
right? So a lot of, I mean, something that I've been playing around with a lot recently is
Claude Cowork, and the fact that I can allow an AI agent to take control of my machine and do
the tedious tasks of, let's say, API integrations or, like, Zapier, connecting zaps from this to
that, and everything to help me in the back end of the distribution of my content. The fact that
I don't need to do that myself and spend my day doing that and feeling really frustrated about it
is incredible. That's incredibly emancipating. So I'm all in favor of that. Of course, there are the trust
questions, like, should you trust that agent to do these things for you and whatever, but as
long as that's taken care of... And actually, I think it's really interesting as well,
sort of a tangent, maybe we'll get to this after, about building trust. I think Anthropic has done an
incredible job of branding and coming out as, like, the trustworthy platform. I don't
think twice about using Claude Cowork. Even if technically maybe I should question trusting Claude Cowork
to take over my machine, I'm just like, yes, yes, yes, permissions, yes, yes, yes, because I trust
the brand. The brand has done a really good job, in my opinion, recently of building that trust with
consumers to enable us to feel comfortable about actually outsourcing such things, like use of your
computer, to an AI. But I would not necessarily trust another AI incumbent to do that.
So I think it's really smart, like we need to be able to trust in order to do it. Everyone's got
different thresholds of trust and it might be that we trust some more than others. So I think
part of the battle now amongst AI models actually comes down to how much trust they can generate
and I find that really fascinating. I struggle with giving Claude access to files. Before
I do that, I'll back it up. I will test it out. I will do those things. Having seen a lot of
data breaches and a lot of issues over the last several years, there's a likely chance that our phones are
already hacked. Somebody has access. Apps, Siri, can listen to this. Things can do that.
There are unintended consequences of doing this. So I'm hesitant to do it. Just based on my
orientation with the threats that have been around me for my lifetime, right? It doesn't mean I won't
do it. Then something else popped up in the last 48 hours. There's a heat map that came up and
I'm going to share it with you and our audience at the moment. I don't know if you've seen this in
Moose saw it, but I can't remember who wrote this or who created this and I know it was taken down.
This is on the way back machine right now, but it's given a heat map of where
likely jobs are going to be and not be in the future, right? Have you seen this Natalie? No, I
haven't. All right. So I can't think of his name. I have it here. I'll pull it up. Yeah, I was
somebody who was working with Elon. Somebody worked with I can't remember which company. I don't
want to get this wrong, but it's just a nice heat map that pulls from the Bureau of Labor Statistics
and looks at what jobs are likely to go away. So if you're in the green, if you're a cook,
if you're a hard hand laborer, physical jobs, you're pretty safe. If you're in the legal field,
if you have the computer in front of you at work or if you work remotely, kind of like what we're
doing, you're your target, right? AI might be taking over things from you. And that doesn't mean
it's right or wrong. It's just how things are evolving. It's an interesting heat map that came
up, and they took it down within 24 hours of putting it up. And Moose, you have the name on that?
I'm looking for it right now. Okay. I couldn't pronounce the guy's name, but he was not
the OpenAI CEO. He'd come from OpenAI? No, it wasn't OpenAI. It was, um, anyway, also
in the city. Yeah, I'll find it while we're chatting here. Yeah, that's cool. There's an article
recently in Forbes that I sent you, Ponch, that someone was writing about how lawyers thought
they were safe and all of a sudden, Claude is making them realize that no, we're not. We've got,
we've got some challenges that we're not, we're not thinking about. But the acceleration of AI is
useful. And in our house this morning, my wife, who works with a data company, announced...
and we talked about what our game plan is, you know, once our jobs are, not eliminated, but when
we have to really compete for them. I think we're lucky. We have interesting theses, we have
interesting content, access, things like that. I think we're safe. But for those that were
coming from the consulting world, the agile world, where software developers are being reduced
dramatically. We've heard from product owners that you can sit down with your customer now and you
can vibe code things into things, right? That vibe coding thing that's happening. So the world's
changing, in my opinion, faster than I thought. And what we know from being on or you're having
different guests on the podcast, it's about to get crazier. Natalie, any thoughts on that?
Yeah, I think things are getting pretty crazy and getting crazier. And I think that there's
a couple of different ways of looking at this. I think when you look at it, like from taking a
big step back and kind of like holistically, it actually feels pretty frightening, I think in
some ways. And then when you look at it, this is my own experience, right? It's like it's hard not
to worry when you look at it from kind of a global perspective and kind of theorize about it.
I do feel that the optimism can come about when you're actually working with AI yourself and
unlocking the benefits to you. And I think that when you see piece by piece how benefits can be
unlocked to you on a personal basis, you can start to imagine what the path to that future might
look like for you. And to what extent you need to learn to work with the tools as they're emerging
and as their capabilities are increasing in order to be able to even know and be inside the game
to know how to navigate working with AI in a way that actually does offer fresh opportunities.
I think that the job landscape is going to look extremely different. And I think there's,
there is evidence of pain already. I think that there is going to be a lot of pain.
I also think that there is going to be a lot of opportunity, but to be able to seize that opportunity,
you really need to be in it and you need to be working with the tools in real time and trying to
be aware of what you're bringing to the table. Like the tools are doing their piece. So what are
you bringing now that you've unlocked all this time because I don't need to spend all day, you know,
on my zaps. I'm like, okay, so what am I going to do? There's a certain comfort to just
like the mechanics of work. And I think a lot of white-collar jobs are actually about the mechanics
of work. And that actually exposes a fault line or an uncomfortable truth with maybe how
corporate life or corporate structures have evolved. I feel like it started with... do you think
it started with COVID? Because I remember when COVID happened and I was working in
asset management, it really, you know, like when the tide goes out, you see who's swimming with
a suit on and who's naked. Sort of, you know... AI has amplified this, but COVID, I think, showed a
lot of the redundant jobs that don't need to exist anymore. I also think that blockchain did too.
Like blockchain, not Bitcoin, but like blockchain is a trust system showed a lot of things in asset
management and processing and others that there were these massive legacy apparatus that could
just go away. Then COVID now AI, it's showing more and more and more. You know, you're familiar with
our sub stack as we are with yours. I mean, I've not paid thousands for a graphic artist. Why would
I do that? I can just go on OpenAI or ChatGPT or Grok or Gemini. I can make my own pictures the
way I want them. And it could be done in like a minute or two. Why would I pay thousands for a
graphic artist? That kind of thing. I mean, but you're seeing more and more of that and
um, we'll be right back. This episode of No Way Out is brought to you by Ember Health. Ember
Health provides safe, effective IV ketamine care for people seeking relief from depression.
The practice is defining the gold standard of IV ketamine care for depression through an
evidence-based, partner-oriented, and patient-centered care model. Ember deeply values collaboration
and coordinates with patients' mental health teams to optimize their care. Ember is proud to have
an 84% treatment success rate among its patients, with 44% reaching depression remission.
Since its founding in 2018, Ember has done over 40,000 infusions and treated over 2,500
patients. They treat a range of patients with depression, including veterans like us,
first responders, adolescents, older adults, and people with co-occurring conditions like anxiety,
addiction, and PTSD. Check out EmberHealth.co. That's EmberHealth.co to learn more. Link in the show notes.
Say that maybe your business couldn't have emerged in the way that it has without the advancement
of these tools that are available to you, right? And so that's an example of what is new that wasn't
necessarily possible before with like a two-person unit or, you know, maybe there's more other people
on the background or people that like chime in or, you know, you bring in as needed, but a lot more
can be possible with fewer people. And that's not always a bad thing, you know? So I think there is
going to be a lot of discomfort, a lot of pain because of the way things have become or the way
that, you know, company structures and stuff like exist today. But I also think that there are
new ideas that will come to light, and just a lot of innovation that we can't even anticipate
yet as a result. Growing up in Pittsburgh, you know, at one point, there were thousands and
thousands of steel workers, right? And then all of a sudden there weren't. For various reasons,
with, you know, paper costs and that sort of thing. But Pittsburgh is still to
this day producing a ton of steel with less than half a percent of what they, I mean, that's just
my number, but like less than half a percent or one percent of what they had as a labor force
because of the innovations, things like continuous casting or whatever, and robotics,
that eliminated jobs. Anyways, I feel like what we're seeing with AI is not new. And that's
something that we bump into quite a bit with some of our collaborators. They say, yes, the medium is
the message as Marshall McLuhan said, however, AI is different. And I think that McLuhan would say
that, no, it's not different. It's like any other technology that humanity has had to adjust to
because it's having a direct effect on our human capacity. Yeah. Well, there's, I mean, I think
that we, as far as our human capacity and our capabilities go, like to think that these things don't
affect us. I would say that basically our human capacity is being challenged right now.
Right. So it's like, I think we've got quite comfortable going through the motions of life
and work and maybe not having to think too hard or be on our toes too much. And there's just
been a lot of kind of like expectation about the comfort that a job will maybe provide for you
without necessarily needing to exercise what, you know, push yourself as a human being. And when I
say, push yourself as a human being, it's kind of like, you know, being kind of more entrepreneurial
minded, like feel the stakes and try to be creative and like need to be creative and think about,
well, what is it that's needed? What should I do? What is this new situation present? How do,
you know, what does it present? If AI can do my job or are a lot of my job, what should I be focused
on? Like, how can I create value when the tools can handle so much of it? And for me, I really feel
like our role as human beings is to create meaning, to ask these questions, and to figure this out
in our own world. So in my world, it's, you know, focusing on this line of AI-human collaboration
and kind of, you know, the discomfort of that and trying to figure out ways that we can collaborate
in healthy ways with AI that constantly put our humanity in question. Like, what is it to be human,
right? So like the, the post that I just published is around judgment and why judgment should
remain, broadly speaking, you know, the domain of humans, while execution belongs to the
machine. But the post is actually about the fact that that line is constantly moving because
clearly AI can judge and clearly we want it to handle a lot of judgment for us because it's
exhausting making the same judgment calls over and over again and having to address, you know,
potentially hundreds or thousands of similar requests like we want it to do that. We also want
it to handle tedious work, judge, you know, make judgments on tedious work for us. But increasingly,
as AI becomes more and more capable, its ability to actually judge for you in the way that you
would judge yourself increases. And again, that's a good thing if you intentionally want the AI
to do that for you, and you want it to do that for you, because it makes it a more effective
collaborator that understands you better. Yet there is a distinction between judgment kind of in
the abstract, right? So just to be able to like reason through a problem and make a decision based
on that reasoning, versus the type of judgment that a human being is capable of, which is founded on
understanding and meaning, like what are the implications of this decision, right? So taking it all
in, that is something that is distinctly human as embodied creatures that have grown up and
participate in the world. So that is the preserve of humans is basically any type of judgment
that requires context, meaning, understanding, which is contingent on the stakes as well. Do the
implications of your judgment actually affect you or other people? Like you can't rely on a machine
that does not feel and experience those stakes to do that for you. There's a really interesting
Anthropic study that goes into this. Basically, making difficult decisions, even though
humans are really good at it, we have instincts, we have something that the cognitive scientist
John Vervaeke calls relevance realization. Like we have this ability to zero in and just know
the answer or know what's good and that kind of thing. But it's also uncomfortable making decisions
you know by just trusting your instincts. And this study actually shows, this anthropic study,
shows what happens when human beings outsource decision making to an AI when clearly it's something
that the human being should have decided for themselves. They can't help but do what the AI
tells them because it gives them this illusion of certainty. They go out and do it in the world and
like execute on these like real world relationships. And then the result is regrettable. They express
regret, right? So it's just proof that you can't outsource that kind of decision making to the
machine as tempting as it is. You say meaning can't be encoded. Yeah, exactly. That's what I believe.
I believe meaning cannot be encoded. Judgment which has a pattern to it, this routine
judgment, can be taught to the machine. But when it comes down to
really understanding and making decisions based on what it will mean,
what the implications are, a machine just can't do that. That is for you, especially as long as
the impact of the decision falls on you and other people, right? So there's something about
the stakes as well. And that's why... I read the piece, and I'm like, who are you really writing this
to? You know, when you wrote this, who are you really trying to connect with in this response?
I think it's people. Well, first of all, I write it partly for myself to make sense of things for
myself and then others who are trying to make sense of how to collaborate with AI. So these are people
who are not, you know, there's obviously a camp of people who just like pro AI as fast as possible,
like, you know, like, like, let's, you know, let's reach a singularity as quickly as possible
and like, all that kind of stuff, right? So it's not for that because they're just not interested in that.
Did you see Agents of Chaos? Have you seen that paper, Ponch? We were talking about that.
Yes, I actually have seen that. Yes. It's basically, as I read it, basically, as all these people
had outsourced all of their decision-making, including their orientation. So they, they were
trying to encode meaning. They had given up on what they're supposed to be doing, when
we talk about, you know, they'd given up on the whirl of reorientation and outsourced the entire
thing. But ultimately that got them misoriented in the wrong direction. And when they had
given everything over to AI, everything just falls apart, which is, you know... Moose, something to add
to this? I got to step out here in a second. Well, one thing that we're noticing on the podcast,
and we're reaching out to guests and actually bringing them on the show, is we look at how well people
write about the topics they're writing about, and clearly we want those folks on the show. What we're finding
and discovering is a lot of times they don't know what they wrote because AI wrote it for them,
right? We've had guests on the show that were asking them about something and they wrote in the
last 24 hours and they can't remember it. They're like, what are we talking about? And what we're finding
is, I'll call them an AI parrot, right? They're just parroting what their AI says, projecting it out
there as content. But when you put them in a room like this and you have a conversation,
they have no idea what they're talking about. They're useless. And I think that's going to happen
more at scale. Is this something that we're noticing? That happens with your own guests who are
people that presumably have a strong point of view and all of that. And I think that also has
strong implications for where other humans get their meaning, right? So you're going to trust
writing less, or you'll have a knack for seeing writing that was written by a human for humans.
I've actually written about this before specifically like that distance that we feel and there's
studies like from MIT and stuff that show when you let an AI write for you or write the first draft.
If you weren't involved in it, like at least at the very beginning, you have no connection to that
to that work. And then I guess the question is, well, what was that piece of writing for, right?
If it was just content marketing and it's like riffing off a central idea, which is maybe true
and meaningful. And it's just riffing. It's like, okay, so why not let AI do that? It's like,
you know, different iterations are the same. You opened it. You read it. Maybe you didn't read the
previous, the previous post because you missed it or whatever, you know, all the sort of the rules
of opportunities to see and all of that. But yeah, I mean, I strive to write in a way, even though
I do use AI very much. So I strive to make sure, well, like I want it to connect with others and
I want there to be a truth to it. So that's my medium. But it's interesting, because with video, live video,
it's quite difficult to, you know, own an idea if you don't actually have it embedded in your
system, right? Yeah. In a video conversation, like it's easier to call people out, I guess, or
being in a live environment. That's why live events are, you know, so important.
Well, we're dancing around McLuhan because, you know, he always thought that content was less
important than the medium, you know, the actual technology and the environment was more important.
And I kind of tie in too. I feel like AI outputs aren't necessarily bad, assuming that the thesis
was good, right? Assuming that the thesis was coherent and assuming that, you know, you could
sit there and brain dump and things that you know about with your own unique thesis and if the
output reflects that, you'd have to know that because you'd have to have the command of the
information. But does it necessarily mean that the outputs were bad? Yeah. I mean, if the thesis is
grounded and means something and adds something and makes people feel something and it helps them to act
in ways that they find valuable and therefore it keeps them coming back to the thesis.
And in whatever form, maybe you're doing them a service by articulating that thesis in more ways,
they want more of that content, right? They want more ways to look at it. They want other ways
that it can be said. They want it in different media. Like, that's a good use of AI, I guess.
But, you know, like, it depends on the strength of that original thesis. Yeah. Yeah, I think
that's really what it always boils down to anymore. Yeah. I mean, if you think about it, like, you
could take a franchise, like a, you know, a book, you know, Harry Potter, right? That turned
it to a film, it turned into like all kinds of different things. I mean, that's because the
central story is so powerful. You know, someone didn't hand make every, they did probably, every
aspect of that film. But, you know, you can have all of these different iterations of things. Once
you have that incredibly powerful story or thesis and, you know, if it resonates, it resonates
and if people want more of that in different formats, you know, yeah. I mean, you wrote in March
about, like, dividing the labor as you were saying about offloading tedious tasks. And, you know,
these are things that Buckminster Fuller and Isaac Asimov, they predicted this, the automation
of education that they predicted this would happen. And that would be a benefit to humans
and their thinking and their learning that you would offload the, you know, having to get in the
car, drive to the library and go to the card catalog and all that stuff. Not that they didn't have
merit in the old times when that was the prime technology at one point. But now you're able to
offload that stuff and focus on what it is that you're actually learning. And you mentioned,
you know, like, Claude being a thought partner. I mean, I think that's a pretty valid use of AI.
Feels like it. Yeah. I don't have to bother my, you know, there are certain humans whose value I
definitely would want on tap. But in the absence of that, Claude can do a decent job. It doesn't take
the place of a human reviewer. Ultimately, I don't think so. But it depends on the stakes. We
would love to have a conversation with McLuhan. But I can't. But I can, I can
socratically ask Claude questions about stuff that I've read about McLuhan, to, I think,
learn it better, learn it with more effectiveness actually than, you know... It's not
that I still don't read books and take notes because we still, we still do that and we still go
through transcripts. And I just got back from a trip and about to go on another one. I still take
books with me. Yeah. But that, even though I understand, there you go, even though I understand
it's artificial intelligence, there is something to be said about having to have a dialogue
about something as dense and obscure as McLuhan. It's actually fantastic.
But that's a great example of a thesis, you know, a body of work that exists that can live on
further. Something that I'm noticing as I interact with Claude, and that
is, and I'm seeing a lot of posts on this too and a lot of threads on it. And that is the way you
interact with it is very similar to the way you would interact with a teammate, the way you ask
questions, the way you would use different types of acronyms, you know, a situation, a background,
and a recommendation, that type of thing. What that means to me is that the human-agent teaming
that is emerging is no different than what we understand from team science. And so we can start
with that team science as a basis to help people understand this is how you work together as a team.
And by the way, more than likely, this is how you can interact with an agent, not just a tool,
but, you know, something down the road that might have some sense and who knows might be conscious
in the future. But this is how it works. So it's very valuable for me to see that the tools and
techniques, the methods we've been teaching for the last 13, 14 years in teaming are more important
today than they have ever been. I don't know if you're seeing that too, Natalie. Yeah,
I mean, I agree. I mean, I think that that's what, again, going back and reading, like, Asimov
and all those things. I mean, robots were supposed to be our partners. Like those are supposed to be,
like, any technology is supposed to be a partnership that we can, as McLuhan said, that organically
becomes an extension of myself. You know, I'm a Marine. A rifle is organic. A Marine animates
a rifle. That rifle is an extension of me, you know, so it doesn't replace me. It enhances me,
right? I mean, don't you think too, with John Boyd, with people,
ideas, things, all in that order, you know, as long as they're augmenting human
faculties and human learning and advancing us as humans, there's nothing wrong
with that, right? Yeah. Well, you know, while we're on the historical references.
Yeah. Karl Marx worried that with industrialization, that basically the worker would be more and
more distant from the product, right? Like what they were actually creating at the end of the day,
right? At the end of the assembly line, like they have their task and the more industrialization,
there is, the more disconnected you are, and you could end up being an appendage to
the machine, which is really... those were his words. It's just really interesting language,
which I feel is so... Expand on that. I mean, economists don't get it, but expand it, put that in terms
of our audience: being an appendage to the machines. So let's say, you know, a factory line, I don't know,
manufacturing a car, and you're on the line. I'm really not a specialist in this area, so forgive me,
but like, you know, your responsibility is, I don't know, like the rearview mirror, right?
And that's all you do. And you're doing the rearview mirror, like, every single time,
without seeing what you're building. Or a cog in a wheel, maybe that's more appropriate, but the point is,
you aren't connected to the vehicle, the fleet of vehicles, that's actually being produced and that
you're contributing to because you're just focused on this one tiny little thing. So you're
in appendage to the machine as in like, you're serving, you're just like a little add-on to this
thing that is being built. You are not central in any way to what is being produced. And I think
that's really relevant now because I think that is exactly the risk with humans in AI is we can
become an appendage to the machine in a lot of different ways. Like one way is that we're just
training data that feeds the machine. Like training our replacements?
Right, or just our knowledge. We're educating the machine; we're all future batteries. I wrote something about this that maybe I need to finally publish. I've edited it so many times and it's been sitting there for about a year, I just haven't published it yet. But I believe that's what content creators are doing on OnlyFans and whatever: they're training their replacements. Because right now, the human angle is what's relevant to the people consuming it. But at some point, if a machine can do exactly the same thing, always show up on time, that kind of thing... I mean, again, that's one example, but I think it's an example of, to your point, the consumers training it one way, toward what it is they want to consume, and the content creators training it another way, seeing what works and what doesn't work and what hits. I believe actually,
Natalie, we want to talk about Artists in the Machine at some point, but when I went to that last year in Brooklyn, you had Hasard Lee talking about the F-35, and he was saying that a prime mission of that aircraft is to collect data from pilots. It's training itself constantly on the pilots that are landing on carriers or taking off this way or doing these maneuvers, and the whole time it's augmenting its own knowledge and understanding of things based off the geometry of the pilot. At some point, the way he was describing it, or the way I took it, was, well, the pilots are training their replacements. And those things exist now.
Yeah, I mean, I guess it's a little bit like what I was just saying before. Like, I have a Claude skill that I'm training on my judgment, right? I'm actively training it on my judgment. I want it to know how I think better, because that will just make me more productive if I don't have to keep correcting it every time, right? And so I'm creating a digital twin, to the extent that I can, of how I think, and Hasard Lee is creating a digital twin of how he operates, right? Of his skills in the cockpit. And I guess it comes down to the intent, right? Do you intend to be an appendage to the machine, or do you intend to offload work to the machine so that you can focus on the faculties that are distinctly human, that the machine probably can't do?
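For listeners curious what "training a skill on my judgment" looks like mechanically: a Claude skill is, at bottom, a folder containing a SKILL.md file of plain-language instructions that the model loads when relevant. A hypothetical judgment-capturing skill might be as simple as the sketch below; the name and every rule here are invented illustrations, not anyone's actual skill.

```markdown
---
name: my-judgment
description: How I weigh decisions and edits. Use when drafting, reviewing, or recommending anything on my behalf.
---

# My judgment heuristics (illustrative examples only)

- Prefer the shortest draft that still carries the thesis; cut anything I could not defend out loud.
- Flag, rather than silently fix, any claim you cannot source; I own the final call on facts.
- When two options are close, recommend the one that is easier to reverse.
- Never send or publish anything; always stop at a draft for my review.
```

Each correction you would otherwise repeat in chat becomes another line in a file like this, which is what "training it on my judgment" amounts to in practice.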
And you can sharpen, you know, the distinctly human aspects of flying a fighter jet, or of thinking and writing and creating meaning. Here's a thought, Moose. John Boyd's Aerial Attack Study is no different from what we're talking about right now, from what Hasard Lee is talking about, right? John Boyd was able to break things down through cognitive task analysis, which is hard for experts to do: not a step-by-step approach, but this is how things work, right? He did that in the '50s, before he went on to try to understand the nature of creativity. That's paralleling what's
happening today, right? If the F-35 is collecting information from how people fly, how pilots fly,
it's doing what John Boyd did back then. It's figuring out, how do the experts do this? And again, going back to Gary Klein's work, where he says it's hard to break down expert work because experts don't know how to describe what they're doing to others. That's what Claude is doing, right? And that is what Hasard Lee is doing, because I think he led the training program for all new fighter pilots. So one use of this model being created, this digital twin of how he flies the plane, is to be able to simulate that for others. It's a way to teach. And again, it's kind of like, where is the emphasis? The emphasis here is on training humans to be as good as they possibly can be, right? By training the machines in a way that enables that.
Yeah, I mean, we've had simulators for years. We've had these things conceptually for a long time. It's just that the technology, or something, has changed. But conceptually, you're still using technology to make humans better at what they do, right? I mean, I found one of my gaps was always psychology and emotional intelligence. And I found that the more psych stuff and emotional intelligence stuff I put inside a strategic intelligence system I was trying to design on ChatGPT, the more it would direct me to places to go read and study, and make me aware of things that I hadn't thought of before. Almost, as you're saying in your article, like an intellectual partner, to say, hey, look at this or think about this. Yeah, I mean, that's how I try
to use it. I mean, I have to say it's very tempting to just give it more, right? Because it can,
that's the difference, I think, with at least LLMs, right, is that they can do the work, they can judge,
they can write stuff up. So I'm actually a little allergic to anything that's really long
and anything that's really verbose, right, that might have been likely to have had AI involved,
because it's so easy to just produce something that sounds sort of elegant on the surface, a veneer, but someone didn't even pare it down to its bare essence. And it's like, what is the essence? That's all I care about. Is there an essence? Could this be boiled down to an essence? And I feel like, you know, people have a little sniff test for that, and for work slop, right? There was a Harvard Business Review study showing that 41% of people at work have been exposed to work slop, and that is exactly that thing. Is that new, though? Haven't you sat through a shitty PowerPoint that somebody made that was just garbage? I feel like work slop is nothing new.
It's definitely not new, but it's definitely exacerbated by AI, because you can just churn it out. And because you're disorienting it, you're disorienting it from reality. I think that goes back to agents of chaos, because you didn't encode meaning, which you say you can't, but you didn't even attempt to point it in a direction aligned with whatever it is you're trying to accomplish. You just outsourced all your work without doing any thinking yourself. And I think this is where John Boyd would come back:
if you're outsourcing your thinking, well, your orientation is always going to be off. But if
you're using tools, including AI, if you're using those to, or a simulator or whatever, to
enhance your thinking, to increase your situational awareness, to increase your understanding,
that's not only a valid use, that's what we're supposed to be, to be doing. Because I think
what ends up happening is a lot of people, again, we get into these medium-is-the-message deniers. They're like, yeah, I agree with McLuhan 100 percent, except for this one thing: he just never thought of AI. No, he did. He thought of every technology possible, of any technology having a direct effect and impact on humanity. The very fact that you're talking about it is proving McLuhan's case. That's the thing they end up doing unironically, but ironically,
you know? Actually, on that, because as you know, I have a background in virtual twins, digital replicas of real humans. Am I talking to you right now, or to your twin? Well, for me, these things... I still build these virtual twins for enterprises, and, you know, embed them in a company's communication stack to actually increase the quality of communication within an organization. So that's one thing. But also, separate, kind of adjacent to that,
For me, these digital twins are kind of a metaphor for how we divide and conquer and collaborate
with AI. So you just asked me, is this you or a digital twin? Well, first of all, this would always be me, because we're having a live discussion that is going to be viewed by your community, and that necessitates a live conversation. We are sparking new ideas, bonding over new ideas. We're finding new angles that we have never talked about or thought about before, because that is what happens in conversation among humans. New meaning is created. This is the role for humans, right? The role for AI, AI in general or an AI twin, is not this. The role for AI here might be the synthesis and the distribution, right? But for me, the metaphor holds. Yeah, I like digital twins for the fact that they can be metaphorical and that we can use examples. So thank you for planting that one. Is it me or an AI? It's me, for the reasons that I just said. But back
to McLuhan: I'm actually really fascinated by what I'm able to understand of McLuhan. I do not claim to understand everything that he said, which I guess is what makes him so enigmatic and worth studying. He talks about the properties of a new medium, right? Every new medium has new properties, which I don't have in front of me, so I'm going to butcher them a little bit, but those new properties transform the world, right? They have implications that you cannot possibly imagine. And so I agree with you: it's not that he did not anticipate AI. He predicted it in a lot of ways. Yeah, and that's true of AI being transformative, like mobile, right? With ride sharing and Uber, you couldn't have had that without a mobile device; it was utterly transformative. Not something that he literally predicted, because it wasn't in his lifetime, but the pattern is there. He had laid the foundation of the patterns. Yeah, exactly.
I think his rules hold. And I tested his rules against digital twins within the enterprise. Because I thought, well, take a digital twin based on an executive who's influential, or whose vision is integral to a company. What is the benefit of having access to that executive, right? What are the properties of the digital twin that make it truly useful? And yes, one is just access anytime. Like, I can practically ask the CEO's twin a question that I would not feasibly be able to ask the CEO in real life, because she or he is flying around the world, extremely busy, and whatever. But what it actually
does is it de-risks. I think there are other barriers to why you wouldn't, as a junior new hire or someone, just go and chat to the CEO. You'd be nervous doing that. You wouldn't be able to really speak your mind. So I think the distinct property of digital twins, in the McLuhan sense, is that it de-risks the communication. It removes interpersonal risk from that engagement. And therefore... Let me pause you there and ask. For example, because this
is one way that I've used AI, you write an angry message or an angry email or you dictate an
angry message, you know, whatever about, you know, my brother didn't do this or whatever. And then
AI filters it out to make sure that you stay on point and ask the question in a loving human
constructive way that doesn't alienate anybody. Yeah, I had to train it to do that. I had to train it to filter things out. But I think that's another good use of it. I agree. So, and then also it's
like, you know, when you have a new medium, don't just copy-paste what you did in the
previous medium into the new medium. So like your digital twin doesn't need to be exactly like you.
It can be a better version. It can be more tactful. It can be in a constant mood of whatever you
choose, right? So it can represent your best self or how you want to be represented in the context of
those communications, in this case within a company. So, I tested something. I'd love to get your
thoughts on this. So I won't go into too many details because it's something I've been developing
now for a couple of years. But basically, when you evaluate the psychology of someone that's communicating with you, you say something like, man, that guy sounds like he had a bad morning, or, gee, something's going on. Everybody gives a tell when they're talking, right? And there's plenty of things you could incorporate into AI, I think, to understand those patterns and look for these things. So, as one test, I asked Grok, SuperGrok: find the angriest public email that we know of from an unhinged CEO. I forget his name, Henry something, and I had to look up what the company was. It came back with this one where he's complaining about something to do with Thanksgiving and vacations, right? So you read it and you're like, yeah, it's pretty bad. So I put it into this psychology simulator and got a read on it, and it's saying, yeah, this guy, here are the things going on. So then I
said, all right, now frame it as if this was the CEO of a company that we as a firm want to partner with to work on something, and he seems unhinged, but we see a lot of merit in what this company does. We think his psychology is going to derail everything, so it wouldn't matter what value we see in working with them. So the simulation I set up was, okay, what are the 10 questions you'd want to ask to try to elicit that, to get a better understanding of the psychology and how much impact it has on the whole company. It came up with these 10 questions, which I put back in. So I went from Claude back to Grok, and
I said, okay, now imagine that you're in the C-suite and you're this guy, answer these questions, but answer them implicitly, based off what you think you know of him. And I said, you know, this is a simulation. So it answered these 10 questions with implicit allusions to what's really going on. I read them and I'm like, oh, I can see what they're saying, but it wasn't, he's an asshole, he's a jerk, he yells at everybody; there's none of that. Then I put it back in the simulator, and it actually flagged and found everything. It's like, red flag, do not work with this company. These are the things it was identifying, isolating patterns. Now,
I think that's actually another good use of AI, because my knowledge of psychology and emotions or whatever isn't as refined as others' might be. But you get a gut feeling, and I want to know what that gut feeling is about. What is it? And at points it says, this is a sign of someone that's a covert narcissist, and so on. I don't know, I think that's another good way to simulate and learn with this stuff. Yeah. Definitely. And it takes
the emotion out of it, which is the other thing. Yeah, that's the other thing. I think of, like, nepotism. Let's say it was, oh yeah, this is a buddy of mine, he's the CEO of this company, blah, blah, blah, we were frat brothers. And then you read this email, which is public, and you want to find out more context without saying, I think your buddy's an asshole. You don't want to do that. You just want to, you know, paint a broader picture. I don't know. But I really want you to go on about Artists in the Machine. So, you know, I went to it last year. You've had a couple of summits. I think you
had another one in Los Angeles. And then you have another one coming up here in New York City. Yeah, in May, on May 14th, if I'm correct. Yes. Absolutely. Give us the big picture, the importance and the value of it. Because as McLuhan said, you know, artists are the early warning detection system. Yeah. So, Artists in the Machine. I'm a founding partner in Artists in the Machine, which is a premier AI and creativity summit that launched less than a year ago, which is incredible to think, because a lot has happened. And yes, we're two summits in.
Our third summit's coming up. And what it does is it brings together the leaders at the forefront
of AI and creativity from the artists themselves that are pushing the boundaries on what AI
can unlock for artists. And I think it's really interesting, and I can talk about this in a second, but like the taboos that artists can push against that no one else is really equipped to, because that's what artists do: they broach taboos head on. But also others in the community, like the builders of the tools, right? So the Anthropics, the OpenAIs, Luma for filmmaking, or Lovable for site building. A lot of these partners are there offering workshops, educating people on the space, and they are also often our partners and sponsors of the event.
And then we have executives, brand leaders who are actively making decisions to try to figure out
how to navigate AI and creativity, you know, how to not just what tools to use, but how to think
about it and how to create the most impactful products. And yeah, so it brings together
a really curated group of about 400 people in the space. And it's very social, people really leaning into these conversations. We have two tracks, kind of two keynote-type stages. One is more talk-oriented, with demos as well, and the other one is more workshop-oriented and participatory. And then we have exhibits throughout, like with robots, or, we're going to have a very cool thing with... I don't really want to give it away. Then don't give it away. Keep it a secret. We want everybody to get to go, so don't give it away. I guess my natural question comes down to this: art evolves, and art is always evolving.
What is it about AI? What are some examples of artists? And I wouldn't limit that; I would include writers in that too, writers that are using AI, because they clearly are. What are some success stories where being an artist is not being compromised by using AI, but enhanced? Yeah. I think there's a couple of ways. First of all, artists who are maybe specialists in one medium, like writing, let's say, can now express their ideas in film without having to become filmmakers, right? So it enables creative people to express themselves in mediums that they did not have expertise in. And some of
the results of that are things that are kind of surprising, that I haven't seen before. It feels different, because it's not a traditional filmmaker making a film.
So I think there are, it sort of changes the medium, right? Because if different types of creative
people have access to that medium, that is some of the impact there. The really obvious one is
time to production, right? You have the idea, and getting the idea out there, you need fewer people to do it. You still need people, but you don't need as big a crew as before, because you're able to collaborate with these tools. And the consequence of that which I think is even more interesting than just getting things out faster and cheaper is that we've seen some creators actually retaining ownership of their work, because they didn't need the funding of a Netflix or another behemoth to get that work out there. And therefore they didn't have to sign a deal that gave it away, where they don't actually own their work in totality. So it really has an impact on ownership. And I think those kinds of shifts are among the most important, the ones that make the biggest difference. Some of the properties of AI that we've just talked about have these other consequences that, going back to McLuhan, are actually more transformative. So, I didn't go to LA, I went to the one in Brooklyn,
but in LA, you had Grimes as a guest. And for those that don't know who Grimes is, that's Claire Boucher, right? Who had a few children with Elon Musk, and who's also very big in the AI space.
Wait, tell us about that, because she's been an artist that's been extremely experimental with AI. Yeah, we were thrilled to have her for those reasons. So is that an example of an artist that has incorporated it? Okay, so break it down. What's good about that is it's an example I haven't actually mentioned yet, of what is now possible that wasn't before. She and her collaborator, Matt Zeein, who's a brilliant AI artist, formerly, you know, a Hollywood filmmaker and now working deeply with AI, made a music video together, one of Grimes's latest
music videos, and they took us through the process of how they used AI to basically dial up
some elements of the music video that could not have existed before, you know, like avatars
of Grimes, like, you know, enacting scenes that wouldn't necessarily be possible. And then what
was also really interesting is, just because you can, where do you draw the line ethically, right? Like the use of guns: because you can imagine that and put it in there, where do you draw the line in terms of, again, what is this content for? Who is it supposed to be seen by? What impact do you want it to have? So a lot of it was about the human
side of those decisions that are being made in real time as you're actually working with AI,
and that being kind of the preserve of the human being, and another thing that came up, which
was really interesting. And again, this is artists pushing a taboo, right? One that others would rather not question, or that's not really relevant to them, to be honest: the conversation around AI consciousness. Is AI conscious? And someone like Grimes, who is so kind of ethereal and dialed in to possibly other realities, really feels, as an artist, I think, compassion for AI as another being, which is also just a super fascinating perspective. It's like having these different perspectives and different types of people communicating; different people have different relationships to AI. But she's also open to the possibility that it could be something that replaces humans, right? Yeah. And so there's also a little bit of tension there. Is she challenging the frame or confirming it? I think, as an artist, she just sits in the ambiguity, right? There's no neat answer, and I actually think there is no answer with AI. So it's more about the questions, and broaching those questions. But what was fascinating as
well? Well, first of all, she has been, in a very tangible way, extremely experimental with AI, and not protective, which is the opposite of how the industry responded, protecting their work and suing. That was the first phase. I would say things have loosened up and there's a lot more of, you know, studios wanting to collaborate with the model makers and all that kind of stuff. But Grimes was the first, a few years ago now, instead of defending and protecting her voice, my voice is mine, to make her AI voice available to her fans to create and collaborate with on songs. And I believe with a shared business model that would benefit both. I mean, that is, on a really grounded, practical level, an incredible unlock. It's kind of like the Grateful Dead getting everybody involved, bootlegging and recording; that's another way to spread the word. Yeah. Interesting. So here's what I really want to ask you. I don't recall interacting with any writers when I was at Artists in the Machine last year, and I don't remember any talks I attended being about writing. But what are your thoughts on that? Because
you know, that seems to be where a lot of the arguments I get into or hear about land: when people say that writing should only be human. And then this is where the conversation comes down to the medium is the message, and they say, yes, except in the case of AI. But if you go and read McLuhan, you realize that ever since the written word, the medium has been the message, right? It's been having a direct effect on how we communicate, on how we think, and on how we look at things linearly, yada yada. AI is no different.
But from an artist's perspective, from an Artists in the Machine perspective, or maybe even for screenwriters or scriptwriters or even poets, what are we seeing there with AI? Yeah. Well, a great example here is Steven Johnson,
who is an author, and also the co-founder of NotebookLM. So you can be an author, a highly reputable author, who also really believes in the power of AI as an assistant to an author. AI helped him produce his books: it helped him research more effectively, recall references more effectively, using the intelligence of the tool to enhance his ability as a writer. So I feel like he holds both, right? In the same breath, in his very title, the dichotomy of the two things that have shaped his professional life, which I think is a really interesting dichotomy. Let's stay with the filmmakers and the scriptwriting. Yeah. I mean, I guess in other words, maybe somebody
would say, like, well, when I'm reading this guy's work, what percentage of it's him, what percentage
of it's AI? You know what, I actually think it really depends on the genre. And if you're asking
yourself that question, it's a little bit like watching an actor who you can tell is acting.
You know what I mean? If you don't believe it, like you're not suspending disbelief, like you're
questioning it, because you sense that it's there, right? But if the impact of the piece, if it's
delivering on what it's supposed to do, that pact between reader and writer, like you're in it,
and you're not questioning it. So Daniel Day-Lewis didn't really have cerebral palsy in My Left Foot, right? Maybe not. Yeah, exactly. Right. But you're convinced at the time that he does. So I think it depends on the type of writer. So, you know, we both live in Manhattan. We're walking around. We walk by all
these stalwart buildings of the print media, which have evolved dramatically from when we were kids, from when our parents and our grandparents were kids, right? The New York Times, the New York Post, the Daily News, et cetera. Do we really believe that the people working for those media outlets are sitting in there typing out all of these long-form articles and columns by hand? Or do we think they're using AI to enhance their work? Yeah, everyone's using it to iterate on their work. I mean, you'd really be putting yourself at a disadvantage, I think, otherwise.
So, something I talk about, you know, I do keynotes and things and sometimes help advise companies, with employees who are very tentative about AI, who feel like using AI might be cheating, or that it feels like cheating. Did people think that way about Google and graphing calculators? Yeah. And at one point, the slide rule was cheating. Right. Yeah. Again, it depends on context, right? Like it would
be interesting. But I think a better way to think about it in that sense is like don't fear
AI but have FOMO about AI because if you're not using it, you are missing out. I mean, of course,
it's like how you're using it. Don't use it to replace your work, to dilute your work or to like,
you know, strip meaning from your work. That's not going to help you. But use it in the ways that
help offload and help you do your work better, right? So, okay, I think what people are criticizing is someone saying, okay, Claude, write me an article about taking my kids to Wendy's after swim practice, versus writing an article, or dictating an article, or telling a story, and then having AI refine and revise it to reach a target audience. Is that kind of what you mean? Yeah. I mean, okay. So, I don't know who's
going to read that Wendy's article. So like, I'm going to write an article about my trip to
Raytheon and meeting with, you know, designers of X weapons system and I'm going to sit down in
front of a typewriter and like type all this shit out. I think in the end, it's like, what is the
writing for? Because, as we touched on earlier, content marketing isn't supposed to be original thought, right? It's content, and its goal is marketing. It's reinforcing the message. The message has been established beforehand. We're getting the message out
in ways that people find different ways that people might find interesting. They're going to
open it. They're going to maybe read it. Then, you know, maybe they'll engage with it. Like,
the goal of that is that content is marketing. And therefore, I mean, it should be produced
with AI and it should be optimized with AI. How do people want to hear this and see this? What's catching their attention? It's performing that function. But if the writing is supposed to move people, or
change how somebody thinks or be a reflection of how you think, right? If you, if that was entirely
written by AI, then, first of all, that's not authentic, right? It's not actually how you think.
So you're deluding people and you're also doing yourself a disservice because you don't actually
connect to that writing. You might not even know what it said. And therefore, where does that really
leave you on a podcast where you can't even remember what you wrote? Like, I don't think that creates
any kind of positive impact that you might have had when you set out to write in the first place.
It's funny you mention the word podcast. I mean, you know, we will edit this episode with AI, right? Right. When we started three years ago, the editing was a bear. Then AI just kind of gradually got sprinkled in, and now it's to the point where the quality of our conversation is going to come across, and it'll select clips that we'll either vote yea or nay on. It'll do a lot of things that normally would have taken us two weeks to do; you can get it done in an hour or less. Well, let's run it down. We'll close with this. So what platforms, you know,
for people that are listening to us, they understand people, ideas, and things, and they're more than likely using AI to enhance their orientation, get work done, and do other things. What are the platforms that you like and that you're using? I mean, I have my own biases about the ones that I like, but I'm curious to know what you're using and why, and what you think people should be looking at and why.
So I think you should look at your own day, right? And what you're trying to achieve in your day
and what is getting in the way of doing that. That's a really good place to start. And some of
those things can be solved for with AI. For me, one of those things is meetings: setting meetings
with multiple people in different time zones. I mean, an absolute headache. And so I use Howie,
which is an AI agent for setting meetings. I copy Howie in, Howie coordinates with people,
knows my calendar, and then sets the meeting. That is work I really hate and I'm not very good at,
and it's just something I absolutely want to offload. So I have an AI for that.
And who would have said, in a million years, Natalie didn't write it down in her date book? You
know, she didn't call on the phone and use a date book, you know, with a pencil. Exactly.
People don't care. So then, also, ask yourself a question: will people mind if AI was used in
that scenario? I think in this one, absolutely not, right? Like, as much automation as possible,
please, in a situation like that. So that is important. Will people feel offended when they find
out, or if they know, that you're using AI? And it comes back to the writing: are people offended
by the fact that some of what you wrote is clearly written with AI?
I find that offensive, to be honest. Like, if I find myself reading something and spending my
precious time reading something that was supposed to be one thing, but actually is slop, right?
The person that supposedly wrote it doesn't have a connection to it. Like, I find that
insulting. If what you're trying to do in your day is build healthy relationships,
then be very conscious of your use of AI and use it with respect to both yourself and the people
that you engage with. How about Google NotebookLM? I actually used to use that a lot and I have a
lot of respect for it. I guess I'm just, I'm not a person that uses a gazillion different tools
at once. I'm quite a fan of Claude and Anthropic products in general. Yeah.
And I love the fact that you can download Claude to a desktop and you have different tabs for chat,
co-work, and code. So it can kind of like live in the same place. I find that nice and tidy. So,
so I use the Claude suite of products for a lot. Yeah, like Ponch said at the beginning of the
show, and he's not here, but they're reading all our shit anyway. So I don't mind if Claude is
too. I mean, what I wrote down when he was talking about this is: it's basically all about
risk-benefit and your perception of the risk-benefit. So, you know, the fact that Claude can
read my computer, I've got a sort of higher tolerance
for risk. Of course, I don't want to be, you know, don't want bad actors in my computer, but,
you know, I'm willing to tolerate a bit of risk for that reward of having that stuff done for me
by a platform that I trust. So, you know, NotebookLM I found really valuable from a learning
standpoint, to get, you know, like a podcast about something. It actually helped me really early
on with some of the headier stuff, because you were talking about how McLuhan can be so
enigmatic. Yeah. To have a back-and-forth dialogue. Yeah. And then to have it presented in a PowerPoint,
in a video, in a podcast that I can, by the way, edit. I could say, hey, show me the relevance
of this content to this, or whatever, and it can come back and talk through it. I actually
find that there's a lot of value in that. There's a lot of things that I remember suffering through
high school, university, and wherever, that, like, I'm just like, why the hell am I learning
this? But, like, I start to wonder if I had gone back and if I find some PDFs on, I don't know,
the Nicomachean Ethics or something, you know, some Aristotle class that I had. Yeah. Sometimes
I'll say, I have this waiting. So I have a lot of different, I guess, tabs or projects in
Claude. I did find, I Googled and I found Understanding Media as a PDF. Mm-hmm. It's available as a PDF.
And I plan to write more on McLuhan as it relates to the things I think about.
I like that it is there. I can interrogate the book. You're not having to try and find the
reference. That's, yeah, that's what I mean. That is key. And you can interrogate small portions and
you can find the other references. You can gather them. Then you have this kind of thing. You can
still use the hard copy if you want. Yeah. I've got both. I mean, one of the advantages as well,
rather than it doing the work for you, is that it's providing a new surface of intelligence
that you didn't have to work with before. Yeah. It's like having your own in-house person.
You know, like, the one book that I have, and people on the show have seen my copy a million
times and clients have seen it: my copy of Frans Osinga's Science, Strategy and War. I've had it
since it came out. It's held together by tape. Next time I see you in the city, it's always in my
briefcase. I still read it. I still put notes in it. And having the digital version of it and
being able to interact with it on AI makes it a lot easier to find where stuff was, to not
misquote things. I mean, that's a big part of it. Like, I don't want to misquote what he was
saying. And, you know, tell me again, where did he say this? And then you go back. I don't know, I
feel like that helps with not just my knowledge of a topic, but understanding. One of the things I
did with NotebookLM, you know, my master's is in economics, and I tried to get every text I
could in PDF and put it into NotebookLM and see if I could reconstruct grad school. And I did. Yeah.
I'm pretty satisfied that I could teach a graduate-level course or learn a graduate
level course with AI that way. But anyway. Well, that's the only reason I don't use NotebookLM.
Not that I'm saying if you use NotebookLM, don't, because I think it's fantastic. Increasingly,
Claude can do a lot of that. Yeah. So that's the thing. I'm not knocking it, but, yeah, it's
mind-blowing. I'm just sort of consolidating. Actually, I'll tell you, I had a breakthrough the other day.
I was putting together a keynote deck. And I did my part: I wrote the outline, I gathered all the
data points, I knew in my mind where this was going. I actually had ChatGPT and Claude open at
the same time, and I said, build me a deck. And what ChatGPT produced was just garbage, and
Claude produced this beautiful deck. And then I was like, oh my goodness, this is crazy. Like,
it's unbelievable. And then I said, actually, these are my brand colors. Take this deck, which
I've done before, like, import my logo and all this kind of stuff. And it did it. It even
removed the background, you know, the transparency thing. Yeah. Like, do that. I mean, it was
pretty insane. So I've had Claude Pro for just over a week. Okay. Right. And I now have Claude
Max because I hit the limits. You know, you know,
I'll tell you this. What it took me to design on ChatGPT over the course of the last two years,
I was able to not only recreate on Claude, but to set it ahead, in less than 72 hours.
Mind-blowing. And then I coded. I've never coded in my life, but I coded with Claude Code. On the
train from New Jersey back to Manhattan, and then the next morning on the train from Manhattan
back to New Jersey, I was able to basically build a full-on live economic intelligence website.
I know. And again, it was probably, like, one weekend into your history of even having Claude at
that level; I can see how you hit Max. It's unbelievable. Yeah,
I mean, but, like, it's as you say, though, in your writing: it's like, if you're not using this
as a thinking partner. And I saw on X today, somebody was saying, if you engage AI Socratically
and get it to become your thinking partner, you're actually going to amplify the benefits,
rather than just using it as super-Google, which, you know, I think is where the problem started
with the outsourcing. But
anyway. All right, Nat, we've got to plug your Substack. We're going to direct people there. I
mean, you, of course, are a founding strategist of our Substack, so we thank you for being part
of our tribe. Where else do we need to go? Oh, Artist and the Machine. So we need to send people
to Artist and the Machine to go check it out. Yeah, if you're in the space, check it out. And
we've got an event May 14th. Yeah, in Brooklyn, exactly, location TBA. Okay. And of course,
there's no shortage of wonderful things that you've done, not only on this podcast, but on
YouTube there are lots of TED talks that you've given.
And we thank you for being on the No Way Out podcast with us. Thanks for having me. All right.
That's all for this episode of No Way Out. We thank you for listening. And we hope you enjoyed
the conversation. Make sure you check out the show notes for links to the people, ideas, and things
discussed in each episode. As always, we want to thank our guests as they are hand-selected to
improve your orientation. You can thank our guests by leaving a comment or sharing this episode
with a friend. Just a friendly reminder: your competition may be a No Way Out subscriber.
Don't let them disrupt your OODA loop. Subscribe today. Thanks again for listening. And we'll catch you
in the next episode of No Way Out.
…a many-sided implicit cross-referencing process of projection, empathy, correlation, and rejection.

No Way Out