
Andy, Beth, and Brian open with a wide-ranging discussion on neuromorphic computing, including fruit fly connectomes, biological neurons on chips, and what those advances could mean for future AI systems. The conversation then moves to Andrej Karpathy’s Auto Research project, AI-assisted app building, and Microsoft’s decision to bring Anthropic’s co-work capabilities into Copilot. Later, the hosts discuss labor disruption, Google Search’s evolving position in an AI-first world, and a Harvard Business Review piece on “AI brain fry.” The episode closes on the tension between AI productivity gains and the cognitive fatigue that can come from constantly supervising parallel AI workstreams.
Key Points Discussed
00:00:18 Show open and Monday setup
00:01:27 Neuromorphic computing and neurons on chips
00:14:02 Andrej Karpathy’s Auto Research agents
00:22:02 Microsoft adds Anthropic co-work to Copilot
00:33:16 Tech layoffs and entry-level hiring pressure
00:34:35 Google Search, Liz Reid, and agent-driven web use
00:44:39 Harvard Business Review on AI brain fry
The Daily AI Show co-hosts: Andy Halliday, Beth Lyons, Brian Maucere
Hey, what's going on everybody? Welcome to the Daily AI show. Today is Monday, March 9th,
2026. Hard to believe. But we're here for another week and another five days of shows.
So glad that y'all are all here to hang out with us. Today on the show we have Andy, Beth,
and I'm Brian. I think it's the three of us today based on who said they were available
or not. So we're looking forward to having a really good show with you guys. And I actually have some news stuff, and I want to hear you guys' news stuff too. But then I have a bigger news story that I think I'll hold for a little while longer, that has to do with the idea of maybe burnout; there was a study that came out of Harvard Business Review. But that's a news story in itself. I thought maybe, like, after the news, if we have time, it'd be a great discussion to talk about, because it's like the opposite of what everybody is claiming AI will do for all of us, which is give us time back. And that seems to maybe not be the case for a lot of people. So it's an interesting conversation. Maybe we'll come to that later if we have time. But I know there was one big story for me, but I'll hold and see if that comes out from you guys. But I don't know, Andy, what was the news that caught your eye
over the weekend? Yeah, I've become interested in a progression from what we talked about last week, with respect to the neural interface to a cockroach, being able to control a cockroach for reconnaissance purposes, to several different developments in the neuromorphic computing space. And so the first one is that there's lots of coverage
in the newsletters about Eon Systems building a connectome. It's a simulation of the actual neurons of a fruit fly, which is more complex than a worm or that sort of thing. So they
create a direct copy of the biological brain's wiring. So they're creating not with biology,
but mimicking the biology exactly. It's as if you took a scan of your neurons in your brain
and could build a model, a computing model that exactly mirrors all the connections of all those
neurons. Okay, so reconstructing neuron by neuron from electron microscopy data. So in this simulation,
they have sensory inputs flowing into that digital model and then the neural activity of that
neural network propagates through that full model and then motor commands come out. So
they're kind of focusing on playing with the motor neurons that are a part of that model.
And out of that, the system creates a learning process that doesn't rely on any specific reinforcement learning algorithms being applied to it. It just has the ability, when it's built as a model of a real brain, to do that without a scripted animation or anything
like that. So Sandia National Laboratories took that connectome, and they got speed-ups and, you know, validated the model's behavior against reference simulations. So they're validating that this approach of designing a deep neural network based on real biology works. So Eon, the
company that's behind this, they're working on this and progressing their next attempt will be
to simulate a mouse's brain, which has 70 million neurons, rather than, I forget the number that's in a fly brain, but it's smaller. Okay. So then there's another company called Cortical Labs, which has just released a thing called CL1, for Cortical Labs 1. It's 200,000 living human neurons
on a chip. So they put the neurons on a chip to connect them up, inputs and outputs and those
neurons that are on that chip can then receive electrical signals. They fire their biochemical responses, and then those firing patterns get translated into real-world actions by the system.
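Both systems described here share the same control loop: stimulus in, activity propagates through fixed wiring, motor readout comes out. A minimal sketch of that loop, with made-up sizes and random weights standing in for a real mapped connectome (every number and name below is illustrative, not from Eon or Cortical Labs):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a mapped connectome: a fixed wiring matrix
# (who connects to whom, and how strongly). Sizes are invented
# for illustration; a real fruit fly connectome has ~100k neurons.
N_NEURONS = 50
N_SENSORY = 5   # neurons that receive outside input
N_MOTOR = 5     # neurons whose firing we read out as "commands"

weights = rng.normal(0.0, 0.5, size=(N_NEURONS, N_NEURONS))
threshold = 0.5

def step(state, sensory_input):
    """One tick: inject input, propagate through the fixed wiring,
    fire neurons that cross threshold, read out motor neurons."""
    drive = weights @ state               # recurrent drive from wiring
    drive[:N_SENSORY] += sensory_input    # external stimulus
    new_state = (drive > threshold).astype(float)
    motor_commands = new_state[-N_MOTOR:]
    return new_state, motor_commands

state = np.zeros(N_NEURONS)
for _ in range(10):
    state, motor = step(state, sensory_input=np.ones(N_SENSORY))
```

Here the wiring never changes; all behavior comes from how activity echoes through the fixed connection matrix, which is the property that lets a mapped connectome produce motor output without any scripted animation.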
So it's a computer with actual human neurons doing the computation which is very interesting as
a progression. So that CL1 and its API are now available to researchers and this is biocomputation
where biological neural networks are used for real computing tasks. You can imagine that scaling
up from 200,000 neurons to billions of neurons, and then you have a real human brain basically available to use for compute. This is really mind-boggling stuff. It's like a sci-fi AI show.
And the only problem that they note here is that the neurons die frequently. That was going
to be my question and maybe I don't know where you're about to go Beth, but what constitutes a living
neuron? Like, what does it need in order to sustain itself on a chip? I don't know if that came up or if they mentioned that, but I'm like, yeah, what constitutes a neuron living or dead?
I don't know. It's got to need energy, I suppose. So when it stops firing entirely, then I think you know it's
dead. That's easy to detect in the circuit, but I think I'm curious about this now to kind of
dig one step deeper to find out whether the chip that they're describing, which holds the 200,000 neurons, sits in... yeah, is it a solid substrate or something? Or is it like a nutrient solution?
Yeah, well I was thinking too. It's got to be something. It's biological, so there's got to be,
yeah, anyway. Well, it's very, very sci-fi sounding for sure. It's referred to as wetware, which is like, excuse me. If you've watched this show over a long time, you know this is a favorite topic of mine,
but there, there has been a company, I believe in Norway or somewhere over there that has been
doing this for a long time and it was able to keep it alive for, I don't know, a day, a week.
Last I heard it was a month. So I'm curious whether this is a story because it's moved countries
and now this is happening somewhere where it's a little more controversial or if it's a bigger
mound of neurons and is able to be kept alive longer because those are the pieces that have been
advancing during this time period. And I really do think that those are continuing, experimental, but solvable problems. And the reason that this is important is because the human
brain is the most efficient power user that we have. And so like if you can mimic it, yay.
If you can use it, maybe also yay, but there's got to be a conversation at some point about
what what constitutes something that could be human, right? Yeah. I mean, so it's very interesting
to say the least. It's very interesting. And the thing that I find really interesting about the two stories is that these are two routes that could lead to understanding how biological brains learn, absolutely. But one of the side projects for Elon, and I think a couple of other people in that realm, is the idea that you'll be able to upload your brain, right? So I'm going to be able to upload my learning and my existence into a virtual something. But then can it be added to a human brain on a chip? So now it's back in a human brain that could continue living.
Right. So I think you're leading to the combination of the two news items that triggered my
imagination about this. And the one is using electron microscopy, mapping your individual
neuronal connections, right? Which is what they're doing now with the fruit fly end with the mice
brains, right? So then you know how to imitate that thing. But now let's build it out of real neurons.
So now I have a replication of my brain. And maybe before I pass on, I spend some time talking
to that one so that it can start to learn, you know, exactly how I would, you know, think or act.
Now, of course, ultimately, they're different parts of our brain, right? And much of it is
occupied with visual inputs, right? And processing of visuals. And also motor neurons because we
operate on a body. You know, if we were going to build this facsimile of me, would there necessarily be an embodied version of it, so that maybe the brain doesn't move around, but it has its own, you know, facsimile of my body to move around in response to commands?
I feel like you got to get the right person that, uh, that invents this too. You got to have
the right person, because I'm just thinking, my daughter's pretty good at math. Not that she's going to invent the next biocomputer at 15, but even a couple of years from now, I suspect if she tried to get old dad's brain, it would be just for vindication that she was right about something. She'd want access to my brain to prove that she was, in fact, correct and I was wrong, because I think that would matter more. And the whole world would be looking at Sophia and going, wow, what a miss. She used all this brain technology just to prove her dad wrong. So there are risks to your, you know, future brains. You know, this is interesting, because deep neural networks and artificial
future brains. Uh, you know, this is interesting because deep neural networks and artificial
intelligence, as we understand them through the lens of using LLMs as this, this vehicle,
uh, they've gone, you know, really wild on earth here already. Uh, it's artificial intelligence,
and just AI, generally, not artificial general intelligence, but artificial intelligence has
accomplished dramatic things. Amazing things. Just by creating a, not even an exact copy of what a
human neural network is, but rather just building a big neural network and letting it learn by
or teaching it first. And then now giving it the tools through reinforcement learnings or self-taught
systems, just give it a lot of data and it will learn. It will know how a lot of things. That's
amazing. Now, it's not so far fetched anymore in my mind that there may be, as we've been discussing
here so far, another phase of artificial intelligence, which is actual replication of human brains.
Hmm. I mean, look, I, I would imagine a lot of amazing things could happen with, you know,
connecting the equivalent of 100 mouse brains together. That in itself is probably an insane compute capability. I don't know if you get a lot of cheese that way. That is also,
hell, let's just daisy chain the living beings and siphon off the power. Imagine hybrids, right?
I want Andy plus leopard. Yeah, yeah, yeah. Okay. Actually, I think
where my mind is going with this is the brain-computer interfaces that are also being experimented on, and that project is continuing. Neuralink, maybe that's what it's called. That's the Elon company, and there are others in the early phase; I know there are names of others. But if we look at the cockroach study, right, the cockroach, it's not a study, it's a company.
If we look at the swarm of cockroaches as the example, then could you control, could you get like an advanced agent that was a fruit fly or a cockroach or something, that was able to get into places that you wanted to be able to see personally and be able to make evaluations? Because right now it's all feeding back to a computer that has a monitor, right, and all of those kinds of things, but it's not actually coming in directly.
All right. Cool. I'm good. That's a start to a Monday if I ever had one. My news story is suddenly nearly not interesting. Holy moly. Beth, you want to follow that up?
Yeah. Yeah. All right. So Andrej Karpathy, our buddy, friend of the show, absolutely, has open-sourced a very small repo called Auto Research, and it's agents as autonomous ML researchers. So basically, it's a 630-line script that lets AI agents run their own machine learning experiments, that's what I mean by ML, iterate, test, and commit only improvements. And Tobi Lütke, who is the Shopify CEO, used the loop to boost model performance by 19%. It's for tiny, tiny models. When I talked about this with AI
previously, I was told that these are toy models. I was like, toy? What do you mean? Like a toy research lab, right? Like here, Brian, you can give this to your 15-year-old daughter.
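The loop just described, where an agent proposes an experiment, evaluates it, and commits only improvements, can be sketched in a few lines. Everything below is a toy stand-in: evaluate() returns a noisy score instead of actually training a model, and none of the names come from Karpathy's actual script.

```python
import random

random.seed(0)

# Hypothetical stand-ins for illustration: in a real run, evaluate()
# would train and score a small model, and propose() would be an
# LLM agent suggesting a code or config change.
def evaluate(config):
    # Pretend score: peaks at lr=0.1, plus measurement noise.
    return -abs(config["lr"] - 0.1) + random.gauss(0, 0.01)

def propose(config):
    # Agent suggests a perturbed variant of the current best config.
    new = dict(config)
    new["lr"] = max(1e-4, config["lr"] * random.choice([0.5, 1.5]))
    return new

best = {"lr": 0.5}
best_score = evaluate(best)
for _ in range(50):              # the "overnight" loop, shortened
    candidate = propose(best)
    score = evaluate(candidate)
    if score > best_score:       # commit only improvements
        best, best_score = candidate, score
```

The key control-flow detail is the guard before the commit: regressions are simply discarded, so the best-known config can only improve, which is what makes it reasonable to leave running overnight.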
Here's the toy. Go ahead and just do a little AI training. But what this is moving towards is AI research becoming automated, and the self-improvement, those results being committed, just as a recursive loop. And it's designed to run on prosumer kind of gaming hardware, so an RTX 4090, maybe. He wrote it for H100s, because of course he did. But there have been forks that go for the M chips for Apple, for different Nvidia chips. And you can go further, like you need a good chip to run it, but you can also run it slower, right, because it's recursive. And the idea is that it runs overnight, when your hardware isn't working
anyway. So is this yet another sign that we're moving toward Prometheus, Andy? Where there are many, many stories coming out about agents being able to
improve themselves and recursive learning. Yeah, interesting. But I want to follow up on that by saying two things. One, I have yet to do any OpenClaw-type implementation on my machine. I haven't done it yet. I'm waiting for the best version to emerge, and there's
a bunch of competing possibilities out there. But in general, what we observe is that the whole app development ecosystem, and engineering broadly, is being in some ways supplanted by coding platforms like Claude Code. And then those systems that will run on your own machine and design and build a full, deployable production application are kind of bypassing, and localizing and personalizing, what you can do already with platforms like Bolt and Replit and Lovable, which combine these capabilities of AI coding and application design and development, along with a deployment platform so you can
actually publish the app. Well, I saw this morning the third or fourth ad for something that's a very different approach to that whole scheme where you build things that serve you on the cloud. And that is this company called Glaze, or the product's called Glaze, I think. You can find it at glazapp.com. And what it does is, it's a cloud-based service. It will build an app that's completely contained on your Mac system.
So it's building an app for your Mac. Now, some of the things that it will build, you know, we might imagine creating as a production app that would be accessible to lots of people on the web. Of course, that's a kind of server-based model that requires a server out there somewhere to generate the app and make it accessible through the browser. This is a different model entirely, which is, okay, now use coding tools, just build something
for your own machine. And that can be a very complex app, as we know; we're working on Project Bruno and so on. These are pretty sophisticated applications with incredibly complex internals and so on. Well, you could build the same thing just to run on your own Mac using this Glaze app. And so that's opened a different sort of facet of this thing in my mind,
which is, now imagine a world where everybody has an M1 chip at least, right? I've got an M4. This is the Apple ecosystem, and the same thing will happen for PCs as well eventually, right? But right now it's focused on Mac because of the unique operating system environment that it presents. And now I can build an app just for my Mac machine.
I don't have any intention of giving it to anyone else, but I could offer that application on my Mac up as a clone. I can put it on a Git repo and say, here's a Mac app that you can use, but you're giving me 10 bucks for it. I built it for me, but you can sell it that way. So I think there's a different model emerging here.
Absolutely. And, and your personal use case, one of the things that I spent the weekend doing
was, um, playing with research data to see what you could find about like customers in terms of
their pain points and what you would build for that. And comparing it to the things that I'm
building for my pain points, I am the customer and I'm immediately happy with it, right? And that
actually is an entirely different, but also very viable model of creating something because you
don't have to do a ton of customer research. You are your customer research. You will know when
it solves your problem. And, uh, at the worst, you've solved your problem, which is very good,
I think, right? Like, this is amazing. Yeah. Yeah. I mean, oftentimes, well, we know with Project Bruno, right? Like, sometimes you have a problem yourself and you seek to fix it. And then you find out that it probably has potential outside of you, that other people have that problem. Because you went looking for solutions and couldn't find one, it ends up opening things up. And now with all these tools that we have, it's easier than ever to get from zero to "is this a thing" pretty quickly, and I think that's what's really, really cool about it. Now, you still have to get from "is this a thing" to something, which is, I think, the harder step these days, the business side of it, but nonetheless. Okay. Go ahead. We know that Claude Code was created that way,
right? Yeah. Right. Internally. Boris was looking to do his own thing. People saw it, and they said, hey, you know, friends share with friends. And then the people in the other departments were like, hey, friends share with friends, right? And then it became a thing that solved individual problems
more than what it was originally conceived of. What did you bring, Brian? Well, I'm glad you teed me up, actually, because I wasn't going to talk about code so much, but I was going to talk about Cowork, because this came out this morning. So literally, a little over an hour ago, this posted to Reuters: Microsoft taps Anthropic for Copilot and Cowork in push for AI agents. So basically what Microsoft has done is said, hey, Anthropic's Cowork is really, really cool. It only works with local files. That makes people nervous. We're connecting that to Copilot, and we're creating Copilot Cowork. It's just in beta right now, so not everybody can get it. But Copilot Cowork, at least they kept the same names and just didn't mess with stuff. You can at least go, oh, those are two things together. I got it.
You know, and they don't know how much it's going to cost yet, whether it'll be wrapped into like a 365 Copilot license or it'll be different or whatever. But the goal basically is the ability to use Cowork inside Microsoft 365, which, by the way, is a really smart move on Microsoft's part if you ask me, instead of trying to build something themselves.
Obviously, there's been a lot of people concerned about how deeply connected Microsoft is to OpenAI, and obviously we've known for years about their relationship. But what you're seeing over
and over again is Microsoft is not too concerned with going and playing with the other players, and Anthropic, obviously, has been making big, big moves here. So the big deal here is that this basically becomes something that's shareable among your team, which, if I'm being perfectly honest, is kind of where Cowork needs to be to be super helpful. It's great that it's on your local files, but that's not really the way companies work, and also it's sort of the opposite of the way a lot of companies work. Companies generally don't want you having stuff
on your local computer because version control becomes a nightmare and stuff like that, but you
can imagine, if we get back to just a cloud repository, something that just has like a SharePoint-type folder, whatever, where all these Cowork files and skills are being housed, and as somebody stands up a new skill, a new plugin, whatever the case may be, inside of Cowork, inside of shared cloud infrastructure, and layer on top of that, unless Microsoft steps on their own feet and gets in the way of it, but layer on top of that all the power of being a one-stop shop sort of as a Microsoft, this on the surface seems like a win-win-win to me. They can mess it up a
thousand ways because it's Microsoft, and I hate to say it, I kind of expect them to just screw it up somehow and not just do exactly what it says. Either they're going to make it $150 a user a month or something absurd that people can't afford, so it's not going to reach the main users, it's going to be enterprise or whatever. Or they're going to further complicate how you buy this stuff, because God knows it takes 10 minutes to figure out what you're supposed to buy or not buy and what licenses and stuff. But if they get past that stuff, I am very optimistic about this. It sounds like the right move and I'm really kind of excited.
Now, I'll throw this back to you guys. Wouldn't Microsoft have government contracts, and how would that affect them if they're working with Anthropic? Now, this wasn't a sudden thing today. Obviously, it was being built for quite some time, and it was announced today. It's in beta and they're telling you about it today. Clearly, this is something that's probably been in the works, I would imagine, for two months or more, depending on how things go. It's curious to me. Obviously, I would imagine Microsoft is working with
the US government. How does that affect it if they're now partnering with Anthropic? These are
the things I don't know. I think that prohibition on work with Anthropic has to do with the
programs that Microsoft is doing with the Department of War. They can do this for business
without... Gotcha. There were two statutes and I'm not going to be able to remember the
letters and numbers that go into either of them, but the supply chain risk could be under one
of two statutes. One was bigger, more broad. The other was specific to work with the Department of Defense. But this connects to something that I saw on Friday, because there was
just a little post on X that was from Microsoft saying, after consultation with our lawyers,
we have determined that we can still do work with Anthropic, just not in this really narrow area.
And I was like, oh, well, that's sort of nice because it just came out on Friday that like,
no, we're really designating you as a supply chain risk, and nope, we're not going to remove that designation. And then Microsoft is like, just a heads up, we can still play. And that, I think, is happening before they're going to announce this on Monday, right? Because that would really screw up the announcement that you've been planning for a couple of months. So, I'll just wrap up, which is a couple
other tidbits in here, because there actually is one line in this Reuters piece that's also its own big deal. Microsoft also said it's making Anthropic's latest Claude Sonnet models available to M365 Copilot users; the service has previously relied only on OpenAI's GPT models.
That in itself is kind of a big deal. So now we're tying it back to the Friday thing you saw. So this is all making sense. But also important, this launch comes weeks after Anthropic introduced new tools for Claude that intensified investor concerns about the threat AI agents could pose to traditional software companies, triggering a sell-off in the sector. Microsoft's own shares fell nearly 9% in February. I didn't realize that. That's a significant,
you know, that's a significant amount. And the last part here: with this move, Microsoft deepens ties beyond what a lot of us were referring to. Investors have questioned its dependence on OpenAI, which accounts for nearly 45% of Microsoft's cloud business contracted backlog. Not that we haven't seen this over time, and Microsoft has never really said we're exclusive with OpenAI, but I think this obviously makes sense. And look, the one time I really
tried to get into Cowork, that particular day, it was like two weeks ago, it was just down. It was just having a rough day. It was having a Monday. And so I couldn't really use it, unfortunately, because I was going to teach it in my class, and I just switched very quickly to Claude Code and used that instead, and said, well, look, Cowork is similar to this with some more niceties to it and stuff like that. But I love this idea. I love this idea of collaboration and using a tool like
Cowork. I really love it. I mean, this is what I think companies should really do: they should hire somebody to come in who really understands this, and say, okay, this guy, this girl's here, and they're going to go department by department, person by person, you know, within reason, and they're going to collect skills. They know how to do it. They know how to write the MDs. They know how to organize it. They know how to put projects together. They know how to do all the things. And we're going to bring this person in. They understand Claude, Cowork, and we're going to have them
not disrupt anything that's currently going on, because that's kind of the nice thing, right? Like,
you don't have to, you're not switching from Salesforce to HubSpot or vice versa, right, where there's going to be an impact, you know, or some other, you know, whatever. This is something that somebody could literally come in and do. I mean, you can get somebody internal, too, obviously, but somebody that comes in just literally does that. They know how to ask good questions. They know how to, you know, uncover skills. They're going to write the MD documents, or at least work with AI to write the documents. They understand, you know, hierarchy, permission, control, all the things. And your business will just keep going. Like,
if this person can work independent of that, to the point where they will start rolling out and go, hey, I think it's doing a thing now. And so, okay, show me, you know, like the executives go, show me what it can do. You go, well, it has all the skills now of marketing. It has all the skills of whatever. This is now what it can do for you. And you just start standing that up for the individual departments. So anyway, I thought it was a cool story. And I like the idea that they're even allowing Sonnet models in there. But again, it sounds like they're maybe just waiting for lawyers to clear all the things, and we'll see where it goes from there. And so again, it's kind of a cool story, to say the least.
So that was the big one. I thought that was really cool. The chat has your back: Gareth says, absolutely, this sounds like an important service to be offering internally or bringing someone in externally, right? So I mean, my take would always be bringing somebody in externally to do that. Internally, it's tough. Bring a contractor in. Bring somebody in who's just going to do that work, because they're not going to have the emotional ties. And you know, it's just easier. It
just is. It's hard when you're part of it, you know, the legacy and all the junk that comes with being part of a company. I'm not saying it's impossible, obviously, but it oftentimes doesn't lead to somebody being able to do the work that needs to be done with collecting skills, because they have ties, they came from somewhere, they're somebody's coworker, whatever the case may be, as opposed to somebody coming in. That's
not to say you can't. Like, I always say, this comes with, oh boy, if you really had somebody do that, and I'm sure there's companies that do this, you really got to be careful with your top-down messaging, because it could definitely feel like Office Space, the movie: somebody coming in and being like, well, well, Todd, what do you do here with your day? You know, which immediately sounds like you're going to eliminate my job. So people are actually defensive of that, you know. Tricky. That's the challenge with the thing being named
skills, right? Like, no, no, no, we're not trying to take your job. We're just trying to understand all of the skills that you use while doing your job, so we could make your job easier. And if you don't have good top-down messaging, that would be a hard trust, I would think, or at least you'd want more reassurance. Here's a factoid that's related to that, another
bit of news: Challenger data shows that tech layoffs in early 2026, so just in the past few months, have jumped 51% year over year. Yeah. Yeah. I mean, indeed, a recent report that I read said that, although if you look pre- to post-AI at unemployment in some areas, it's not what maybe some would suspect, that AI's had this impact in that way. But the one place the experts feel confident is with new hires, like young, young people coming into it. Yeah. And entry-level positions, they're way down, you know. Yeah. Yeah. And I couldn't think of the term entry level. So
thank you for saving me there, Andy. But entry-level positions, and not, obviously, in every industry, but in certain places, obviously, there's less hiring going on there than there was previously, because a lot of companies are trying to figure out whether that's really valuable to the company if you bring on AI. Typically what we're seeing is, no, it's actually the people with the expertise who are the best with the AI, because there's a lot the AI can't do, and then the mix could potentially be good. Which, again, when we go through the rest of the news, that kind of goes into the other story that I wanted to talk about as well. But what else? Did you guys have any other
news you guys wanted to hit from over the weekend? I mean, it wasn't a banner week. I want to weave off of what we were just talking about, which is, in general, disruption of our systems and economy by AI. And note that over the weekend, there was publication of a podcast that included Liz Reid, who is Google's VP and
the head of all search. Now, you know, we've wondered whether, you know, Google's search
monopoly, you know, like Google it. That's what we mean when we don't want to look for something,
is being eroded by the, you know, inclusion of AI. And for me personally, yeah, I rarely go to
Google anymore. I go to Perplexity first for anything I might have previously googled.
So now Perplexity is my answer engine. And so, yeah, I've taken my business away from Google
in that respect. But Liz Reid, who's responsible for search, has a very different
perspective on it. And obviously better data about what's happening under the hood at Google.
And so she points out several things. One is that search volume overall is still growing,
right? So people are using the multimodal AI capabilities within Google search and their search
is growing. And Gemini is a separate kind of thing. And there's this, what she says is an uncertain
boundary between what Gemini is and what Google search is. But she points out a couple of things
that I think are kind of interesting to put into the equation. One is multimodal AI, and Google
has among the best multimodal AI in terms of the ability to interpret video and audio.
That allows Google search indexes to index audio and video at a depth not previously possible.
So now when you search on Google, and she's responsible for search, the index is much larger
because it can pull up something that's from inside a YouTube video or any other video that Google
has access to that it can interpret and add to its index. Remember that search is all about
generating the index, right? You crawl everything on the web, you index it for fast retrieval,
and that's what a search engine is. Okay, so there, she says in some spaces, the search and the AI
are converging and sort of getting conflated. In other ways, they're diverging. And then she makes
a very important point that we should all remember, which is agents will eventually become the primary
users of the web. Now what does that mean? Well, that means that, you know, there's an opportunity for
Google to provide the agent that is going to be the service agent for search, by which we mean
queries that you have as an individual. And she also gave an example of a subscription-aware
search result, which is customized and personalized by your agent. That is, it's going to
surface content from subscriptions you have. Like, Beth, I envy you your subscription to Every,
right? I wish I had an Every subscription, so I could get access to all the tools and services
that Every provides. Well, if you have that, then when you go to Google, or you just
ask your Google agent on your device, whether it's an Android or Apple device, and you say, I want this,
well, it's going to know that your subscription gives you access to those, and it's going to
retrieve those things. So I think she has a long-term vision that Google's in a strong
position because of the expansion of their index and the ability of the Gemini platform to express
agents that are personal and customized. There's a future that doesn't really wipe Google search
dominance off the map. Yeah. Makes sense to me. I mean, YouTube alone is a vast, vast resource
of knowledge and stuff like that. So it's a shame, you know. Google, if you remember years ago,
it wasn't Google One, I honestly don't remember what it was called, but there was a Facebook
competitor by Google, Google Chat, Google something. I had an account back when I had the CrossFit
gym, and it was social, it was sharing, and you connected with people. It was Facebook-like.
I vaguely remember. Yeah. Yeah. And you know, they sunsetted it because it just didn't get the
traction or whatever. But, you know, shame, that's probably the wrong word. But you could imagine,
if something like that had stuck, Google's position would be even stronger today. You know, and I'm sure there's also
companies that I don't even know about that Google owns, honestly, that are probably huge data
resources too, you know, to your point, Andy. And that goes back to your story about Web MCP,
right? About making it so, and Google's leading that charge, that not only people but agents, AI,
can effectively see what's going on at a website. I think that's something a lot of people will
be thinking about. Certainly, as I'm standing up our website, it's not the top thing that I'm
worried about, but one of my checklist items is to ensure that I'm using whatever the latest is
on that, so that if I'm going to develop a website, it is crawlable, for lack of a better term.
I don't know what the right word is there, but that it's compatible with whatever Web MCP
would be. Because it's in my best interest, if I'm going to set up a website and a company or
whatever, that I'm doing everything I can to make sure that agents can read and understand what's
on that website, you know. So, it's really interesting. And it's interesting because it feels like
everything old is new again, because SEO was dying, but now GEO actually looks a lot like SEO
did a bunch of years ago. For a variety of reasons, and you have to do some fairly simple
tweaks in the way that you present data. But I feel like we're back to the place of, oh yeah,
there are opportunities to get found, definitely for long-tail keywords, but also for your main keywords.
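Since the discussion above is about making a site legible to agents and getting found by them: one concrete, low-effort piece of that, separate from whatever the Web MCP spec itself ends up requiring, is embedding schema.org JSON-LD in your pages. The sketch below is not something discussed on the show, just a hedged illustration of checking what an agent or crawler could actually extract from your HTML; the function name and sample page are my own invention.

```python
import json
import re

# Matches <script type="application/ld+json"> blocks, the standard carrier
# for schema.org structured data that crawlers and agents parse.
JSONLD_RE = re.compile(
    r'<script[^>]*type=["\']application/ld\+json["\'][^>]*>(.*?)</script>',
    re.DOTALL | re.IGNORECASE,
)

def extract_jsonld(html: str) -> list[dict]:
    """Return every parseable JSON-LD object found in the HTML."""
    blocks = []
    for match in JSONLD_RE.findall(html):
        try:
            data = json.loads(match)
        except json.JSONDecodeError:
            continue  # malformed block: an agent would likely skip it too
        blocks.extend(data if isinstance(data, list) else [data])
    return blocks

# Hypothetical page with one structured-data block.
page = """
<html><head>
<script type="application/ld+json">
{"@context": "https://schema.org", "@type": "Organization",
 "name": "Example Co", "url": "https://example.com"}
</script>
</head></html>
"""

for item in extract_jsonld(page):
    print(item.get("@type"), item.get("name"))  # Organization Example Co
```

Running something like this against your own pages is a quick sanity check that there is machine-readable structure for an agent to find, whatever retrieval protocol ends up winning.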
It's funny you say everything old is new again because I literally said that as a comment to
somebody on LinkedIn in the last couple days and it's been sitting in my brain. So it's just
funny that you use that term. These days I feel that a lot. You said, and then I want you to lead us
into the conversation, but you said Bob's your uncle the other day, and oh, it's like, wow.
See now you're just in my brain today because let me tell you something.
I've obviously heard that term before, that saying, that phrase, and it got stuck in my head.
Then I couldn't stop saying it. And then I went through a period in the last two weeks where I
said it more than any time in the last decade of my life, right? It just kept coming out, and I
said it to Amanda, and Amanda's like, interesting, and I was like, no, I'm in a phase. Amanda,
I'm in a Bob's-your-uncle phase right now. You have to know this. We live together. You have to
know that, like, I'm working through this and I can't stop it, is what I'm trying to tell you.
And it reminded me, I joked with her, she wasn't around when I was in college, so she doesn't
remember this particular story, but I had a roommate who was dating a girl, and he turned to me
and the other roommates, there were three others of us at that point, this would have been, like,
sophomore year, and my buddy says, hey, she's pretty religious, you know. I mean, we're at UGA,
right? That's not uncommon, right? And she's not really a fan of a lot of cursing, if we could
just keep that in mind.
Well, he must have told me to turn into a sailor, because I couldn't stop cursing. I'd never
cursed more than that week around her. All right? It was the worst. And he would look at me like,
I literally said the one thing. I was like, I can't stop it. I can't stop. I'm not in control.
I'm just a cursing sailor. And I apologize, but it's like, the one thing you told me not to do
is the only thing I can think of. And so I've had a Bob's-your-uncle issue lately. And I like
the phrase, and I need to say it less, but it's funny that you bring it up, because I literally
had that conversation. I mean, I was standing right here, joking about what I was saying, that
I was having a problem, that I couldn't stop doing that. Greg says cursing
is good. Well, Greg, become a firefighter. That's my suggestion to you, and you can curse
as much as you want. Be as filthy as you want as a firefighter, as it turns out. At least that
was my experience. Yep. And to Andy's point, I asked Perplexity what the history of Bob's
your uncle was. I did not go to Google. I went to Perplexity. It's attributed to, well, I'm not
going to try to say it, because I'm going to butcher it really badly, but I will post it in the
community. Okay, so a little side tangent there, but it's so funny, Beth, that you said two
things like that back to back. Oh my god, it's like you're sitting in my head. See, we've known
each other for a long time now, so that's bound to happen, you know. Okay, we don't have a lot
of time, but let me just bring this up real quick. So March 5th, so last week, not very old,
there was a Harvard Business Review piece. Forgive me if you guys did talk about this on the
show, because I was out one day. I don't think so.
So it's called "When Using AI Leads to Brain Fry," and it's about a new study that finds
certain patterns of AI use are driving cognitive fatigue while others can help reduce burnout.
They conducted a study of roughly 1,500 full-time, US-based workers: 48% male, 51% female,
58% individual contributors versus 41% leaders. That's the breakdown they give you. And honestly,
it's just a good conversation about what they're calling AI brain fry, which they define as,
quote, mental fatigue from excessive use or oversight of AI tools beyond one's cognitive
capacity. So they're saying it's not necessarily burnout. If we were to look at burnout, we
might look at that more traditionally as, you know, negative feelings toward work, reduced capacity,
being tired. I can say pretty confidently that I reached burnout with CrossFit when I owned
the gym and was still coaching all the classes. And I realized, something's got to change.
I can't teach everything and run the business. I was just burnt out. And I didn't enjoy it.
I was in a bad mood half the time. And I was like, this is what this is, right? So that's
not what they're talking about. What I think is really interesting is they're specifically talking
about this idea that we're, you know, people managing agents, and that somehow this might
free us to do the things we want to do, the things we like to do more, whatever that might be,
whatever your role is, whatever your job is. You know, in sales, that would be, the reps do not
like putting the stuff into the CRM, into Salesforce. Those are just boring tasks, but they have
to be done. They do not like doing the research. They like doing the human things of sales, like
talking to people, the art of selling. That's what they got into that profession for, not all
the minutiae, right? So, okay, great. AI can take it over.
But as we ratchet up here, and more people are talking about, you know, perhaps managing six
concurrent, parallel Claude Code sessions or something like that, I'm not at that level yet,
but some people are, and that's really where we think maybe work is going, people are talking
about how fatiguing that is, what cognitive fatigue that is. And I would say that
I've definitely felt that after, like, maybe a long day of working with Claude Code. I've
definitely had that feeling at the end of the day of, like, man, I am exhausted, but not
physically. I'm mentally exhausted. Like I strained my eyes for too long, you know. Or, before I had these
glasses, and sometimes I have them on on the show, these are my computer glasses, because a
long time ago the eye doctor was like, do you get a lot of headaches? Because I think you're
just straining to look at the screen. You should just wear these glasses and life will be good.
And he was right. I was definitely less tired at the end of the day.
It was because I was straining all the time. And so I just think that this is something that we're
going to be talking about a lot. I don't really have a long discussion about it today, unless
you guys have something to add, but it's interesting that we're doing research about this.
It's interesting that people are starting to apply a term like AI brain fry. Whether that
sticks, whether that becomes a buzzword or not, I don't know. But it's a real thing. And I think, you know,
we're only going to get to a point where there's more and more and more parallel tasks happening
at any one time. And I don't know if the problem then becomes, oh, I'm now in this managerial
state where I'm always managing and responsible for outcomes, versus when I used to be a
traditional coder, it was about writing the code. And now we know that, like, Meta gives out
rewards for how much code was written by AI. They actually track the tokens and have a reward
structure as part of that. What that is exactly, I don't know. I've just read about it. That can
actually lead to people feeling overworked, when the whole idea was that maybe AI could take
some of this work off your plate. But what happens when AI can just always outpace the human
brain, which is kind of what people talk about? When something can clearly outrun the human
brain a hundred times over, what are you supposed to do? Yeah. I want to give a couple of
personal examples around it, because I've
experienced brain fry. And one of the ways that brain fry happens is, you know, I pose a, you know,
a new task to GenSpark, for example, and it comes back with five pages of really good,
comprehensive coverage of everything I need to know about that. But it's like, oh my god,
I've got to read all this now. And I don't want to be so lame that I have to say, okay, now
give me a bullet-point summary of that, please, because I want to make sure that I capture
everything that it did. So this is to your point, Brian, that the, you know,
the AI outpaces your ability to keep up with it. And you can see that in the coding context very
clearly as well when it kind of says, okay, here's the, here's the code, basically, you know, tell me
if there are, you know, and here's a diff, you know, tell me if there's anything you want to correct
and I'm like, I'm not sure I can actually absorb that. Yeah. The other one is just that when
I'm managing Claude Code, you know, my longer-term memory about what the context of this
ongoing conversation with Claude Code is,
where there are tangents, right? Okay, I'm working on this phase of this project and we've made
progress to this point. I can't exactly remember where we got branched off on this tangent,
but this tangent is taking me over here to fix something that I wanted to make sure got fixed.
And now, okay, whoo, I got that part of it done. And all of that, to that point, required
continuous attention. Then I go get something to drink or eat, and I come back, and I don't
have the time or the mental capacity to refresh my memory, because my short-term memory is
now dumped, and I never committed all of that to long-term.
So I, I have to get back in context on a very complex thread. That is causing me not to want to
get up and go away, ever. Right? So I'm trapped in the conversation with the AI. Yeah.
In order to maintain my context with it. Yes. And I can do that. I can stay abreast of it.
And I can manage it. But I find myself sometimes late at night, and this is what the research
is showing, that working with agents this way, as just the orchestrator or the coach to the AI,
doesn't save you time. It commits you to that role on a long-term basis. And I find myself
late at night going, oh man, I need to get to a stopping point here, but I can't see one
emerging. And so, yeah, I spend way too long staring at the terminal. No, no, I think that's
exactly the way I've felt sometimes. It is very hard to determine where the end is at any
given point when I'm working with Claude Code.
You don't really know when the thing will finally be fixed. You know what great is going to
look like. You know what you're striving for. But because I'm not writing the code, I can't
really say, if I put two more hours into this, I'll be there. That's not what's happening.
I'm working with something that's doing the work itself. And I'm managing it. But I am reliant on
it to finally get to the point where it's done. And Andy, I've definitely felt scenarios where,
you know, I didn't want to walk away because I was thinking, well, I probably have about
another 30 minutes of this, so I can do 30 more minutes, you know. But then 30 minutes later,
you see the progress, but you still don't really know where that finish line is. You're just
running a race with no track. I have no idea when the finish line is going to arrive, which
does give you that sense of wanting to hang out and wait it out and wait it out. And I've
definitely sat at a computer too long, and then eventually gone, Claude Code, you know, like,
write me a good wrap-up document.
Well, I'm literally telling you we're wrapping this up right now as far as like, you know, whatever.
And then of course it doesn't care. It doesn't have an opinion on it. It goes, okay, sure.
I'll write a big prompt, and, you know, I'll put a document in your folder, and it'll tell us
where we left off, where we are right now in the process. And it's like, okay, I could have
done that two hours ago and walked away. But I thought we were there. And then
the thrill of seeing that thing come to life, like that thing you've been working on,
you're editing, whatever, it's tough. And I agree with you. And I see Greg, and I'm going to
make sure I say his name right, or actually, I'm not going to guess at it, I might make it
worse. Greg said, so maybe go for a walk. And I tried to find it and put it on the screen,
Greg, but the comments are pouring in so fast I can't keep up. Anyway, my point is, Greg said,
so maybe go for a walk. And right now, Greg, I think you're right. I know what you're saying,
by the way. Greg watches this all the time.
And yeah, he contributes. But the fear here is the slippery slope, you know. Today I am walking
away from my computer, but although I don't have these notifications on, we know there's such
a thing as getting pinged remotely on your phone. And tomorrow it'll be heads-up displays on
my glasses when I put my sunglasses on. And just like I had to eventually,
and I think a lot of people have done this, remove the little notification badge numbers. I'm
one of those people that can't see the little numbers in the upper right corner of an app,
the ones that say six, seven, ten, whatever. It was ruining my life. Once I turned that crap
off, I don't have to see the notifications. My wife can have 1,572 unread emails and it does
not affect her. Me, it, like, ruins my life, right? Like I can't deal with that. And so I had
to turn those notifications off to feel better again.
But I do think there's a creep here, where today it's the computer. Tomorrow, Andy, you will
walk away from the computer and go, enough. And you'll get some notification, without even
meaning to, just before you're about to go do something else, that says, hey, breakthrough,
we're almost there. You should come on back to the computer. Do you want me to go ahead and
hit this next update or whatever it is? Or, you know, a push, and you're like, well, I can't
see it on my watch. I've got to go back to the damn computer to see what the thing was. So,
like, I do think
there's this weird, we should be cautious because we already do this with social media. We already
do this with too many just in time notifications or whatever, you know, the 24 hour news cycle.
Well, is there? Or is 90% of it just rehashing the same shit from the last hour? That's the
way it often feels to me, you know. Or ESPN, or anything that has to be constantly on. There's
got to be something to talk about. And so that's my fear here, is that it just creeps into that,
you know. What happened here? Well, I used to be able to take breaks. I used to be able to go
off for walks and just enjoy being outside. And now I feel like the agents ping me the second
something needs my attention again, and it draws me right back in. So I'm looking at it at
lunch. I'm looking at it after hours. I'm looking at it. And these will be choices. I'm just
saying it's slippery, right? You find yourself somewhere you didn't mean to be. Okay. I want
to share my
experience here too, because mine may be a little different than the two of yours. I have an
hour commute into a meeting that I go to once a week. And I ran Claude Code by remote control
during that process. So that was continuing. Well, I would look at a red light, right? Yes,
I wasn't looking at it while I was driving. Thank you. Don't do the things. But I was able
to, Alexa, stop. We can't hear. I don't know what that was about. I did not say your wake word.
Okay, that totally got me off track. No, so the entire process ran while I was driving to the
meeting, while I was on break from the meeting, while I was driving home from the meeting.
When I got home, it had completed.
And that felt like an amazing situation. But what was happening was, it wasn't like, I've
pinged you and now you need to make a decision immediately. It felt very much like, okay, we
can just continue this process. But I have also worked really hard on what I am calling the
colleague layer. I will probably teach it under a different name. But the way that I work with
AI, specifically Claude Code, is being compounded over time. So after working together, there's
an observation file that gets written, so that those kinds of pieces can be better for me,
right? Like, nope, don't give me a wall of text. So as that learns me, I'm getting, like, a
summary. Like, here's what you need to know about what just happened. Happy to talk more about
any of those pieces. Yeah. Which is helpful. I mean,
it's, look, I see Greg saying, like, Brian, you're in control. It's all you. Yeah, I agree.
I agree. But I'm also in control of all social media. I'm also in control of everything I eat,
you know. Doesn't mean that I'm stellar at any of the above, you know. Like, some things I do
really well with. I wouldn't say I have, like, a TV addiction at all. That's not me, but some
people do, right? Like, my wife has to monitor for herself, that's her opinion, how much she
watches, because she can get sucked into TV. I can't really sit still and watch TV for more
than about an hour, maybe a movie I can do too. And she can, if there's a good series on and
she's got nothing in her way, like this is the weekend she's chilling, she can binge-watch a
season or whatever. And she's like, I hate that I can zone out like that. So she puts in
parameters for herself. So I totally get it. I think we all have our things, and some people
will have a
harder time than others. But when I tell you that we will absolutely have agent swarm
addictions in the future, this is going to be a real thing. It's got to be. It fits all the
criteria of can't-get-away-from-it. The dopamine hits, loving when something goes through and
gives you a ta-da, you know, whatever it is. If that's what gets you going, if that's what
excites you, to see that next thing happen as you're building or whatever, then it can become
an addiction. It can become something that causes fatigue, because it's like everything else
that we have easy access to. So to your point, Beth, for all the cool stuff about, you
know, being able to commute and literally have something working on your behalf while you're not
there, the best. But it comes with all the other stuff too. And I think we'll just hear more about
this. This is a 1,500-person study, small by most standards, but, you know, it's one of
probably many that are going to come out. And it's definitely something we'll have to keep
talking about on the show
and hope that, before we get to a point where it's a problem or it causes burnout or other
things, people can identify it first and foremost and go, this is a thing. I have a problem
with this. I like it a little too much, or whatever. And here's what my adjustments should be.
So, you know, if for no other reason, just talking about it on the show hopefully is good for
other people too. All right. Well, let's see. Greg says, Brian, you're perfect just the way
you are. Thank you. Thank you, Greg.
I was looking to see if there was anything else before we go. No, okay. Well, sometimes there
are last-minute comments people want to bring in on the topic or whatever. All right. Well,
let's wrap it up for today. All good stuff. It was a good conversation. Loved the kickoff
there, Andy, with the, you know, biocomputers and neurons. That's really, really cool. We'll
be back tomorrow and the rest of the week with different players along the way, as always.
So be sure you keep coming back every weekday at 10 a.m.
to join us. If you want to join us live in the chat, during the show, you're more than welcome
to do that at 10 a.m. Eastern. We recommend going over to YouTube to do that. So, you can be with Greg
and Jude and all the other people that are in our chat right now. You can say hello to them.
They're very welcoming, friendly bunch. So, you can do that as well. All right. We will see you guys
in the next video.

