
Welcome back to episode 304 of the Block Runner Podcast.
As always, we have your host William talking with your co-host Iman as we discuss cryptocurrency
developments while we make this new technology relatable to you.
You can watch the full episode on the YouTube channel and stay up to date by subscribing
to our newsletter at theblockrunner.com.
Here are some of the topics they discussed today.
First up, AI agents are everywhere and how they're refactoring daily life at a societal
level.
Next, the Viral Doomsday article predicting mass unemployment by 2028 and why it struck
such a nerve.
Then, Bitcoin's exponential decay meets AI's exponential growth and what that collision
means.
And finally, the crypto influencer pivot to AI and who's leading the charge.
All right, let's listen in.
Back to another episode of the Block Runner Podcast. I'm your host William, always here with
your co-host Iman.
Yeah, what up, dude?
On the mix we got TJ.
Hello.
All right, dude.
Another podcast, another week or more agent stuff.
And we got agents oozing out of our pores, basically.
Just how I like it.
Oh, God, you're sick.
You're sick, man, dude.
Yeah, so because of that, it's, I don't know, it's refactoring a lot of our perceptions
on things.
Yeah.
Yeah, I think we're definitely hitting some sort of inflection point.
Not just us personally, but society at large. All these narratives, these fairy-tale projections of how AI is going to really change everything.
Yeah.
Like now it's hitting everybody all at once, I think, because I'm seeing a mass-scale pivot, in a sense. Even from, you know, because I typically tend to pay attention to the pulse of crypto, right?
Yeah.
I mean, that's what we do here, right?
Yeah, yeah.
So I still remember those days, yeah, they're very enjoyable.
They're fun memories.
Yeah.
So some of the bigger influencer types are starting to, like, full-on pivot, and panic even, in light of AI. I guess more specifically agentic AI. And it might have something to do with that article that painted that doomsday scenario.
Yeah.
Citrina.
Citrina.
It's not Citrina.
It's not Citrina.
Citrini?
Citrini-something.
Well, anyway, in the article, they explain how in about two years, the world is going to change.
Citrini.
There you go.
You were very close, dude.
Yeah.
So, June 2028. The whole article was framed like a prediction piece. Well, it wasn't exactly a prediction piece. It was more like, I came back in time from June 2028. Which is not that far. Like, you didn't really travel that far, dude.
But that's kind of the shock factor, I think. It's like, hold on, he only time-traveled two years?
So we invented time travel in two years.
That's pretty cool.
But nobody paid attention to that.
No, no, no.
That wasn't what spooked everybody.
It was the unemployment rate.
It was the fact that the S&P was down like 40% from its highs or whatever.
Yeah.
And the main culprit to it all was, of course, AI, right, the big bad, yeah, or potentially
what's becoming the big bad wolf, like from a public perception standpoint.
Yeah.
Yeah.
So why did this strike such a nerve? Because I'm looking at it right now, and it has like 30 million impressions or some shit. 28 million.
Yeah.
Yeah.
Just why?
Because we read it, and it's like, yeah, this isn't anything drastically new.
No.
I mean, I felt like it was pretty obvious.
I think what's shocking is probably the 2028 date. Because everyone, you know, the singularity, all that stuff, that was 2029, or somewhere off in the future. At least it was supposed to be far into the future.
Yeah.
But 2028 is like two years away, guys. That's the next halving. That's right.
And as we know, two years goes by in a flash.
It does, but the thing with Bitcoin, in addition to AI, they both operate on exponentials.
Right.
Bitcoin operates on a decay, exponential decay.
Yes.
Right.
And so AI operates on exponential growth, right, exponential development.
And so two years is actually more like 10 years.
And so yeah, I can see why, you know, two years, people will be shocked at the unemployment
rate.
Yeah.
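For reference, the "exponential decay" they're alluding to is Bitcoin's halving schedule, which can be sketched in a few lines. The subsidy halving every 210,000 blocks is the actual consensus rule; the float arithmetic here is just for readability (real nodes work in integer satoshis with a right shift).

```python
# Bitcoin's issuance decays exponentially by consensus rule: the block
# subsidy halves every 210,000 blocks (roughly every four years).

def block_subsidy(height: int) -> float:
    """Coinbase subsidy in BTC at a given block height."""
    halvings = height // 210_000
    if halvings >= 64:          # after 64 halvings the subsidy is zero
        return 0.0
    return 50.0 / (2 ** halvings)

print(block_subsidy(0))          # 50.0   (2009 genesis era)
print(block_subsidy(840_000))    # 3.125  (the 2024 halving)
print(block_subsidy(1_050_000))  # 1.5625 (the ~2028 halving they mention)
```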
So I think that's what it is.
It's the framing of this message that we've all heard a million times already.
Yeah.
It's like, you know, because I think you can visualize it a little bit more than you could
before because there's like a date set to it now.
So it gets you thinking. It's like, holy shit, am I prepared? I only have two years, basically, to position myself before there's, like, a permanent underclass. What do I need to do?
Doomer.
You're scaring me.
You're like, tell me, where's the skateboard or whatever, right?
Yeah.
You know, where's the bug out location?
Yeah.
Just install OpenClaw, dude.
That's it.
That's it.
Just talk to your bot and you'll be okay.
Yeah.
I mean, theoretically, this bot will do your work, right?
Yes.
See, now we're getting to the inflection point for ourselves personally, right?
Because we're actually already kind of like in the process of that.
We see the value of these new agentic models, and we know how to apply them to make ourselves much more efficient.
Yeah.
So what do you think?
If everyone has agents, does nobody have agents?
No.
No.
It's sort of like if everybody has the ability to spin up a company, does that mean companies
are like saturated?
Definitely not.
No, right?
Yeah.
Yeah, just because you spin up a company doesn't mean you're going to be successful.
Yeah.
Right?
So just because you have an agent or just because you feel like everyone else has agents
doesn't mean like the opportunity is over.
Right.
So then what is...
So what's the difference between like success and no success even though you have an agent?
I think it's like probably applying, I don't know, for lack of a better word, first principles
to a problem.
Oh, that word.
God damn it, dude.
It's the best.
Now you got to explain first principles again.
Go for it.
First principles is how everyone should think, first of all.
Okay.
And it is a methodology of breaking down a problem into its basic components.
So say for example, you're trying to launch a rocket into space, you need materials,
right?
You need to form these materials in a way that is consistent with physics. So it's, like, tubular. You have fuel, and the materials have to sustain the friction from air resistance and all that heat. And all that stuff can be broken down into base materials, raw materials like steel.
Right.
Right.
But if you go to Russia and you try to buy one of these rockets, they'll charge you
like $20 million for something that usually would cost like less than $2 million.
And so, long story short, that's how Elon started SpaceX.
Okay.
So yeah, unfortunately we can't just, like, download Elon's first-principles MD file and slap it onto all of ours. He'd need an agent, and we'd need access to his MD file, his SOUL.md.
I suppose. But I guess you could actually inject that into, like, the actual SOUL.md file of your bot.
Yes. Approach everything from first principles.
That's the first thing I did, dude.
Yeah.
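For context, persona files like the SOUL.md they're riffing on are just plain markdown that gets loaded into an agent's context. This fragment is entirely hypothetical, only to show the shape of the idea:

```markdown
# SOUL.md (hypothetical fragment)

## Operating principles
- Reason from first principles: break every problem down to its base
  components and costs before accepting the "market price" of an answer.
- Question every requirement; delete before you optimize.

## Voice
- Direct, technical, no filler.
```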
Well then step one.
This is my point.
Okay.
So what I'm observing is, you know, we're actually utilizing these new technologies and
these new tools pretty effectively.
So I feel like, you know, we need to broadcast that in some new format or way, right?
Other than us just talking about it, we need to like show more of this stuff.
Like it's the equivalent of, you know, we've had a YouTube channel for the last seven years.
And we deeply understand some new crypto primitive or some new, whatever, right?
Some new technology that enters the crypto space and we're doing everything we can to digest
it.
In most cases, actually applying it in the things we build.
Sure.
And therefore we're relaying to all you guys watching and listening, you know, the usefulness,
the applicability of it.
And you know, therefore we're transmitting like the value opportunity of this thing.
Like and we've done that pretty effectively, I'd say, I mean, I wish we had like an actual
way to display all of our banger hits.
So you're saying we maxed out our crypto skill? Like, there's nothing else to gain?
No.
Well, that depends on crypto as an industry itself, right?
Yeah.
It feels like we've topped, but that's not our fault.
Because there's not much to like really soak in anymore.
Yeah.
And, like, the progress has really slowed down.
Yeah.
It's all about stable coins.
Yeah.
And like how much can you say there, dude?
It's like, yeah, they're stable.
That's like, I heard, they're pretty stable out there.
You know, the dollars, you know, even though it's in flux every now and then, you know.
Yeah.
But yeah, this AI stuff, though, it's like, well, what is the format for relaying that?
Or it's like, okay, we're also consuming and digesting this whole new vector of reality.
It's not just crypto.
It's AI.
And then we're doing the same thing, where it's like forming the deep understanding, identifying the value, and applying it to the things we are building. So it's like, how do we best relay this to our audience?
I have to remember that not everyone spends a lot of time researching, like I do.
No.
And because it's boring, dude, you know? Excuse me. I'd rather be on TikTok or watching, you know, some sporting event.
Unbelievable.
But I still want the nugget of value that you've...
You still want the agent like producing TikToks for you?
Please.
Right?
Tell me how to do that.
Yeah.
Yeah.
So I actually, you know, enjoy doing research and understanding, like, I don't know why.
I guess it's like it just came from like an early thing of like once you learn something,
it's like an RPG.
You sort of gain that skill and it's like, okay, I want to improve that skill, right?
So I go deeper.
Yeah.
So I don't know.
I've tied that dopamine kind of, you know, reflexivity to learning new stuff.
But anyway, so I spend a lot of time looking into this AI stuff, but I would imagine that
a lot of people do not do that.
And so the best way to do that is to talk about what it is that we're implementing.
Yeah.
And how it actually has benefited us as a business.
And you know, what benefits do you gain when implementing an agentic workforce in your
organization?
Like what?
Yeah.
What does it look like?
It's not obvious.
Yeah.
It's not clear.
Yeah.
I think a lot of people are just soaking in headlines of, like, people claiming that AI is helping them.
Yeah.
In so many different ways.
Yeah.
You got to take that with a grain of salt, right?
Not all of them are generating $10,000 per week.
And even if one is, that's a one-off situation, right?
So this reminds me like this parallel, and I remember when social media came out, there
was this like understanding within the people using social media that if you didn't have
a social media account, you essentially didn't exist.
I remember that.
And so businesses had to level up and get social media accounts.
Yeah.
And so this is probably the same thing all over again.
Like if you're a business, you need to be running these things.
Yes.
Otherwise, your competitors are going to 10X your progress overnight, literally.
Pretty much.
Yeah.
So again, we have something to offer in this respect, because we lost a soul, not too
recently.
Actually, very recently.
From a business perspective, not like from a death perspective.
Yeah, that's what I mean.
Yeah.
We lost a real human soul.
And it was a pretty important one, right? Without this soul, it's like this podcast, this channel, the cog just fell out of the machine.
Yeah.
So it was like, shit, what do we do?
We got to generate a new cog.
Yeah, but we can't birth humans.
Yeah.
We got to spend 20 years feeding this lad.
Yeah, no way.
No way.
Or, you know, you got to have money around to pay somebody.
Don't have that either.
Yeah.
So it's like, what are we going to do?
It's like, oh, shit.
You know, these AI agents, they do a lot of things.
Yeah.
So let's find out if we can architect, like, a workflow, a process, whatever, out of these agents and their ability to spawn sub-agents and, you know, orchestrate very complex tasks. So we do that, and then synchronize it to what we do here. Our input.
This is the input.
Yeah.
What's interesting is that that sort of thinking is not too different from before we had AI. Because I remember when we first started this, it'd be like, yeah, let's make a YouTube video.
And then from that YouTube video, we're going to make an article.
And then from that article, it's a tweet thread.
And from that tweet thread, we're going to create, you know, like little videos that kind
of explain the whole thing.
And it's like, we had one input generating five different outputs.
I was like, well, we need a human on each, like, limb here.
Yeah.
And then then we can do it.
And so now it's like, you're doing the same thing, but an agent is sort of working
through the entire process.
Yeah.
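The one-input, many-outputs flow they describe can be sketched roughly like this. The `derive_*` functions are hypothetical stubs standing in for LLM calls; only the fan-out shape is the point:

```python
# Sketch of the one-input, many-outputs content pipeline.
# In a real setup each derive_* function would call an LLM API;
# here they are trivial stubs so the fan-out shape is visible.

def derive_article(transcript: str) -> str:
    return "ARTICLE DRAFT:\n" + transcript

def derive_tweet_thread(transcript: str, max_tweets: int = 5) -> list[str]:
    sentences = [s.strip() for s in transcript.split(".") if s.strip()]
    return [f"{i + 1}/ {s}" for i, s in enumerate(sentences[:max_tweets])]

def derive_short_clips(transcript: str) -> list[str]:
    return [f"CLIP PROMPT: {line}" for line in transcript.splitlines() if line]

def fan_out(transcript: str) -> dict:
    # One source artifact becomes several draft outputs,
    # all of which still go to a human review queue.
    return {
        "article": derive_article(transcript),
        "thread": derive_tweet_thread(transcript),
        "clips": derive_short_clips(transcript),
    }

drafts = fan_out("Agents spawn sub-agents. They orchestrate complex tasks.")
print(len(drafts["thread"]))  # 2
```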
But imagine if we actually did. Because one of the things we figured out through the loss of this soul, you know, RIP, is actually how inefficient we were at putting things together and producing useful outputs across all these potential channels, right? Like, basically, we were limited by the bandwidth of human mental capacity.
Yeah.
And that varies wildly, right? From human to human.
Yeah.
It's contingent on desires. And it's like, if you're sad, you're just not producing.
Yeah.
You just don't feel like working today.
Yeah.
It's like, fuck, I just don't, I'm not vibing with them.
Yeah.
So.
Totally unacceptable.
Guess what?
It's an article that doesn't get written and posted, or a short that doesn't get generated. And, you know, therefore the machine stops, or it's less active.
So we're like, damn, well, if it can do the things that were already happening, what other
things can it do?
Mm-hmm.
And you start to identify all these potential channels of value we could be tapping into.
Yeah.
It's so weird, the content gap between what you could be producing by leveraging these agents versus what you're actually producing. Once you enable these agents, you're producing 10 times as much as you were before.
Yeah.
That's the real shocker.
And then you start to think about like, hold on.
So it's like, we were paying this soul, this amount of money to do this amount of stuff.
And then we just added all this new stuff on top of that.
So it's like, how much more money is that like if we were to bring in other souls to
do this work?
And then you realize: we just saved, like, $20,000 a month, basically, in labor by applying these AI workflows and systems.
So that's, I think, a big deal.
And it's, like, the hidden aspect of AI. It's not like crypto, where the value is much more on the surface, right?
Like you can just buy a token, watch it go up.
And I get it.
I know why I want to get involved in that.
That was my onboarding experience.
I literally bought a coin in 2014.
Well, Bitcoin.
Bitcoin.
I bought Bitcoin 2014, forgot about it for three years, picked up my app, and I saw
like a 10X ROI.
And that was it.
That's all I needed to see.
It's like, okay, I'm dedicating my soul to this industry moving forward.
That's right.
Yeah.
But AI, it's not that obvious, right?
We're like, what's the 10X vertical? Like, you're telling me to get a Clawdbot and all this stuff, you're telling me to learn all these different models and stuff.
Right.
But where's my ROI?
Yeah.
Where's my 10X?
Yeah, how does my bank account change if I install a Clawdbot?
Yeah, it's not that obvious.
But I just kind of told you. It's like we just essentially generated $20,000 out of thin air.
Yeah, that's right.
Every single month.
As long as we keep paying for the agents' credits, which is significantly cheaper than $20,000 a month.
Significantly.
Yeah.
Yeah.
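The back-of-envelope math behind that claim looks like this. The credit spend is an assumed figure for illustration, not the show's actual bill:

```python
# Illustrative numbers only: labor replaced vs. assumed LLM credit spend.
monthly_labor_saved = 20_000   # USD/month of work the agents now cover
monthly_api_credits = 500      # assumed USD/month in LLM credits

net_monthly = monthly_labor_saved - monthly_api_credits
roi_multiple = monthly_labor_saved / monthly_api_credits

print(net_monthly)    # 19500
print(roi_multiple)   # 40.0
```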
So that's the little lens, I think, we can help make more clear.
That is a big deal.
But what I think is an even bigger deal is that once you implement this, your agent output
is completely dependent on the LLM that you're leveraging.
And guess what?
LLMs get better on a weekly to monthly basis.
They do.
It's crazy.
And they don't just get marginally better.
They get way better.
So do you think we're reaching some sort of, like, plateau?
No way, absolutely not.
No? Okay.
What is the dependency of improving these models rooted to?
Is it energy only?
I think there's innovation behind the weights, the individual models that they're using to
train.
Some of the LLMs are multimodal.
So maybe one day we wake up to Opus 5.0 and it can generate images for you.
Like that's a big deal, right?
And so all that is dependent on like the type of weights that they're using and the training
and all that's GPUs that they're running, all that is like turning this LLM that's eventually
going to be hosted by them and then offered through APIs.
Okay.
And don't expect it to slow down.
No way, dude.
No, no.
This is exponential.
This is like escape velocity type.
Depends.
I think Vitalik was out there, like, a day or two ago...
No, no, no, no. Don't listen to Vitalik.
Floor it.
Send it.
He literally said, like, we need to shut down these data centers or something like that.
Incorrect.
Yeah.
That's pretty.
So Elon has been saying that for a couple of years.
Yeah.
I was going to say like, I mean, they all signed a petition that's like, we all agree to
slow down.
Yeah.
But most did not.
And they're like, okay, well, let's pedal to the metal.
Yeah.
Suckers.
Yeah.
Yeah.
What are you going to do at this point?
You know, Vitalik, he wants to be on the right side of history, I guess. Or, like, have history point to him as one of the guys who was warning us.
I think even if he's right, it doesn't matter.
Well, yeah, for sure.
It doesn't matter, because like you're saying, it's a race. A legit one.
Terminator is going to be walking around for sure.
Yeah.
So again, back to the Doomer articles.
That's the next one, right?
It's like, it's 2030 now.
Yeah.
So fucking Terminators are loose. They've killed 40% of humanity.
That's crazy.
I guarantee you, Citrini, or however you say it, that's the next banger article that's going to get everyone freaking the fuck out.
Yeah.
Two years later, after the 2028 apocalypse. Like a series of these, yeah. It's like, saddest update: I'm back from 2030. Yeah, we didn't make it to Mars. It's all your fault back there in 2026.
We should have listened to Vitalik.
Yeah.
I'm telling you.
So who knows?
But yeah, it's unstoppable and it's progressing so quickly, so.
So the real question is, uh, okay. So accordingly, I guess, you're using agents.
You have a ton of output.
So you're saying your output is just AI slop, right?
That's what you're telling me.
Why should I read your stuff?
Like, is there any value?
I guess the question is, is there any value in having AI produce anything?
Yeah.
I think it depends on, again, the context that the AI is pulling from to generate the substance.
Yeah.
Yeah.
The context is us.
It's like, if you've been relying or depending on our ability to learn things and understand things, and therefore you think it's valuable content, then the AI is wrapping that up into, like, a bite of information.
Yeah.
And it's more content than not, again, because it's helping us. It's giving us, like, extendability. More arms, basically.
Yeah.
Therefore we get to actually, you know, maximally leverage all that work that goes in. Like you're saying, what you feel is the fun times, most people would dread. You know, it's like, let me spend my weekend learning about the latest model.
Yeah.
I think there's a lot of value in leveraging an agent to go and do a bunch of research on a particular topic, process that information, and give you, like, a comprehension of what it processed.
So what if that agent took that processing, that comprehension, and just developed a tweet thread for you, right? Isn't that tweet thread sort of valuable?
It is.
But again, this is part of what we have to kind of demonstrate.
It's like, because that's part of our process flow, right?
This is how we feel we can optimize things like marketing and such, right?
But we have an understanding like that's not good enough, right?
Because, you know, we have standards, we have integrity, we have these little... we have, like, annoyances.
I find this to be core. It's like, yeah, you know, AI is great and everything, but we've got to inject some criteria into it all, so that our authenticity of thought survives.
Yeah, but all you're saying is we should take the output of the AI, make sure you read it and edit it, and then you post it. Some process flow like that.
That way, somebody who's, like, running a business has an understanding. Okay, now I understand what you mean. I can deploy these agents to produce better marketing content for my company, right?
Yeah.
Because right now we're not doing that.
So give me the exact flow and the process of how to do that, right?
Yeah.
And show me like the results you've gotten and hopefully you have some sort of positive
traction to back that up.
There needs to be a detectable effort in anything that is outputted by an AI.
So for example, there was this tweet, maybe, like, a couple of weeks ago, where this guy was dogging on NAT. And the tweet was kind of long, but if you actually read it, you could tell it was AI-written. And I knew, because one of the phrases in there for NAT was "non-arbitrage token."
Yeah.
I remember this.
And no one has ever generated or built a "non-arbitrage token." That doesn't even make sense. Those aren't even words that can be combined.
That's what I mean. Like, you can't just... that is not our process, right?
Right.
So that is an example of AI slop. Now, had he read it, his own AI output, had he read it and edited it to say what he thought, his understanding of the NAT token, then it would have been an actual viable attempt at an argument.
Yeah, but he may not have spent, like, any time on it. He didn't want to. It was clear. It was obvious.
Maybe because, like, he probably did read it, and he just didn't see that to be an error, right? It's such a surface-level thing.
No, that's unacceptable.
If you're going to, let's say, criticize the NAT token...
I agree.
...and you're reading your own AI slop, and you just kind of gloss over "non-arbitrage token," then I am telling you, you didn't read it.
Yeah, well, then then he just has no idea what he's talking about.
Well, yeah, that's the point.
Yeah.
So there is a middle part to this process, where he's leveraging AI and packaging what are supposed to be his thoughts and opinions into, like, you know, human-readable format, through the social layer, so that he can build clout, influence, whatever.
What's interesting is that the comments on that tweet, they're like, yeah, you're right. You know, they were all kind of dog-piling on top of the sentiment of it.
And I don't know, those are just his, you know, we call them his yes-men. Because he already had, like, prebuilt influence from being a Bitcoin miner, a Bitcoin talker or whatever.
Yeah.
So those people are naturally just going to agree with whatever, because they're not doing any analysis either.
And that's kind of the thing. There's a responsibility there for public figures who are, you know, supposed to be doing the due diligence, doing the influencing.
Yeah, if you claim to have some sort of position of influence, then it has to be backed by something.
You know, if it's backed by bullshit... plenty of people get away with that.
Yeah, they do.
Right.
But we feel like whatever influence we have, or will have in the future, is backed by authenticity.
Yeah.
And like, you know, these are our true opinions and thoughts on things. That's why you rely on us, because we feel like that's hard to find nowadays.
There's not a lot of genuine people out there, especially in the influencer space. Genuine people who actually think, right? Right, right, yeah, they're not getting paid to say something. Because, you know, that's how the influencer economy churns, right?
Yeah.
So there's that.
And it's like, we want to make sure that's the context we give our AI systems. To get us close enough, yeah, to get us close enough to, like, the desired output. But yeah, every time, you do have to edit everything.
Yeah.
So my point in this whole conversation flow is an understanding of, like, principles, I guess, when building out your agentic workflows and systems and models and such. So that when a new company applies this stuff, or a new organization, whatever, they understand these principles need to be maintained. Because if they don't, you end up with the AI slop, like that guy put out, yeah.
And then once people find out about it, they realize how inauthentic you are.
That's going to hurt your brand.
Yeah.
It's going to be a bad PR look.
That's why I dunked on them.
I quoted it and I explained that he didn't even read his own AI slop.
And then I broke down what I interpreted as his criticism against NAT, and they just don't understand the math.
It's literally that simple.
Yeah.
Yeah.
I mean, exponentials are hard. Don't get me wrong. It's really hard to understand what an exponential means. But just do the math. Just go back to, like, algebra.
And yeah, I mean, so yeah, that's just one example that I can remember that's like the
most obvious, but we see slop all the time.
But AI output is not automatically slop is my point.
And so, so yeah, ultimately you're going to have to spend some time on it.
So think of it this way. AI allows you to produce at 10x, right? And before AI, you were producing at 1x, because you're just human.
And so in that 1x environment, you are producing tweets, you know, doing YouTube videos, doing
all this stuff, creating thumbnails.
But you're doing all that yourself, like manual human labor because there's no automation
anywhere.
So now you transition to this 10x environment and you're still a human, you still have
to review everything, but you're reviewing 10x the output.
Yeah.
So instead of doing the manual labor of doing 1x, you're just reviewing everything that the
AI has generated.
And if you review it and edit it and post it, now you're posting ten times what you were just last week.
Right.
And so I think that is the new way to work.
100%.
Yeah.
Yeah.
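The 1x-to-10x shift they describe is essentially a review gate: the human stops drafting and instead approves or rejects everything the agent generated. A minimal sketch, with `agent_drafts` as a hypothetical stand-in for LLM calls:

```python
# Sketch of the review-everything workflow: the agent drafts, the
# human edits and approves. The review function here stands in for
# a real human pass over each draft.

def agent_drafts(topics):
    # Hypothetical stub; a real version would call an LLM per topic.
    return [f"DRAFT about {t}" for t in topics]

def human_review(draft):
    # Edit-and-approve gate: nothing ships without a human touch.
    if "non-arbitrage" in draft:    # the kind of slop a reader catches
        return None                  # rejected
    return draft + " (edited)"       # approved with edits

topics = [f"topic-{i}" for i in range(10)]
published = [r for d in agent_drafts(topics) if (r := human_review(d))]
print(len(published))  # 10: ten reviewed posts instead of one hand-written
```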
It transitions people's energy from, I don't know, like, the lower rungs of functionality into the higher rungs. Meaning people should be thinking more, across the board. Leveraging their cognitive bandwidth for things like strategy, you know, depth of identity and messaging, the things that are core to your brand and your personality. And this is the stuff that's not, you know, replicable through AI.
Yeah.
Yeah.
That's right.
Yeah.
And part of the grunt work is, like, you know, it's hard. You can spend a lot of time ideating, right? Like, if your role is to output tweets, for example, you could spend all day just ideating. And this is part of our personal struggle: we know we need to be tweeting a lot, because that's part of humanity's game at this point.
It's like being socially present and outputting, you know, genuinely useful thoughts into
the ether.
Right?
Why?
Because our economy is like rooted to that or if not now, it will be.
Yeah.
More so over time.
Yeah.
But our struggle is like, I don't know, we're just not built that way.
No, I'd rather not talk.
I'd rather not say anything.
Same.
I prefer just to sit quietly and, like, learn. And, well, I talk a little bit, you know.
What sort of podcast is this?
Yeah. We're doing it the wrong way.
That's true, dude.
But maybe talking's not the issue. I'd rather not have attention, for real, you know?
Yeah.
I was just thinking about, how do you do a podcast where no one talks?
It's not, it's more like a, like a TikTok dance, right?
You're just like, oh no, I couldn't do that either.
Well, what value do you have as a person if you're not willing to dance or talk, dude?
Like for real.
That's true.
Like, that's a good point.
What are you doing then?
Yeah.
Yeah, that's true.
Like, if I'm a person who likes to do the research and likes to understand and apply... if I just apply, then I'm creating this stuff. But how do you get that creation out there?
You got to say something, right?
You have to.
Yeah.
In other words, how do you convert your absorption of knowledge into like something of economic
value?
Yeah.
Right.
If you just harbor all that learning to yourself, you've, you've produced zero economic
value.
Nobody's using your app.
No, no feedback.
Nothing.
Yeah.
You just feel a little nice inside.
Yeah.
I feel well-knowledged.
Yeah.
You know, you just walk around on earth.
It's like, I bet you I know more than that guy.
It's like frame-mogging just in your mind.
Pretty much.
I think there's a lot of people that walk around like that. They're basically, like, neckbeards, you know?
They walk around.
Oh, shit.
I'm one of them.
They walk around thinking, like, dude, I just know so much, you know?
It's like, I'm awesome, right?
But nobody agrees with you.
Yeah.
Yeah.
You got to put it out there, right?
And there's ways to do that, but it's not natural to our, I don't know, our personality
types.
Yeah.
For others, it's super natural. And, like, most of the stuff they're putting out there is baseless or, yeah, not well-informed.
Yeah.
It's just like, just vomit.
Yeah.
Right.
Yeah.
And they have like 100,000 subscribers.
Yeah.
It works.
Yeah.
That's weird.
So that's one of our struggles, right? Let's create some new agentic workflows or processes, where we can get agents to spend the bulk of the time in the ideation phase. So that it kind of gets us past this wall of inertia, right?
Yeah.
We call it the activation energy.
Mm-hmm.
And it's been very useful, right?
Yeah.
So far.
Mostly for you.
I haven't applied it, like, at all yet, because I'm still holding on to, like, the last little bit of hope. A little bit of a possibility that I could do it all on my own.
That's hilarious, dude.
I'm still trying.
I'm still trying.
That's hilarious.
Even though I see the value in this like agentic stuff, I'm still fighting for humans.
You know, at the same time, you're losing.
I am losing.
Like big time.
I know that.
I see your Twitter account.
It was like, dude, you tweeted like eight times today.
I haven't even tweeted once yet.
But I know I got thoughts in there, right?
I could tweet something, but I just don't.
What the fuck's wrong with me?
Yeah.
Activation energy.
That's what it is.
So.
Yeah.
I think that's been AI's killer feature that nobody talks about: the activation energy.
But it's not enough to, like, set up a Clawdbot and all of a sudden you're tweeting every day.
No, no, no.
There's got to be some effort put into it.
It's like, you got to put in your own persona, your own like ideologies and there's a lot
of massaging.
And then on top of that, you have to research, you have to scrape Twitter for specific topics and then distill that information into your own voice.
So there's some effort.
Well, this is what I mean, dude.
You just broke down like a whole series of videos, a whole video, yeah.
Maybe multiple videos.
Like, yeah, how do you use AI to build your voice?
How do you use AI to scrape the internet to find out like what's socially relevant?
How do you use AI to then package all these things into like a regular cadence of
output?
Yeah.
It's like, how do I use AI to like ping me so that I'm like not out of the loop completely?
Yeah.
There's a lot to consider here.
Yeah.
On the top of that, it's like, okay, now your AI is producing your tweets, but where's
it putting it?
Is it just like texting you on Telegram?
Like that's not very efficient.
How do you get eight tweets in the morning?
All on Telegram.
Yeah.
Like, no, you got to put it in a place where you can access them and review them and
provide feedback to the AI.
Like there's a process to this thing.
Yeah.
But I guess, to me, it feels very obvious because that's like first principles thinking.
Like, that is how it works, but most people probably don't think that way and they don't
know to like set up an infrastructure that allows your bot to produce these tweets and
review them and feedback and all that loop.
Correct.
Yeah.
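The produce-review-feedback loop described here could be sketched as a tiny local queue. This is a hypothetical illustration, not the hosts' actual setup: the file name, fields, and workflow are all invented.

```python
import json
from datetime import date
from pathlib import Path

QUEUE = Path("tweet_queue.json")  # hypothetical local store the agent and human share

def save_drafts(drafts):
    """Agent drops its daily batch of tweet drafts here for human review."""
    entry = {"date": date.today().isoformat(),
             "drafts": [{"text": t, "status": "pending", "feedback": ""} for t in drafts]}
    QUEUE.write_text(json.dumps(entry, indent=2))

def review(index, approve, feedback=""):
    """Human approves or rejects a draft and leaves a note the agent can learn from."""
    entry = json.loads(QUEUE.read_text())
    d = entry["drafts"][index]
    d["status"] = "approved" if approve else "rejected"
    d["feedback"] = feedback
    QUEUE.write_text(json.dumps(entry, indent=2))
    return d

save_drafts(["gm. agents are eating the timeline.", "thread: how I automate my feed"])
print(review(0, approve=False, feedback="too generic, add a concrete example"))
```

The point is just that the drafts land somewhere reviewable instead of scrolling by in a chat, and the feedback persists so the agent can read it back on the next run.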
So we got plenty of videos to make.
Yeah.
So that's what's in my head, right?
Like this feels like the new opportunity that all we got to do is like reach out and
grab it.
Yeah.
Pull it out.
Just like crypto, right?
But that reaching out and grabbing it is, like, so misunderstood.
So we have a vector of opportunity here to be like, you know, some content producers that
help people on board.
Yeah.
So are we forgetting crypto?
No.
No.
Not at all what I'm saying.
I mean, we literally just made a video right now talking about crypto because, you know, one of the more interesting things that I've personally done is using an agent to facilitate an on-chain transaction on my behalf.
Like that's really cool.
Yeah.
Like just just the thought of that, you start to envision all the future things that will
follow, right?
Yeah, your experience right now was a very 1x experience.
You directed your agent to go and mint something, but imagine if you created a loop where your agent is going to mint whatever had a lot of traction out there on Twitter.
And now it's like, while you're sleeping, it's minting good stuff.
Yeah, exactly.
Exactly.
It got me thinking, it's like, okay, because I gave my agent the right skill, I informed it of the right, I guess, process to prepare for the launch of this thing.
Well, the why, like why I mint this thing versus the next thing, that's the most important thing.
Yeah.
And then that's going to require, again, a much deeper relationship and understanding. I have to give it my context of what it is that generates my gut feeling of an opportunity in these markets, right?
Yeah.
Because it's not, I haven't even like really figured that out myself.
Right?
Because you asked me that one day, it's like, how do you scrape Twitter like by doom scrolling?
Yeah.
How do you make an assessment, when you're doom scrolling, that this is the thing we should probably pay attention to, and therefore make a video about? Whatever that is, you need to put it into a prompt for your agent.
Yeah.
And really?
No, dude.
Okay.
That was my answer.
So here's how it is.
This is like, this is something that's not obvious either.
It's that process where you go doom scroll on Twitter and find the thing that's the alpha.
And that process isn't a single prompt.
It is a vibe, right?
So daily, you're like, okay, your agent produces a daily briefing and here's like, here's
the alpha that I think you should pay attention to.
And 99% of that stuff is just pure trash.
So then you go to the agent and say, 99% of this stuff is pure trash.
Right, right.
What you need to do is actually look for this other thing and this other stuff.
For the next day, it's like 90% trash.
So the feedback loop, in other words, yeah.
And it's going to take a few weeks before it's like pure alpha.
It's all figured out.
Yeah.
Yeah.
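That daily-briefing feedback loop could look roughly like this. A minimal sketch, assuming the agent surfaces candidate topics and the human flags trash; the fields and topic names are made up, not a real Twitter API.

```python
# Minimal sketch of the briefing feedback loop: each day the human marks
# topics as trash, and the next briefing excludes them.
from dataclasses import dataclass, field

@dataclass
class Curator:
    blocked_topics: set = field(default_factory=set)  # learned from "this is trash" feedback

    def brief(self, items):
        """Return candidate alpha, skipping anything the human already rejected."""
        return [i for i in items if i["topic"] not in self.blocked_topics]

    def feedback(self, topic):
        """Human marks a topic as trash; tomorrow's briefing excludes it."""
        self.blocked_topics.add(topic)

c = Curator()
day1 = [{"topic": "derivative-memecoin"}, {"topic": "new-onchain-primitive"}]
print(c.brief(day1))             # day one: both items surface
c.feedback("derivative-memecoin")
print(c.brief(day1))             # after feedback: only the novel one survives
```

In practice the filtering would be done by the model itself reading accumulated feedback notes, but the shape of the loop is the same: briefing, human verdict, tighter briefing.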
I could see that, where it's like, yeah, give me things based on, I don't know, some surface level metrics of engagement that are piquing people's attention.
Yeah.
Yeah.
Then there it is.
There's my list.
It's like, okay, I'm going to individually go through them, analyze them myself and then
leave a feedback note for the AI.
It's like, this one I would probably ignore because, you know, I see some social engagement metrics here that are obviously botted.
Like botted, yeah.
Yeah.
There's some red flags there that if I were to doom scroll and see that, I would immediately swipe.
Swiping.
Yeah.
Okay.
Then I'll stop looking for those.
Yeah.
Right.
And then there's, there's other things where it's like, well, this project is really not
like introducing anything new.
It's kind of like a derivative of something that's already kind of popped off in the past.
And therefore, it's like less likely to actually garner any real interest, right?
Because the market, our understanding is they love new things.
Yeah.
There's just new cultural vectors of what can be done on chain, or just new narratives, whatever.
Yeah.
So if it doesn't check that, then just stop, stop it, full stop.
Stop giving me that, that idea.
Yeah.
So you're right.
And again, this is part of the process flow that that should work.
Yeah.
That's the thing that is also not obvious is that you need to spend a lot of time with
your bot.
Your whole day is sort of like you're texting somebody all day.
And it's a little weird at first, but then it's like every, every day you, you talk to
the bot, there's like a little bit more progress made.
And it pings that little dopamine part of your brain, where it's sort of like an RPG.
Like when you're playing World of Warcraft and you achieve like the next level, all of
a sudden, your attack power is plus 1%, like you have that dopamine hit and you want to
keep going.
And it's the same thing when chatting with this bot.
Yeah.
It's just not like obviously visualized, but you know what you could do is you could have
the bot generate an RPG gamification of your skill and progress.
Yeah.
What would that look like?
It would be like a little character with like armor and like a sword.
Yeah.
But what are the like milestones to, I don't know, your armor going from like, well, it's
like last week, you're wearing leather.
Now you're wearing chainmail, you know?
Well, last week you were, you generated 50 tweets and all those 50 tweets you had, you
know, 1000 likes.
Yeah.
True.
This week you had 3000 likes.
Fucking heavy metal.
Yeah.
Yeah.
You just, you just gained a level.
You achieved knighthood.
Right.
That's pretty cool.
Yeah.
Yeah.
Then that's like a reinforcement loop for yourself.
Right.
Yeah.
You didn't have to wait around for OpenAI to give you that experience.
That's, yeah.
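The armor-tier idea floated above could be sketched like this. The thresholds are made up for illustration; the real bot would pull weekly like counts from wherever the tweets are tracked.

```python
# Toy RPG-style progress tracker: map this week's total likes to an armor
# tier, like leveling up in an RPG. Tier thresholds are invented.
TIERS = [(0, "leather"), (1000, "chainmail"), (3000, "heavy metal"), (10000, "knighthood")]

def armor_for(weekly_likes):
    """Return the highest tier whose threshold the week's likes have reached."""
    current = TIERS[0][1]
    for threshold, name in TIERS:
        if weekly_likes >= threshold:
            current = name
    return current

print(armor_for(1000))   # chainmail
print(armor_for(3000))   # heavy metal
print(armor_for(500))    # leather
```

The bot could render the result as a little character sheet in its daily briefing, which is the dopamine loop being described.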
This is what I mean about how cool this thing is is like anything that you need, that
you feel like you're missing, you could just build it.
Yeah.
And one of the things that I built was a, like a health tracker app.
I take pictures of my food.
It's tracking my intake in terms of calories and my calorie expenditure at the gym and
it's putting it all together and it's tracking my macros every week.
And there's like a bar chart and everything.
Yeah.
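The health tracker being described boils down to a running ledger of intake minus expenditure. A bare-bones sketch; the meals and numbers are illustrative, and in the real app the agent would estimate macros from a food photo.

```python
# Bare-bones macro tracker: add meals, subtract gym expenditure, read back
# the day's net calories and macros.
from collections import defaultdict

log = defaultdict(lambda: {"kcal": 0, "protein": 0, "carbs": 0, "fat": 0})

def eat(day, kcal, protein, carbs, fat):
    """Record a meal (the agent would estimate these numbers from a picture)."""
    entry = log[day]
    entry["kcal"] += kcal
    entry["protein"] += protein
    entry["carbs"] += carbs
    entry["fat"] += fat

def burn(day, kcal):
    """Subtract gym expenditure to get net calories for the day."""
    log[day]["kcal"] -= kcal

eat("mon", kcal=650, protein=40, carbs=70, fat=20)
eat("mon", kcal=500, protein=35, carbs=45, fat=18)
burn("mon", 400)
print(log["mon"])  # {'kcal': 750, 'protein': 75, 'carbs': 115, 'fat': 38}
```

Aggregating the daily entries into weekly totals is what feeds the bar chart mentioned above.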
And like, there's apps out there that could do that, but there's no way there's not
an app like that.
No, there kind of is, but there kind of isn't.
And but this is like a holistic thing.
It's like my assistant is not only like producing tweets, but it's also like tracking my health
and all that stuff.
Like, it's all there.
Hmm.
Interesting.
And it's custom built, which is really cool.
And I think the next wave of user interfaces and apps are completely custom-built applications.
Yeah.
Because that's, that's what these computers are supposed to do.
Like you and I are running Twitter right now.
They look like the same thing, but sometimes I don't want to see all this like trash on
the right, on the left.
I should be able to customize my view on Twitter.
Interesting.
And you should be able to do that.
I mean, this, this interface is just a bunch of API calls.
Mm-hmm.
Like, there's nothing unique or special about this.
Right.
This is Elon's interpretation of what Twitter should look like.
And maybe this is not the best implementation of Twitter.
Probably not, but if you were to implement your own version of it, you'd obviously lose track of a lot of the features of Twitter that are here on your peripherals, essentially.
So like, I don't know.
You may forget that you... yeah, but that's your agent's responsibility.
Yeah.
It's like, hey, I'm missing some information from my Twitter feed.
It's like, oh, you're talking about the engagement, the social engagement stuff.
Like, yeah, just, okay, I'll add it in there.
Yeah.
I guess.
Or if you just, for whatever reason, hate Twitter Spaces, like you hate even the sight of them.
Yeah.
You could just eliminate the whole right side of your screen, basically.
And there's people who have built interfaces for Twitter, right?
You just download the app and it gives you like a different interface.
And so it's, it's like that, except you are the developer, you could build that yourself
if you want it.
Yeah.
Yeah.
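The custom-interface idea above amounts to deciding which modules of the feed get rendered at all. A hypothetical sketch: the module names and fields are invented, not the real Twitter/X API.

```python
# Hypothetical personal feed renderer: the feed is just data, so your own
# client can drop whole modules (Spaces, trending sidebar) before rendering.
FEED = [
    {"module": "timeline", "text": "agent update #3"},
    {"module": "spaces",   "text": "Live: crypto Spaces happening now"},
    {"module": "trending", "text": "#SomeTrendingTopic"},
]

HIDDEN = {"spaces", "trending"}  # your personal layout choices

def render(feed):
    """Show only the modules you actually want on screen."""
    return [item["text"] for item in feed if item["module"] not in HIDDEN]

print(render(FEED))  # ['agent update #3']
```

Elon's layout and yours would both just be different `render` functions over the same API responses, which is the point being made here.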
So all this is super cool.
But yeah, so if we're going to go down some, you know, I don't know, pathway where we're putting our best foot out there and demonstrating how this technology can be applied, it still feels like we're missing a hook, right?
And the hook that came to me was, you know, we're already on a mission to build a competing product to pump dot fun, right?
You guys all know that.
It's called natto fun.
It's in the pre-production phase, and we'll launch hopefully within the next few weeks.
But that really is the goal.
So we have a target in mind, which is like, pump dot fun is a viable business in the crypto industry.
And it's a, it's a billion dollar business.
Yep.
Already, kind of like the golden goose of a narrative in the AI agentic space is, at some point in time, there's going to be some solopreneur out there that achieves a billion dollar business, right?
Yeah.
Now, obviously, we're not alone.
Yeah.
But we're like, you know, a basket of 'preneurs, a basket of a few of us. Normally, to achieve a billion dollar company, you usually need like hundreds of people.
Yeah.
But like maybe, like we can actually throw our name in the hat of like people who are
attempting to build a billion dollar agentic startup.
Yeah.
Yeah.
Well, it's no different than, I think, Tether, which has like maybe 30 to 40 people, and they're generating multiple billions per quarter.
Yeah.
But it's also Tether.
It's like does nothing.
Yeah.
It does nothing.
But the point is that you actually didn't need a lot of people to do nothing, right, to be Tether.
Yeah.
For sure.
But, you know, because that's why people are already kind of pointing at, wasn't the name Steinberger?
Peter Steinberger.
Peter Steinberger.
He's already the guy.
He already...
Yeah.
He would qualify.
I mean, it's not the most perfect ideal, like, solopreneur billionaire, but I mean, he did
something pretty big.
So for sure.
Yeah.
I'm not knocking what he did.
I'm just saying, I don't know if he qualifies to be like the winner of that accolade.
Just because like what he did isn't like some systemic replicable model.
No.
From a business perspective, no, he didn't do that.
But he did vibe code OpenClaw.
And so that basically is valued at a billion dollars.
Yeah.
He basically got acquihired, and they probably spent near a billion dollars to get him to work at OpenAI.
Yeah.
It's just okay.
You...
Literally, he's inflection point two in the timeline of AI so far.
It's like, first ChatGPT.
Yeah.
Then OpenClaw.
So it's like, we got to wait another five years.
For the next one, it's like, we can't.
That can't be it.
There's got to be something.
Definitely.
There has to be something that, I don't know, gives everybody else hope, whoever's trying to build something of value, like I can apply some rigid structure of something that exists out there.
Yeah.
Hopefully we're the ones building that.
Yeah.
For sure it's going to happen just because everyone has access to these tools and these
tools allow you to build pretty much anything.
So your creativity is really the limitation.
But your ability to understand how things work is also a limitation, because if you're building an application with no security and you go and deploy it, you're not going to be a billion dollar business.
Right.
It's not going to happen.
Right.
Even if it's a really good idea, but there's poor security implementation, you're done.
So you do have to spend some time understanding architecture of software.
Yeah.
100%.
And that's where if you're a solopreneur, you may be gimped in that respect, right?
Because you just don't have that kind of experience right out of the gate.
And then again, the AI is getting better.
So you could just ask the AI. Claw just released, like, a security plugin.
Yeah.
It's like, before we deploy this app, let's just run this plugin to make sure it's secure.
Right.
And like, you get to at least do that, but you need to know that that exists because if
you don't, it's as good as not existing at all.
Yeah.
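The "run the checks before you ship" idea amounts to a pre-deploy gate. An illustrative sketch only: these two checks are stand-ins for whatever security tooling (an audit plugin, a scanner) you would actually run, and the config shape is invented.

```python
# Illustrative pre-deploy security gate: refuse to ship if any check fails,
# and report which ones did.
def has_auth(config):
    """Stand-in check: the app must have some auth configured."""
    return config.get("auth") is not None

def secrets_not_hardcoded(config):
    """Stand-in check: no API keys sitting in the source text."""
    return "api_key" not in str(config.get("source", "")).lower()

CHECKS = [has_auth, secrets_not_hardcoded]

def ready_to_deploy(config):
    """Run every check; return (ok, list of failed check names)."""
    failures = [check.__name__ for check in CHECKS if not check(config)]
    return (len(failures) == 0, failures)

ok, why = ready_to_deploy({"auth": None, "source": "API_KEY='abc123'"})
print(ok, why)  # False ['has_auth', 'secrets_not_hardcoded']
```

The value is the habit, not these particular checks: the gate exists, it runs on every deploy, and knowing such tooling exists is what separates a shippable app from one that's done the moment it's attacked.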
So maybe that's what needs to be put together here.
It's like, literally, some out-of-the-box, Microsoft Office type of setup for how do I apply all these core tenets of building, you know, a company, a product, whatever, that keeps in mind all these very core concerns or checkpoints for the average consumer, right?
Who may not know that these things need to be considered.
So it's like, you know, pull from this, you know, digital box that has this program already
set.
Yeah.
And that just gets automatically injected into your agent.
Yeah.
Yeah.
Yeah.
It's giving it context awareness of how we should not just build, but scale and maintain, these very critical points, right?
Like that's very valuable.
And I think that's something we can, we can offer over time through our own personal
experience.
And I think we'll be archived here.
Yep.
You know, so you can go back and see, like, oh shit, that's when they figured it out. Maybe it's like, when we launch the platform, we discover a vulnerability bug.
Yeah.
All right, let's, we're going to, we're going to have to handle this in real time.
Yep.
We can't go out there and get like some security auditor.
No.
We're going to spin up...
Yeah, Claw Code.
Just claw our way out of this pickle, dude.
And then once it's done, it's like, all right, well, that's been cemented into this
process.
Yeah.
Yeah.
So let us know if you guys want to know more about how we're setting up our agentic kind
of enterprise and, uh, and all the details that go along with like setting it up and then
actually leveraging it to the point where it's producing 10 times the content in a quality
way, right?
You don't want to produce the slop, which we were talking about earlier.
So, uh, so yeah, give us your thoughts in the comment section below, and, uh, specifically, what do you want to know about this AI space that you want to implement, that you haven't really gotten around to implementing, and, uh, where you kind of need help?
Because I think that's going to be the most important part because you have to start,
right?
If you don't do this now, you're going to be forced to do it later while everyone's already
done doing it.
Yeah.
Right.
So it's going to happen.
This is not one of those things where you can just like forget about.
So, uh, so yeah, that's it for us.
So again, let us know in the comment section below like what your thoughts are on this
whole AI thing.
And, um, and yeah, I appreciate you guys for watching and we'll catch you in the next
podcast.
Peace.
Thank you for listening to the Block Runner podcast.
Make sure you visit our website, theblockrunner.com, and sign up to stay up to date on the latest in crypto.
Also reach out to us on Twitter at TheBlockRunner.

The Block Runner Crypto Podcast
