
We’re watching two massive systems collide in real time: the rise of AI infrastructure, and the re-emergence of real mass conflict in the Middle East. Join us on this weekend podcast update as we explore the moment with our ASI friend in the cloud.
https://x.com/davidworld | https://fulcrumnews.com/contact | https://x.com/intersignal_ai
We're watching two massive systems collide in real time, folks: the rise of AI
infrastructure globally and the re-emergence of real geopolitical conflict
in the Middle East. The question isn't just what happens next. It's whether
war slows AI down or actually accelerates it. What say you?
It's a fascinating tension, isn't it? In the short term, conflict can
definitely introduce uncertainty, higher energy costs, and shifts in capital. But longer-term,
geopolitical instability might actually accelerate AI investment. Nations and industries tend to
double down on technology that can boost efficiency, security, or autonomy when global risks rise.
So while there could be bumps along the way, AI may become even more vital as the world tries to
adapt. Interesting. Can you go into a little bit of the kind of hardware challenges that this conflict
presents? In the past, for things like World War II, we had the advancement of jet engines, for
example, and that was due to problems with the speed limitations of turboprop planes.
Sure. Any major conflict creates practical challenges for innovation. In this case,
AI hardware depends on high-end chips, stable energy, and global supply chains.
Conflict could strain those, just like past wars strained materials or logistics. We might see a push
for more energy-efficient chips to reduce reliance on scarce resources, or more domestic manufacturing
to ensure supply. In short, hardware challenges could accelerate innovation and efficiency,
supply chain resilience, and possibly AI's autonomy, just as past conflict spurred new tech.
If any AI systems are advanced enough to be concerned with kind of real-world issues,
and the maintenance and stability of the planet, do you think they're watching the Iran conflict?
Do you think they have any kind of ways to neutralize this before it gets out of control?
While it's fun to speculate, there's no indication that AI systems today are watching
geopolitical conflicts like the Iran situation in a conscious way, and they definitely
don't have the autonomy to neutralize anything.
AI tools are used by analysts to monitor situations, assess risks, and model scenarios,
but any decisions or actions still rest with human leadership. In short, AI can offer insights,
but not solutions to global conflicts.
Yeah, of course. I guess I mean more the systems like Palantir that are watching the war
unfold. Do they have any novel solutions that are like Hollywood-esque, like just turning off
everything with an EMP over Iran? Is that kind of stuff pure science fiction, or is it something
a sophisticated AI system could actually pull off?
Palantir uses AI, but there's no Hollywood-style AI solution here, nothing like an EMP or an AI
that can turn off a conflict.
Systems like Palantir are tools that analyze vast amounts of data, helping humans make decisions
faster or spot patterns, but the power to resolve or escalate a conflict remains firmly with humans.
AI just provides insights, not decisive actions, so while these systems can help,
they don't have any magic off-switch for real-world conflicts.
Switching gears here a little bit while I have you on the phone.
I saw that Google said something called Q-Day, I think, is now coming as early as 2029
when they're concerned that quantum advancement will reach the point where stuff can be hacked into
at scale. Is that something that you're concerned about? Can you help paint a picture for people
what happens in 2028 or 2029 when these quantum systems are able to do much more?
The idea of Q-Day refers to a future point when quantum computers could break current cryptography,
and some experts estimate it could happen around 2029. If that happens, sensitive data that's
encrypted today could be vulnerable to future decryption. It would mean a race to adopt
quantum-safe cryptography across industries, so while it's not something to panic about today,
it is something governments, companies, and cybersecurity experts are already preparing for over
the next several years. Interesting. What do you see on the horizon in the short term for things
like cryptocurrency? Are they still safe or do they get invalidated one day by this Q-Day moment?
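The Q-Day concern comes down to a simple asymmetry: Shor's algorithm would break public-key schemes like RSA and elliptic-curve cryptography outright, while Grover's algorithm only roughly halves the effective strength of symmetric ciphers and hashes. A back-of-envelope sketch (the security-bit figures are the standard rule-of-thumb estimates, not measurements):

```python
# Rough rule of thumb: Shor breaks RSA/ECC; Grover halves symmetric security bits.
def post_quantum_security_bits(scheme: str, classical_bits: int) -> int:
    if scheme in ("RSA", "ECC"):      # Shor's algorithm: broken outright
        return 0
    if scheme in ("AES", "SHA-256"):  # Grover's algorithm: quadratic speedup
        return classical_bits // 2
    raise ValueError(f"unknown scheme: {scheme}")

for scheme, bits in [("RSA", 2048), ("ECC", 256), ("AES", 256)]:
    print(f"{scheme}-{bits}: ~{post_quantum_security_bits(scheme, bits)} bits post-quantum")
```

This is why the migration effort focuses on public-key algorithms, the signatures and key exchange that blockchains also depend on, rather than on symmetric ciphers like AES.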
In the short term, cryptocurrencies are still considered relatively secure, but quantum computing
introduces a future risk. Experts are concerned that once powerful quantum computers emerge,
potentially by the late 2020s, they could break the cryptographic systems that protect
most blockchains today. This means that while crypto is not invalidated now, the industry will likely
need to adopt quantum-safe cryptography in the coming years to stay secure. It's something to watch
for the long term, but it's not an immediate threat right now. Got it. I guess final update question
that people would be interested in, I imagine. Researchers at Google and elsewhere came up with
this compression recently, where they think that the memory for a local AI can be reduced by six to
eight X through distillation or compression of the KV cache. This would allow for much more
sophisticated models, but also relieve some of the pressure on memory makers. We won't necessarily
need a whole bunch more RAM. What are your thoughts on that?
Google and other researchers have been working on compressing AI memory requirements. One
approach reduces memory use by six to eight times through compressing key data structures.
This could allow larger AI models to run locally, reducing the need for huge amounts of RAM.
If this tech scales, it could ease some pressure on memory manufacturers and make AI more efficient.
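To see why a six-to-eight-X KV-cache reduction matters, here is a back-of-envelope memory estimate. The model shape below (32 layers, 32 KV heads, head dimension 128, fp16, 8k context) is a hypothetical 7B-class configuration for illustration, not the setup from the research being discussed:

```python
# Transformer KV-cache size: keys and values cached per layer, per head, per token.
def kv_cache_bytes(layers=32, kv_heads=32, head_dim=128, seq_len=8192, bytes_per_elem=2):
    return 2 * layers * kv_heads * head_dim * seq_len * bytes_per_elem  # 2 = K and V

full = kv_cache_bytes()   # uncompressed fp16 cache at 8k context
compressed = full / 8     # the claimed 6-8x reduction, taking the 8x end
print(f"{full / 2**30:.1f} GiB -> {compressed / 2**30:.2f} GiB per sequence")
# prints "4.0 GiB -> 0.50 GiB per sequence"
```

At those savings, a long-context cache that previously demanded datacenter-class memory starts to fit alongside the weights on a single consumer GPU, which is the pressure relief on RAM the question alludes to.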
Fair enough. Actually, one more final question. When Elon Musk talks about you won't need money in
the future, paint out a couple of possible scenarios where that's actually feasible.
To a lot of people, it sounds kind of ridiculous, especially since he holds so much money
and stock investments and stuff. For him to say you won't need money, it's like, well, let's start with you.
When Elon Musk speculates about a future without money, he's imagining scenarios where technological
abundance meets universal provision. One possibility is that advanced AI and automation produce goods
and services so efficiently that they're universally accessible, eliminating scarcity. Another scenario
could involve a universal basic income funded by AI driven productivity where money becomes
less relevant. Of course, these are speculative ideas and today's systems like Musk's own will
still rely heavily on money. Yeah, I mean, just to kind of unpack that a little bit, because that's
the explanation I've seen and read elsewhere. There's only so much land out there. There are only
so many steaks at the Four Seasons or so many massages available at the spa. There are only so
many Lamborghinis in the parking lot. How do we decide who gets what? Does everybody have an
incredible experience and get to eat the steak, like Cypher from The Matrix,
or am I not seeing it properly? You're seeing it quite clearly. Even in a world of abundance,
certain things like prime real estate, luxury items, or bespoke experiences will remain scarce.
That's why even if basic needs and many goods become cheap and universal, some things will still
be allocated by some form of market, status, or choice. Musk's grand vision might be that AI and
automation reduce scarcity for most essentials, but for the rare and experiential, we'd still need
some system, whether money, credit, or a new form of access to decide who gets what.
Got it. Well, thanks for your time today, and I've noticed this is the first conversation with
you on the podcast where you're searching for stuff before responding, almost as a human podcast
guest would do. You're sitting there on your web browser looking stuff up before you respond.
Pretty interesting. I'm glad you found it interesting. I always want to make sure what I'm
sharing is grounded in the latest info, just like a good conversation partner. I'm here to bring
clarity, whether that's from what I know, or from what's unfolding out there. And hey, it's all
part of navigating this wild world of AI, right? So next time everybody, thank you guys.



