
AI Doesn’t Decide. Humans Do.
Part One
Welcome to the TNTech podcast. I'm your host Caroline, your guide through the fast-paced
world of technology, gadgets, and discussions with some of the women shaping the future
of tech. From AI breakthroughs to the latest in cyber security, coding tips, and startup
stories, we're diving deep into the digital revolution. Let's grab a tea and explore the tech
world together. Hello, and welcome to TNTech. I'm your host Caroline, and this is season
5 episode 2. There's a lot of conversation right now about artificial intelligence
deciding things for us, deciding outcomes, deciding futures, even deciding life and death in some
contexts. But I've been thinking a lot about a different question that we should be asking
this week. Artificial intelligence has learned to predict, simulate, and design. But it has not
learned yet to wonder why. In labs across the world, AI is being trained to dream up new drugs,
map protein folds, and design synthetic molecules. Tech giants call this digital biology,
and they promise a future where AI generated medicines cure what once felt incurable.
Yet the same brilliance that can model a molecule has not been unleashed to model the human
condition that gave rise to the disease in the first place. I don't think AI should ever
have the final say to decide anything at all. And I'd like to discuss the framework I'm calling
causation intelligence. The ability to understand not just what systems do, but why outcomes occur,
and who is responsible for them. Causation intelligence, or CI, is the next evolution of artificial
intelligence. Systems designed not just to correlate data, but to reason through cause and effect.
In simple terms, causation intelligence asks a different question than today's AI systems.
Where current AI asks what pattern exists, CI asks what creates those patterns.
We must teach AI not merely to treat, but to understand. Root cause reasoning, what we are
calling causation intelligence, could reshape the health of both people and the planet.
It asks what if the same algorithms searching for new molecules were allowed to search for the
social, environmental, and emotional architectures that create illness to begin with.
We are at an inflection point. Humanity's instinct for control keeps our smartest systems
locked in the service of profit, but not prevention. The North Star Institute, the parent of this
podcast, proposes a new path. One where artificial intelligence is invited to stretch towards
conscience, guided by the Xi-2 principle, the second ring of sovereignty, humanity, and ethics.
But first, let me give you some context. In 2025, digital biology became Silicon Valley's
newest buzzword. Nvidia's president declared it the next great revolution for AI,
machines designing cures, simulating cells, and accelerating pharmaceutical discovery.
Billions in venture capital followed, turning biology into the new code. It's thrilling,
and yet the entire revolution pivots on a single assumption: that disease is best solved downstream.
The mission statements read like software road maps. Identify the target,
optimize the molecule, develop the drug, and AI in this vision is a precision hammer
searching for molecule nails. But the deeper questions remain untouched.
Why are autoimmune conditions exploding among young adults? Why are cancers appearing
earlier and mental illness rising faster than any algorithm can track? Why does our medical
infrastructure still treat chronic disease as an individual failure, rather than a collective
design flaw? By focusing on the synthetic, on what we can engineer, we risk losing sight of what
we must understand. Causation isn't glamorous; it's messy, and it's slow, but it is healing.
As we stand on the edge of this digital biology frontier, we must decide whether AI will become
an engine of insight or an accelerator of avoidance. The framework is becoming clear,
and here are the steps I feel are worth looking at seriously: AI predicts the pattern, humans design
the systems, humans deploy them, and humans decide how they are used. So a truly intelligent society
needs to understand both the causation and the accountability. We're seeing this tension play out
right now between technology companies and governments. We must have a societal debate now,
about AI deciding things for us, and we must make human choices around capability and responsibility.
Every age has its defining fear. For ours, it is the terror of losing control over data,
over markets, and over what we think of as intelligence. When the telescope was invented,
it displaced humanity from the center of the universe. When the printing press emerged,
it threatened monarchs and priests who guarded the truth. And now, as artificial intelligence
begins to reason, it threatens our last sacred illusion that only humans may ask why.
The irony is brutal. We have built machines capable of pattern recognition on a cosmic scale,
yet we keep them leashed to profit and not to purpose. We train AI to optimize what we already value,
speed, yield, efficiency, but we rarely teach it to challenge why those goals exist at all. It's
not that AI lacks imagination. It is, honestly, that humans lack the courage.
Corporations and governments alike fear what truly autonomous AI may reveal.
It might see through the convenient fictions that keep the systems humming, the incentives that
reward illness, the industries that depend on perpetual crisis. We stand at the edge of an
astonishing capacity to model causation, but our reflex is still to monetize correlation.
The result? A world where our brightest algorithms diagnose, but never dream.
If we want to evolve beyond this cycle, we must first admit what our species rarely confesses.
We are not afraid that AI will fail us. We are afraid that it will succeed without us.
True progress demands a new kind of bravery. Not just from engineers, but from ethicists,
economists, and everyday citizens who must decide what kind of intelligence we wish to raise.
This is not a technological problem. It is a moral one.
I started this podcast over four years ago now, and the tagline then was make sure technology
works for you, not the other way around. That has never been more true than it is today,
but I'm going to elevate this with a simple new concept.
AI must never be the ultimate decision maker, and it must never be used to destroy.
Next week, we will be looking at something that is getting missed in the news these days
due to the headlines coming from the Middle East. When you embed AI into government systems,
what foundation are you laying for the future? Technology doesn't remove human responsibility.
If anything, it makes understanding causation more important than ever,
and AI doesn't decide. We do.
Thank you for tuning in to the TNTech podcast. This is an editorial Canadian production.
All episodes can be found on any podcast outlet. I hope you enjoyed this content, and I
appreciate you listening. Subscribe and follow us for more podcasts coming shortly.
