Welcome to the TNTech podcast. I'm your host Caroline, your guide through the fast-paced world of technology, gadgets, and discussions with some of the women shaping the future of tech. From AI breakthroughs to the latest in cybersecurity, coding tips, and startup stories, we're diving deep into the digital revolution. Let's grab a tea and explore the tech world together.
Good day, all, and thank you for joining TNTech, season five, episode three. This episode follows on from the segment two weeks ago, "AI doesn't decide," and today I'm discussing the second sentence of that: AI doesn't destroy. I'm going to loop in some news of the day first, because I feel it's germane to this piece.
0:45
There's a headline moving around right now that caught my attention. A jury just
0:50
found meta-platforms in Google, libel in a case tied to social media harm, and
0:55
before we all spiral into technology is dangerous, territory, let's just pause
1:00
for a second, because that's not actually the story here. This case is much more
1:06
layered than that. Two weeks ago on this podcast, we talked about the idea that AI
1:11
doesn't decide. Today, I want to build on that with something equally important.
AI doesn't destroy, but that doesn't mean technology is neutral either, and the legal reasoning in this case is starting to draw a line that I think we should understand. Because the jury didn't say the platforms created bad content. They didn't say the companies made decisions for people. What they said was something different: the design of the platform itself, the way it's built, the way it keeps you scrolling, and the way it rewards behavior can contribute to harm. That's a very different argument. It shifts the focus away from what is shown and towards how the system is designed. And that's where this becomes interesting. For years, the dominant narrative has been: these are just tools, and people choose how to use them. That's still true, but this case introduces something in the middle. Environment shapes behavior. Not control, not decision-making. Influence: subtle, scaled, and designed. When you build systems optimized for attention, you are shaping how people experience the environment. That's not destruction, but it's not nothing either. Now let's bring this into the AI conversation. Right now we're having all these big debates. Will AI take over? Will it replace us? Will it make decisions for us?
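The earlier point, that a system optimized for attention shapes the environment without deciding anything for anyone, can be made concrete with a toy sketch. Nothing here is a real platform's algorithm; every name, score, and weight is made up for illustration:

```python
# Toy sketch of an attention-optimized feed (all names and numbers hypothetical).
# Nothing here "decides" for the user or "destroys" anything, but the ranking
# rule alone shapes the environment the user scrolls through.

def predicted_engagement(post):
    """Stand-in for a learned model: score how likely a post is to
    keep someone scrolling (here, a hand-written heuristic)."""
    score = float(post["past_clicks"])
    if post["outrage_flag"]:   # emotionally charged content
        score *= 1.5           # tends to score higher under this objective
    return score

def rank_feed(posts):
    """Order the feed purely by predicted engagement."""
    return sorted(posts, key=predicted_engagement, reverse=True)

posts = [
    {"id": "calm-explainer", "past_clicks": 10, "outrage_flag": False},
    {"id": "hot-take",       "past_clicks": 8,  "outrage_flag": True},
]

feed = rank_feed(posts)
print([p["id"] for p in feed])   # ['hot-take', 'calm-explainer']: 8 * 1.5 = 12 > 10
```

The charged post wins not because anyone chose it, but because the objective rewards it. That is the "design carries responsibility" argument in miniature.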
And honestly, I think we're looking in the wrong direction. Take financial markets. We're told AI is assisting in trading decisions. But trades now occur at minuscule fractions of a second. There is no human reviewing each one.
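No human reviewing each one: that arrangement can be sketched in a few lines. This is a toy sketch, not real trading code, and every name and number in it is hypothetical:

```python
# Toy sketch (not real trading code): humans set the rules once,
# then the system acts on every tick with no per-decision review.

HUMAN_REACTION_TIME_S = 0.25    # rough lower bound for a human response
DECISION_WINDOW_S = 0.000050    # 50 microseconds per trading decision

# The only human involvement: choosing parameters up front.
RULES = {"buy_below": 99.0, "sell_above": 101.0}

def decide(price, rules):
    """Pure rule application: no approval step anywhere in the path."""
    if price < rules["buy_below"]:
        return "BUY"
    if price > rules["sell_above"]:
        return "SELL"
    return "HOLD"

ticks = [98.5, 100.2, 101.7]
print([decide(p, RULES) for p in ticks])           # ['BUY', 'HOLD', 'SELL']

# A human could not even perceive a single decision window,
# let alone intervene in one:
print(DECISION_WINDOW_S < HUMAN_REACTION_TIME_S)   # True
```

The last comparison is the point: when the decision window is orders of magnitude shorter than human reaction time, "a human is in the loop" describes the configuration step, not the decisions themselves.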
Humans set the rules, and then the system runs. Now apply that same logic to military systems. We're told there's always a human in the loop. But what does that actually mean when the data is filtered by AI, the targets may be suggested by AI, and the time to act keeps shrinking? At what point does "in the loop" become symbolic? Because here's the part I can't ignore: if a human cannot meaningfully intervene, not theoretically but actually, then we've already crossed the line. And I think we need to be honest about that. This isn't about being
3:39
anti-technology. My friends would tell you I'm one of the biggest geeks they know.
3:43
But it is about being clear-eyed about what we're building. Because when
3:49
decisions carry irreversible human consequences, someone was technically
3:54
overseeing it is not the same as someone truly decided. And that gap? That's
4:01
where accountability disappears. The real question here is actually quieter. What
4:07
are we building people into? Let's go back to our first principle. AI doesn't
4:13
decide. Humans think we are making decisions continuously, but most of the
4:18
time we're actually making interpretations under constraints. Let's think
about March of 2020, when COVID shut down the world overnight. Human beings didn't suddenly change who they were. What changed were the conditions. The inputs changed, so the interpretations changed, and behavior followed. The same logic applies to AI. AI doesn't independently decide destruction. It operates within inputs, parameters, and constraints, just like we do. Which leads to a deeper insight that's a little uncomfortable: human perception is less individual than we would like to think, and far more environment-driven than we often admit. Once you see that, something becomes very clear. AI systems, platforms, and algorithms don't just produce outputs. They create contexts. And we live inside those contexts more than we realize. So here's
5:20
where I land on this. AI doesn't decide. AI doesn't destroy. But design carries
5:27
responsibility. Not because it controls us, but because it shapes the space we
5:32
operate in. And when you're shaping that space at scale, that matters. This
5:38
isn't about fear. It's about awareness. Because the future of AI isn't just
5:44
about intelligence. It's about architecture. And the real question isn't, what will
5:50
AI do? It's this. What kind of environments are we choosing to build? Because in
5:58
the end, if you shape the environment, you shape the outcome.
Thank you for tuning into the TNTech podcast. This is an editorial Canadian production. All episodes can be found at any podcast outlet. I hope you enjoyed this content and appreciate you for listening. Subscribe and follow us for more podcasts coming shortly.