
I'm William Brangham and this is Horizons. Technology has changed the way
students study and learn, with computers and the internet now helping millions of
young minds every day, and now artificial intelligence has entered the
classroom. Proponents argue it will be a welcome revolution for schools, but with
limited regulations and guardrails, could it do more harm than good? Is this a
reality that teachers and parents and students have to embrace? AI in education,
coming up next.
Welcome to Horizons. It has been four years since ChatGPT was released, and
it and other artificial intelligence tools like it are being deployed
everywhere, including in America's schools. In some elementary schools, kids use
an AI program to make snappy music and dance videos. In others, programs like
DALL-E, which is embedded in ChatGPT, let kids create novel artwork. In the
majority of schools, students can access Google's AI program Gemini because it's
embedded in the company's Chromebook laptops, which, according to one study, are being
used in 80% of K-12 public schools. On those laptops, Gemini is always
at the students' beck and call, ready to help improve writing, create a new image,
or do what it calls deep research. There are entirely new schools, like the
Alpha Schools, which use AI to create personalized lesson plans for each
student, compressed down into just two hours a day, with the rest of the day open
for other activities. These are just a few examples. The argument for widespread
deployment of AI in education is that it helps overworked teachers and engages
disinterested students. At its best, supporters argue, AI will revolutionize and
democratize education. But there is also a growing revolt against this, driven by
parents and some educators who argue that AI is short-circuiting the learning
process and should be seen as a Trojan horse for big companies looking to land
the next generation of customers. We are going to talk today with someone who
argues that those fears are overblown. That AI not only should be used in
schools, but can be used in a way that will transform how teachers teach and
students learn for the better. Salman Khan is the founder and creator of the
nonprofit Khan Academy, which is the biggest online repository of free
educational videos. They have been viewed billions of times. Khan has also
launched a new AI teaching assistant called Khanmigo. It's designed to help
teachers lighten their load and focus instead on sharpening their lessons to
provide better instruction. And he's the author of Brave New Words: How AI Will
Revolutionize Education (and Why That's a Good Thing). Salman, so good to have
you on the program. Thank you so much for being here. Could you just briefly
make the case: why should AI be in schools? Yeah, well, I want to, you know,
I'm also actually quite worried about some of the downsides of AI.
I remember when we first got access to some of these models, well before ChatGPT
came out, we immediately said, hey, this could be a cheating tool. This can
hallucinate. This could take you down rabbit holes that might not be that healthy.
But what I told our team at Khan Academy, you know, a mission-driven nonprofit, is
we said, look, these are real worries, but we should turn these fears into
features. We should put guardrails around them, because there is upside. One
of our theories of change at Khan Academy has always been that not only can we give
access to a lot of people, but we can give access in a way that
personalizes more to the needs of the student. We've always known what world-class
education looks like. I always talk about Alexander the Great: young
Alexander the Great had Aristotle as a personal tutor. But when we had mass
public education two, three hundred years ago, a very utopian idea, we had
to compromise. We had to batch students together in groups of 25, 30, 35, and move
them together at a set pace. Until generative AI, groups like us, like
Khan Academy, could only approximate aspects of personalization with on-demand
video and with software, giving the teacher a bunch of tools so that they can
focus on what the students need the most. But we think AI can help us go
further. Now, I don't think it's a silver bullet. It's not going to solve it
overnight. Even as we've rolled it out into the classroom, we've seen
some real highlights, and we've seen some areas where it hasn't worked as well as
we thought. But as it gets better and better, if it's used well, and if we
have the right guardrails, I think it really can help drive this type of
personalization and open up the aperture of what we assess. A lot of people have
critiques of things like standardized exams because they're multiple choice.
Well, maybe we can start opening that aperture, make students draw
things, communicate things. So that's where I'm optimistic. You believe,
and this is throughout your book, that AI can help level out
a lot of the inequities in education today. That's economic
disparities, geographic distances, stunted achievement among some students.
How so? Yeah, look, if you grew up in a fairly educated
family or a well-off family and you're struggling in school, your family would
either be able to tutor you directly, like I tutored my cousins many, many years
ago, and that was the genesis of Khan Academy, or they'll hire a personal
tutor for you to get that help. Most people do not have access to that. And
Khan Academy, you know, as you introduced, the reason why billions of folks have
used it over the years is that it was that lifeline when they were stuck, when
they needed some of that personalized practice. And as AI gets better,
and I don't think it's fully there by itself, although it's starting to make
some nice progress, if it can start to approximate even more of
what a personal tutor could do, if it can start to take some of the
non-student-facing burden off of a teacher's plate so the teacher has more
time and energy for that student, I think you're going to see more students
from more backgrounds have access to real personalization. You'll have more
teachers able to do richer lesson plans and able to do
more student-facing things, as opposed to some of the things that they have to do
in the background right now. You also give some examples in the book of
deploying these tools to children who may not even have a physical schoolroom
in their rural town. They may live in a nation like Afghanistan,
where students are not encouraged, especially female students are not
encouraged, to go to school. You think that, again, this can be a major
tool for democratizing education. Yeah, you know, I gave a virtual
talk at MIT last year and this young woman was in the room. Her name is
Fatima, and it was exactly that. She grew up in Afghanistan and wasn't allowed to
go to school because of the Taliban. She decided she wanted to apply to MIT,
and the way she learned all of her material was on Khan Academy.
And MIT said, hey, you don't have a transcript. You don't have any test scores.
And she was able to certify her Khan Academy work on another nonprofit we
have called schoolhouse.world, where you can prove what you know. And that was
before having access to AI or Khanmigo. But you could imagine people like
Fatima. You know, what she did took incredible determination and motivation.
If she could have even more supports, maybe an AI that can keep her on track,
that can answer her questions if the video isn't enough. But to be
clear, I don't think it's AI by itself. A lot of the efficacy we see in
situations like Fatima's, or in US schools, is driven by the personalized
practice that we've done well before AI. But we think AI could be a layer
on top of that that can further support students. And it's not all about
technology either. The schoolhouse.world program that we have
is about students tutoring each other over Zoom.
So I don't think it's one thing or the other. But if AI can be added to the
arsenal for an extra layer of support, and we can put the right guardrails around
it, I think it can be a net positive. As your book details, there are
a lot of concerns out there about the deployment of AI in schools,
and I want to go through a couple of these and get your responses.
The Brookings Institution, as you know, did this very deep dive into AI and its
use in education. They talked with hundreds of students, teachers,
parents, education leaders, and technologists in countries all over the world.
Their main takeaway was this. I want to read this quote.
"At this point in its trajectory, the risks of utilizing generative AI in
children's education overshadow its benefits. This is largely because
the risks of AI differ in nature from its benefits.
That is, these risks undermine children's foundational development
and may prevent the benefits from being realized."
What do you make of that? That the technology right now
stunts children in some way and blocks them from accessing its
upsides. I think any tool or technology, if you just
throw it out there and hope for the best, you might not get the results you
want. You know, a lot of these studies are based on
students who have gotten traditional assignments and now have access to ChatGPT
or Gemini. And yes, most students, given that, will take the shortcut
and make the AI do their work for them. And of course,
that's going to stunt their development. That would be akin to
having an unethical tutor or an unethical parent who just did your
homework for you. Of course that's going to stunt your development.
It's going to cause cognitive offloading.
The solution in my mind isn't to just say, oh, that was bad,
let's just get rid of it altogether. It's to say, okay, well,
how do we put guardrails in so that we can prevent that from happening?
And maybe there are ways that we can use the same technology to actually put
more critical thinking into the curriculum.
When a student answers a question, if the AI doesn't just answer it for them,
but says, hey, why did you answer it that way? Can you explain that a little bit more deeply?
These are things that platforms like Khan Academy couldn't historically do.
But now, you know, we're going to be launching something, hopefully in this coming year, where
the AI keeps pushing you. Actually, we've already done versions of that in the last couple of
years, where if you ask, hey, what's the answer here? The AI is not going to say,
oh, the answer is B or it's clearly photosynthesis. The AI is going to say, well,
you tell me what you think it is. How are you going to approach this problem?
And that's what great tutors have always done. So it's not about technology.
It's about how you're actually using it. Same thing with critical thinking, doing research,
et cetera. If you just feed the answer to someone, whether it's a human or the technology
feeding the answer, yes, it's not going to be good. So my view on it is, yes, don't just leave
it on kids' computers, on laptops, all day. We've complained to some of the AI providers
that, look, you've got to take that out of your browser. It's undermining even the work that
kids are doing on Khan Academy. But it doesn't mean that you have to throw out AI altogether.
And what you're going to see, and we see this pendulum swinging every couple of years, is that
no matter what happens in schools, kids from educated families, affluent families,
are getting access to the healthy versions of all of this. So if you take it out of schools,
I think you're just going to drive more inequity. My kids, I monitor what they're doing. I don't
let them just do whatever they want on a computer or an AI. But if they're building a piece of
software by vibe coding, if they're creating some digital art, if they're actually going deeper into
a topic by asking questions and the AI is pushing them, that's a benefit for their cognition,
not a detriment. I watched one of your TED Talks from a couple of years ago, and you gave this
very interesting vignette of a student who was reading The Great Gatsby and struggling to understand
the metaphor at the end of it, the green light at the end of the pier, and wondering what that
metaphor meant. And you hypothesized this idea that an AI could create a fictionalized Jay Gatsby
to interact with the child, and the child could then say, why do you care about that green light
at the end of the pier, and the student would get the answer. So again, that is an interesting
vignette, to interact with fictional characters. But isn't that also part of the problem that
you're describing, that it is simply feeding the child the answer as opposed to teasing out a
curiosity? Yeah, so that is an example, and it wasn't just a vignette, you know, that was an
actual example from a program we have called Khan World School, where we have kids from all over
the world. And this was actually a young woman from India who was reading The Great Gatsby.
And in that school, you know, when we make the students read something, they have to create videos
of reflection. They have to follow a protocol so it's truly their reflection.
But before she made her reflection, she did talk to an AI simulation of Jay Gatsby.
And the reality is that, you know, she spent a lot of time in that conversation and she went
way deeper into the discussion. Even before generative AI, the whole Jay Gatsby looking at the green
light on Daisy Buchanan's dock and all of that, that's been studied forever, for the last 20 years.
Any kid could go onto Google and do a web search, or even when you and I were in school, could go
to CliffsNotes and say, this is what the experts say. So that's always been there. But here she
didn't do that. She didn't just say, oh, the CliffsNotes say this, or a Google web search said
this is what the scholars say. She had like a one-hour conversation with an AI
simulation of Jay Gatsby. She knew it was an AI simulation. But it allowed her to go way, way
deeper into the issue. And the AI didn't just answer her questions. It pushed her thinking:
hey, is there anything in your life that you've ever wanted so badly that
it just kept you up at night? And so it made her reflect on it. So I would argue
it actually showed the best of what you would hope for in a Socratic discussion. And it made
her go much deeper into the topic than if, frankly, she was just in a traditional
classroom and the teacher said, you know, the symbolism represents this, or if she did a Google
search on it. This was a real discussion. One of the other critiques that you sometimes hear
is that you can't always trust what comes out of these devices. We spoke with Becky
Pringle, who's the president of the National Education Association. She had put together a task
force of early adopters of AI in education, and she said that this concern came up
from them. Let's hear what she had to say. One of the things they shared with me was that as they
talk with other educators, what the educators were saying to them is that they need strategies to
actually address the reality that what we pull up, you know, on the internet through AI
is not always accurate. And it might be biased. And so what they are working on is teaching
educators how to themselves discern it. And they're also using it as a teaching technique
for students, so that students do not rely on AI. I've seen this myself, certainly, asking AI
things that I know something about. And sometimes it's wrong, and wrong with incredible gusto and
confidence. How do we address that? That what comes through these very confident-seeming
technologies is sometimes not right. 100%, and I actually think what she just
talked about is exactly the right approach. If you take it out of the curriculum, the kids,
especially kids, I would say, from lower socioeconomic demographics, who might not get exposure at
home or whose parents might not understand the nuances of this, they will not have a framework.
So when they see a deepfake online, when they see misinformation, or when they're talking to an AI
and it's being sycophantic and it's just reinforcing and maybe even making up facts, they won't have
the toolkit to be able to make sense of that. First of all, I'll say this is not a new problem.
This has existed. There was misinformation well before ChatGPT came on, but I do think it is
worse, because it can be so, so convincing. But this is where it's really valuable, once again,
not to just throw kids on ChatGPT and say, yeah, have fun with it, but to say, okay,
let's use it for part of this project. And hey, what did it say to you? Can you believe that?
Are there ways that you can validate that? Does that make sense to you? I mean, just the other day,
I was asking it about all the things going on in Iran, and I said,
hey, you know, Kharg Island, Strait of Hormuz, and the AI said, well, the US, the
ships might go to Kharg Island before going through the Strait of Hormuz. And I said,
wait, doesn't it have to go through the Strait of Hormuz to get to Kharg Island? And it said,
oh, yeah, you've got me. So you absolutely need to build that skill. But if you take it out of
schools, you're not going to have the mechanism to build that skill. So once again,
it can be useful, but you need to know its limitations. And the only way that we can hopefully make
sure a lot of kids understand those limitations is exactly what that educator
was talking about: not eliminating it, but putting it in the classroom, in context, where the
kids can build those critical thinking skills. Your book is full of examples
where you literally post different chat boxes of a student's interaction with Khanmigo.
Explain a little bit why those vignettes are in there and what they demonstrate
about all of these concerns. Well, I think a lot of the debates in education, and
we've seen this well before the generative AI debate, we've seen this debate in history
classes and, you know, the teaching of evolution, everything, a lot of it is based on hearsay.
Someone hears that, oh, I heard this happened in a classroom, that's really scary, I don't want that,
you know, let's ban that, let's take my kid out of the classroom, et cetera, et cetera. But we
find when you actually give tangible examples of what is happening and what's not happening,
the conversation becomes more constructive. And so I didn't want the book
to just say, hey, in theory, you could build critical thinking skills. In theory,
you could monitor the AI so it's not cheating and it's asking Socratic
questions. That's very theoretical. But when you give tangible examples of exactly how that's
happening, then I think the reader can say, oh, yeah, wow, if I or my child had that
type of interaction with an AI, and it prevented me from the other types of interactions,
I could see how that could be really constructive. Yeah, there are plenty of examples where a child
is seemingly asking for an answer. And to your point, it's not being sycophantic and not
just delivering the answer about where Kharg Island is vis-a-vis the Strait of Hormuz, but
is saying, well, what is your understanding about that? Do you understand where the strait
is? I mean, I'm making this example up. But there are plenty of examples where it is trying to tease
out from the child, as you would hope a tutor would. Exactly. You know, I've used Khanmigo myself,
this was, I give this example in the book, where I understand the basics of a supernova,
but I never understood why it exploded. I always said, well, if a star runs out of fuel and
the fusion stops, why wouldn't it just collapse? And I went to Khanmigo. You know, if I went to
ChatGPT or Claude and I asked it, it would just give me an explanation, like a Wikipedia-style
explanation. But when I went to Khanmigo and I asked it, why does a supernova explode,
it said, well, first, what do you know about a supernova? And I said, well, I know it runs out
of fuel, but I would think it would just collapse. Why is it exploding? And it said, well,
your intuition is right. Do you think it would collapse quickly or slowly?
And I said, well, it's only stars of a certain mass, so maybe quickly. And then it said, well,
when something collapses quickly or falls due to gravity, does it ever bounce? And then it clicked
in my head. And I said, oh my God, so you're saying it collapses so fast that it kind of compresses
and then rebounds and jettisons, and this was me writing it, jettisons the atmosphere, or the
outer layers of the star, out into space. And it said, exactly right. And, you know, it said, it's just like
if you put a ping-pong ball on top of a basketball and you bounce it, the ping-pong ball goes
shooting off into the air. And, you know, I tell that to friends at dinner parties
now. If I had just read a Wikipedia article, or ChatGPT or Claude had just given me the explanation,
I might have pretended that I understood it. But by having that back and forth, like you would
with a great teacher, a real Socratic tutor, I really understood it much more
deeply. I'm still amazed that you couldn't just say to Khanmigo, look, I created you,
can you just give me the answer? I appreciate the point you're making, that it did help
solidify your knowledge. In the last minute or so that we have, let's say parents are watching
this and they have heard that AI is starting to be deployed in their child's school, or is being
considered. What are the things that you would argue they should ask about, know about, think about?
Yeah, as a parent, I would be worried about just unfettered use of AI. I actually think AI
cheating is a huge problem in high school and in college. Kids are doing it, and they cannot be
detected. Anyone who tells you that AI cheating can be detected is not telling the full truth.
These detectors have a high false positive rate. So as a parent, as an educator,
watch out for those things. Anything that we want to be 100% sure that a student is capable of
doing by themselves, we should be doing in the classroom and proctoring, not assuming
that they're going to follow an honor code. With that said, if your child does get some exposure
to it, and I'm not talking all day, it could be a project here and there, or in the classroom maybe
once a week for 20 minutes, but it's in the context of: what is it telling us? How is it useful?
Can you believe this information? How can you double-check? Can it be a Socratic tutor that makes
you go deeper into a math problem or a science problem or a history issue? Then that's
very powerful. Sal Khan, the book is Brave New Words: How AI Will Revolutionize Education
(and Why That's a Good Thing). Thank you so much for being here.
Thanks for having me.
Before we go, we want to shine a light on a quite different trend in education and that's getting
very young kids out of buildings and into the great outdoors. These classrooms go by different names,
outdoor school or forest school, but the idea is simple. You take kids, usually preschoolers,
outside, where natural lessons abound. This one in Midland, Michigan is run by the Chippewa Nature Center
and in just one example, kids catch tadpoles and then frogs as a way of learning about the biological
life cycle. Outdoor schools like this are a small but growing movement and while their numbers
doubled in the years leading up to the pandemic, it's estimated there are still just under a
thousand of them operating nationally. Of course, many traditional schools have long incorporated
some of these same concepts as well, getting kids outside into school gardens or on nature field
trips as often as possible. Studies show that outdoor schools prepare kids just as well for
kindergarten as traditional ones do, but their supporters argue that the messy, ever-evolving
and even at times risky activities offer something invaluable. Here's Jen Kurtz, who helps run the
Chippewa School, describing this to my colleague Jeffrey Brown. In a classroom, a lot of the things
that you have are static and were designed to be played with in one particular way. The natural
environment changes every single day. The weather changes, the humidity changes, there's scat left behind,
there's new footprints, there's leaves that are chewed today that weren't chewed yesterday,
and so there's just a natural curiosity that happens there. For all of our existence, kids have grown
up outdoors. That has changed in these current generations. That is it for this episode of Horizons.
You can watch us on YouTube or listen wherever you get your podcasts. If you like what you hear,
please subscribe and give us a rating. It helps more people see this program. Thank you so much
for watching. We'll see you next week.

PBS News Hour - Segments
