
In this lecture, we glimpse our best selves and compare that to a world where we lose everything of ourselves to AI. We are glorious creations that revel in agency, freedom and creativity. What do innovations such as cars that don’t need us to drive and creative AIs that remove the effort of, say, writing or music making mean in this context? Further, with a future being forged by limited perspectives, how can human diversity inform better AI for all?
This lecture was recorded by Professor Matt Jones on the 17th of March 2026 at Barnard's Inn Hall, London
Matt Jones is a computer scientist at Swansea University - and a Fellow of the British Computer Society - who works alongside colleagues from many other disciplines and directly with everyday folk across the world to explore the future of digital technologies. Over the last 30-plus years, this human-centred approach has led to novel approaches for, amongst other things, mobile phone-based information searching and browsing, pedestrian navigation, voice assistants and deformable displays.
Much of his work has been driven by intense and sustained engagements with “low resource” communities from informal settlements in India, South Africa, and Kenya. Through their generous and gracious participation, these extraordinary users, with their fresh and diverse perspectives, have stimulated insights into the future of digital technologies for everyone, globally. In all this work, Matt works as part of a long-standing collaborative team with Jen Pearson, Simon Robinson and Thomas Reitmaier (from Swansea) and colleagues in India (including Dani Raju) and South Africa (including Minah Radebe).
His work has been supported by the UK’s science funders (EPSRC and UKRI). Currently, this funding includes a Fellowship to explore the future of interactive AI and leadership roles in responsible AI and inclusive digital technologies. This funding has led to a series of impactful publications, talks and influences on people, policies, and practices.
Matt has collaborated with private, public and third sector organisations, including Microsoft, the NHS, Google, IIT-B, the BBC and IBM. He is a member of the Foreign, Commonwealth and Development Office's Research Advisory Group and the Welsh Government's AI reviews.
The transcript and downloadable versions of the lecture are available from the Gresham College website: https://www.gresham.ac.uk/watch-now/ai-humanity
Gresham College has offered free public lectures for over 400 years, thanks to the generosity of our supporters. There are currently over 2,500 lectures free to access. We believe that everyone should have the opportunity to learn from some of the greatest minds. To support Gresham's mission, please consider making a donation: https://gresham.ac.uk/support/
Website: https://gresham.ac.uk
Twitter: https://twitter.com/greshamcollege
Facebook: https://facebook.com/greshamcollege
Instagram: https://instagram.com/greshamcollege
And he's going to talk tonight about AI as a pale shadow of humanity.
So, Matt, off you go. Thank you.
Thank you very much.
Thank you very much, Provost, and thank you all for coming.
And those of you online, this talk is also for you.
And Happy St Patrick's Day for those of you who celebrate.
Now, tonight we're going to start at an extreme moment of drama
with a ship, storm, and a body dragged from the sea.
Take a look.
What we've just seen is Jason Bourne, from a very famous movie, being rescued by some Italian fishermen.
Jason Bourne has been shot and he's fallen into the sea.
He's a CIA agent.
Soon after that scene, he recovers his amazing abilities.
He's able to hunt down people.
He's able to scan a room and find the best exit when he's on a mission.
But his memory has been wiped.
Jump forward in the film, not many minutes into that film.
And Jason has gone to a safety deposit locker place.
And he's got a key. He's found this key.
And he opens up the safety deposit locker and he finds multiple passports.
Because, of course, he's a CIA agent.
Each of those passports allows him to navigate a certain territory.
None of them let him understand who he is.
Now, the films and the books they are based on are all about recovering identity.
They're not about him being a CIA agent and saving the world.
It's about him refinding who he is.
And at the end of the books and at the end of the films,
the moment of triumph is when he can say, I remember everything.
He recovers his purpose.
He recovers his autobiography.
Now, I want us to widen the lens a little bit now and think about different types of agent.
And of course, I'm a professor of computer science and someone who studies AI.
So we're going to be talking about AI agents.
Over the last three lectures, this is the fourth, by the way,
we've been considering what these incredible systems,
the large language models, the generative systems,
these things that seem to be able to do all of the things that we felt were part of being human,
but now they seem to do them faster and better than us.
And in the previous three lectures, we've considered, well,
is this going to mean that we are going to become second-class citizens,
that we are going to become subjugated by these higher intelligences?
In the second lecture, we considered, well, if we can't beat them,
perhaps we have to join them, and we become assimilated by these new forms of intelligence.
In the third lecture, oh, there's my dogs.
So my dogs enjoy my lectures, by the way.
As you can see, that was the third lecture.
The question we asked last time was, perhaps we're not going to be subjugated,
we're not going to be assimilated; what we're actually going to be is domesticated,
tamed by AI.
In today's lecture, I'm going to change the way that we look at this debate.
When we think about those AI systems, what do we actually admire?
We admire their performance.
You press a button, and before your eyes, with these generative systems,
fluently and precisely comes seemingly a wonderful answer,
or an image, or a video.
What we're seeing in these AI systems, and we'll explore that tonight,
is the performance that we saw in Jason Bourne.
Performance that is incredible, but doesn't have an autobiography.
It doesn't have that continuity over time, and that makes a difference.
So in tonight's lecture, we're going to consider that.
Instead of thinking, oh, is this going to be a lecture which is pro-AI,
or anti-AI, pro-human or anti-human? None of that.
AI is an incredible new technology.
The forms we're seeing now, in terms of generative systems,
large language models.
So we must grasp that technology and see how we're going to use it.
I'm also not going to sort of romanticize what we are as people,
but I do want us to consider tonight the differences.
What is the difference between the trajectory of current artificial intelligence systems,
and the intelligence that is sitting inside you,
and then inside the group of us, as we participate in this practice of having a lecture together?
So instead of starting with AI, let's begin with some people,
some incredible humans who have done amazing things with their human intelligence.
So Marie Curie.
Marie Curie spent many years in a makeshift lab, a shed,
and she grafted physically with materials to understand and to conceptualize radioactivity.
Or Yo-Yo Ma, wonderful cellist.
An artist who thinks deeply about music,
but also has that integrated with subtle, nuanced, physical skill.
And for those of you who've never heard of Marie Curie or Yo-Yo Ma,
then perhaps Lionel Messi is the one for you.
Lionel Messi, he scans the football pitch.
He can see immediately where he might want to place himself and the ball to win the game.
All of those people are demonstrating what we do as intelligent beings.
They integrate cognition inside the brain,
they integrate emotion, they integrate physicality,
and that integration isn't decoration.
That is what makes human intelligence so incredible
and something that we should use AI to platform.
Now most of us, when we do things on a day-to-day basis,
we aren't simply carrying out processes.
What we're actually doing is taking part in practices.
So chess computers have been around a very long time.
They can beat almost every single chess player.
Hands up if you still play chess?
Yeah, chess has not gone away because it's not a process, it's a practice.
With rivalry, with history, with tournaments,
with interpersonal communication across the board.
How about autopilots, autopilots in aircraft?
Amazing, they can carry out a process to stabilize the plane.
Hands up if you'd like to fly in a plane with no pilots.
No one? Oh, there we go.
Well, there is a test that I can do, and you will be signed up.
Well, most of us don't.
Because flying a plane isn't a process, it's a practice.
A practice with mastery by the pilots.
A practice of responsibility and accountability that requires a human.
So chess, autopilots carry out processes.
What we do is to enact practices.
And those practices build up over time.
And they require us to enter fully into the world
and build and understand that world,
build up memories and then act together collectively.
And that's what we're going to look at tonight.
These are three cognitive architectures inside of all of us.
And we're going to see how these work together
to enable amazing intelligent actions and interactions and collaborations.
As we go through each of those, we'll also have a look at the ways
in which state-of-the-art AI currently handles perception, memory,
and collective intelligence.
So we can see where there is resemblance and where there is deviation.
So let's begin with perception, how we take in the world and make sense of it.
Now, for a very long time, there was a thought that we look out
in a clear windscreen.
And the world comes in, and then your brain whirrs and it makes sense of that world.
Modern neuroscience by people like Carl Friston
suggests actually what you're doing right now is hallucinating.
Do you feel like you're hallucinating?
You are.
You're hallucinating under constraint.
So your brain is trying to predict what I'm going to say and do next.
Ah, so your brain didn't get that right.
And that prediction, that hallucination is modified
by the information that's coming through our senses.
Now, we can demonstrate this, I think.
This architecture with a couple of simple optical illusions.
Here's a very famous one, which caused a lot of controversy a few years ago.
So take a look at that dress.
Would you raise your hand if you now see that dress as white and gold?
OK, hands down. And hands up if you see it as blue and black.
Whoa, it's the same image.
Here you are, human intelligence, and you see it differently.
Now, there's been lots of debate when that optical illusion came out.
And a current thesis is that it's to do with what your brain
is predicting, the assumptions that you have.
So in a very big study, Pascal Wallisch, I think his name was,
got a large number of people and asked them whether they saw it as white
and gold or blue and black.
And you can see, as with this room, the majority saw it as white and gold.
He then asked them, what assumptions have you made about the lighting on that dress?
Did you assume that the dress was in shadow or not?
And the ones that assumed the dress was in shadow tended to see it as white and gold.
So your brain's assumptions affect the way in which you interpret the world.
Here's another one.
Watch this figure.
Hands up if you see that figure going in a clockwise direction.
Hands down.
Hands up if you see it going in anti-clockwise direction.
A few.
And now, see if you can flip the direction.
If you stare at the ankles of the dancer, you might see her flip in the opposite direction.
I can see some nods.
Isn't that amazing?
It's the same figure.
The data that's coming into your brain is ambiguous.
So your prediction model is trying to cope and make the best guess.
Fascinating, isn't it?
Now, what about these things called large language models?
And by that, I mean the kind of things that many of us have started to use, like ChatGPT
and Gemini.
What do they do?
Do they do anything similar to us?
Well, in a sense, they do.
When large language models are being trained, they're presented with millions of examples
of text.
And during the training, they try to predict what is going to come next, just like you
are trying to predict what I'm going to say next.
And if they make a mistake during training, they adjust millions of parameters in their
model, so that next time, they're more accurate.
It's called gradient descent, a way of fine-tuning; you might imagine lots of knobs being
twiddled to improve the accuracy of the model.
But crucially, primarily, this updating of the model happens only during training.
And when the model is fully trained, it freezes.
And it won't be updated again until the next time it is completely retrained.
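That training loop, predict, measure the error, nudge the parameters, can be sketched in a few lines. This is only a toy: a single parameter fitted by gradient descent, standing in for the billions of parameters a real large language model adjusts; the data and the function names are invented for illustration.

```python
# Toy gradient descent: one "knob" (the parameter w) is nudged
# repeatedly to reduce prediction error. Real language models do
# the same nudging across billions of parameters during training.
def train(xs, ys, lr=0.1, steps=100):
    w = 0.0  # untrained parameter
    for _ in range(steps):
        # average gradient of the squared error (w*x - y)^2
        grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)
        w -= lr * grad  # twiddle the knob downhill
    return w

# data generated by the hidden rule y = 3x; training recovers w close to 3
w = train([1, 2, 3, 4], [3, 6, 9, 12])
print(round(w, 3))  # → 3.0
```

And, just as the lecture says, once `train` returns, `w` is frozen: using the model never changes it.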
Meanwhile, right now, your brain is doing millions of little calculations to update your
world model of me, of this talk, of ideas in real time.
Now, we had some optical illusions a few minutes ago, and I want to continue with flickers
and shadows, and go way back in time now, to 350 years before the common era,
where Plato, in the Republic, gives us an amazing way of looking at how we perceive
the world versus how AI perceives the world.
Now in this particular allegory, what Plato said, I want you to imagine prisoners.
You can see some of them there, and they are chained, and they're staring at the wall
of a cave.
Behind them there's a fire.
And people carry objects past the fire like I'm doing now.
But the only thing that the prisoners can see are the shadows.
They never get to see the real objects, just the pale shadows of reality.
Now what Plato was interested in was saying, you know, we can't trust our senses.
We need to turn from the wall, and we need to go out into the world and really understand
what is happening, if we want to understand the world as it is.
Now what does this tell us about, perhaps, large language models?
Well, I want you to imagine large language models as those prisoners.
They are sitting looking at a wall, and they are processing while they're being trained,
just pale shadows of our actual existence, traces of conversation, the final output of an artist's production.
They never turn towards the fire.
They never leave the cave.
They never engage with the world.
And that reduces what they can do in terms of understanding and perceiving the reality around them.
Let's explore this a bit further.
We're going to try an experiment where partly you're going to behave like a large language model,
those models that can predict the next word.
And hopefully we'll also see how you differ because you have experienced the world.
You have come out of the cave, you've walked past the fire, and your full senses have had an experience of life.
So this is an easy experiment.
You're going to see some phrases appearing, and when you see dot, dot, dot, your job is to shout out the answer.
Okay, this is great. Are you ready?
Excellent. First one, start easy.
Excellent, large language model does exactly the same.
Let's try this one.
Some of you even got there before the dot, dot, dot; your prediction models are running hot.
That's wonderful.
Now, let's try this one.
Okay.
Yeah. Okay.
So a pause, not immediate, and I could hear different answers and slight hesitation.
Now, the large language model probably would come up with many of the answers that you've just added.
But I wondered how many of you also visualized the tomato, the sharpness, the knife.
Did you feel it going into the skin of that fruit?
Did you see the juice or feel the juice ooze out?
Let's just try this one.
This might need a trigger warning.
This might bring back bad memories.
It did for me.
You ready?
Close your eyes if you don't like it.
Yeah, exactly.
So what?
What did you forget?
All sorts of things, right?
What did you say?
Trousers.
Okay.
Gresham audiences are the best.
And large language model, I tried this.
Most likely thing is going to be the pen, okay?
And so the LLM is to predict the pen, but you feel the dread.
And it isn't just me being a kind of romantic about that.
You really do.
Inside your brain, when you were processing those sentences,
when they've scanned people's brains doing that, the areas, your sensory areas,
not just your linguistic areas are firing off.
Your experience of life can wrap around your linguistic concepts.
Let's just have a little break and then see Jason Bourne again.
Very famous scene this.
He's in a train station.
And you can see his incredible capabilities.
He's looking for his assailants.
He's trying to find the exits.
He's a predictive machine.
But he's predicting under extreme consequences, right?
If he gets the answer wrong, he could be dead.
And that's the difference really between what large language models are doing
and what we do.
Our predictive systems, when we perceive the world,
are helping us to survive.
Karl Friston, who I mentioned a few slides ago,
in his theory about how our brains predict and then change their predictions,
says that we have evolved in that way to reduce the amount of surprise
our bodies experience.
We'll come back to this.
It's important because people say that AI is incredible, it will innovate,
and it is creative.
And yes, we can see how we can do that and can help us.
But because you've lived an experience under constraint over time,
you have a cognitive architecture that's going to enable
surprisingly different answers if you let it.
So now let's move on to the second cognitive architecture.
We've done perception; now, memory.
And as before, we're going to look briefly at the different types of memory structures
that are in your brains and then think about how they relate to these generative,
frontier models, these large language models.
Let's try a very simple experiment to illustrate working memory.
Can you see that?
Got it in your memories?
It's gone.
Right, put your hands up if you can remember that.
You think you could remember it, very good.
Oh, too.
Well, provost, of course you can.
That's why you're the provost.
Let's just flip that a bit. Here's the same again.
Now, probably many more of you could tell me that string.
What this shows us is the working memory is this short storage that we have in our brains
to allow us to carry out our tasks moment by moment.
And modern neuroscience studies show that you can hold about four chunks,
give or take.
When I showed you that long string, each of those characters was a little chunk and there
were too many of them, so your brain just gave up.
When I put them into little groups, you had four chunks and you could remember them, most
of you could.
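The regrouping trick can be sketched in a couple of lines of code. This is just an illustration of chunking; the letters and group size here are my own invented example, not the string shown in the lecture.

```python
# Chunking: the same characters, regrouped into fewer, meaningful units.
# Working memory holds around four chunks, so four groups are holdable
# where eleven single characters are not.
def chunk(s, size):
    """Split a string into groups of `size` characters."""
    return [s[i:i + size] for i in range(0, len(s), size)]

letters = "FBICIAMI5UN"    # eleven characters, invented for illustration
print(chunk(letters, 1))   # eleven separate chunks: too many to hold
print(chunk(letters, 3))   # ['FBI', 'CIA', 'MI5', 'UN']: four chunks
```

The characters are identical in both cases; only the grouping changes, which is the point of the demonstration.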
Let's now think about large language models.
The working memory of a large language model, like ChatGPT, seems much more capable.
It seems huge, it's called the context window and it can contain the equivalent of hundreds
of pages of a book.
So it seems like it's better than this; it seems like it has this Bourne supremacy.
But there is a big difference.
Take a look at that chunk, Red Riding Hood, just one chunk in your working memory.
Your brains right now are using that chunk to reference vast amount of content.
Maybe the whole story, the nursery rhyme, maybe the time, and I'm looking at some of my
family here, when they acted out being Little Red Riding Hood.
Maybe the emotional feeling that you had when you sat with your parent or grandparent
and had that story read to you.
All of that through one little chunk.
Meanwhile the large language model, it can't do that, it has to have everything in front
of it, it has to see all that text in its working memory, you've got one chunk.
There is a similarity though between the way in which these models use the working memory
and the way we do as well.
So we saw that we can lose track.
You lost track when I showed you that string of nine to ten characters.
Large language models also lose track, it's called the lost in the middle problem.
So the way to think about this is that if you go to a meeting, I'm sure we all have,
I was in one today, took six hours this meeting, I can sort of remember the start of this
meeting, and I can sort of remember the end of this meeting, but there's a bit in the
middle, which I've got no idea, and the same thing happens.
When your context window in a large language model gets very full, the way in which the
system can distribute its attention, probabilistically tends to favor the start and the end of the
prompt and the context window.
And if there's vital information in the middle, it could be easily lost, and that's where
you can see the kind of hallucinations that we've heard about when systems are generating
texts.
Okay, working memory, now semantic memory.
Have a look at these three very simple concepts, and I need you now to put your hands out
in front of you like this, well like this, and I want you to think that this one is Queen
and King, and this one is Apple.
This is a bit complicated, isn't it?
Right, now I want you to position your hands in 3D space that show me where you would
put those concepts to make meaningful sense of them.
Some people are looking at me like, I don't get this. So, move them
around: where are you going to put Apple?
Where's Apple going?
Right, right, right, okay, let's have a look.
If I'd explained it better, you might have put Apple a long way away from Queen and King.
And this is what happens in the semantic memory of large language models.
When they are being trained, the text that is coming in is sliced up into what are called
tokens, usually smaller than words, but you can imagine them as words.
And then those words are converted into a sequence of numbers, it's called a vector,
and that vector places that token somewhere in multidimensional space.
And with enough text and enough training, things cluster so that related concepts are
mathematically connected.
So what a large language model does is to turn meaning into maps, turning the relationships
it has seen in the text into multidimensional space in a model, and then you can do some
mathematical reasoning with it, like the example I've put on the slide.
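The kind of mathematical reasoning the slide refers to can be sketched with toy vectors. The three dimensions here (royalty, gender, food) and their values are invented for illustration; real embeddings have thousands of unlabelled dimensions learned from text. The classic example is that king - man + woman lands near queen.

```python
# Toy illustration of "meaning as maps": hand-made 3-number vectors
# (royalty, gender, food) stand in for real learned embeddings.
import math

vectors = {
    "king":  [0.9, 0.9, 0.0],
    "queen": [0.9, 0.1, 0.0],
    "man":   [0.1, 0.9, 0.0],
    "woman": [0.1, 0.1, 0.0],
    "apple": [0.0, 0.5, 0.9],   # far from the royalty cluster
}

def nearest(target, exclude):
    """Find the stored word whose vector is closest to `target`."""
    return min((w for w in vectors if w not in exclude),
               key=lambda w: math.dist(vectors[w], target))

# the famous analogy: king - man + woman ≈ queen
v = [k - m + w for k, m, w in zip(vectors["king"], vectors["man"], vectors["woman"])]
print(nearest(v, exclude={"king", "man", "woman"}))  # → queen
```

Apple sits a long way from Queen and King in this space, just as the hands-in-the-air exercise suggested it should.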
Inside our brains, our semantic memory, there are some similarities.
We have an associative memory, so your brain will store and place related concepts close
to each other.
The work of Lawrence Barsalou and others shows that, again, the way in which our brain works
with that memory is very different.
So large language models, meaning is turned into maps.
Inside your brain, when I say the word orange, it isn't just a word.
But as we saw previously with the working memory, it is a rich association of concepts, of
senses, of episodes in your life, perhaps when you squeeze an orange, very rich web of information.
Modern large language models don't have that richness.
Now, it's a very, very hot topic in research to try and do what is called grounding these
models.
So our semantic memory, and in fact all of our memory, is grounded.
That means we go out into the world, we experience the world, we pick up an orange, we bite
into it, we feel the juice.
The concept, the word orange, is not just a word, it's an experience.
So there are many, many research groups thinking, how can we do this for large language
models, how can we ground them, bringing in multimodal perspectives, using robots to go
out into the world and experience the world.
But we're a long way off; never say never as a scientist, but on the current trajectory,
we are very different to these large language models.
Now, I'm not going to spend long on procedural memory, because Jason Bourne, he's procedural, he's
got these abilities, his memory's been wiped, but he's still able to operate.
Procedural memories are the things that we've learned to do, like riding a bike, like cutting
a tomato, and we do them without thinking about them.
AI systems, particularly robotic systems, are also able to learn new tasks.
They can watch us and imitate us, they can learn how to grasp this bottle.
But what they don't do, because they have no autobiography, they don't have any sense
of mastery.
When you learn to play the piano, you're not just learning how to clonk the keys, it's
becoming part of your narrative identity.
And you can do that because you have episodic memory, and I'd like to spend some time on
that, because that's kind of the linchpin of everything.
Oh, that's a picture of this place, taken a long time ago. Can anyone remember it when
it was like that?
Some of you might, and you're using your episodic memories. Episodic memories are
just like episodes of a show:
You are remembering things you have done in your past, and you are bringing them back
to life.
Now again, for a long time, science felt that this is a bit like going into a video store,
picking out a video tape, that's how old I am, I remember video tapes, putting them into
the video player, pressing play.
So they saw episodic memory as a recall process.
But the surprising thing is that the architecture in our brains that allow us to predict the world,
as we take part of it right now, is similar to the one that is used when you think about
the past.
So we're not really retrieving a memory, we're recreating it.
What your brain does is tightly compress the things that you've experienced.
When you recall them, little fragments bubble up, and then your amazing predictive processing
system thinks, oh, how can I put these together to make a meaningful story.
Now we know this is true because people like Elizabeth Loftus have done some incredible
experiments.
Let's look at just one such experiment.
Okay, so what participants were shown were videos like this, and once they'd watched
a video, for some of the participants, Loftes and her colleagues would say, how fast
were the cars going when they smashed into each other?
And for other participants, they said, how fast were the cars going when they hit
each other?
What kind of effect do you think that had on the memory?
Does anyone want to suggest?
Yes.
Yeah, leading the witness.
Yeah?
Well, the ones that heard the word smashed would report the speed of the cars as much faster
than the ones that heard the word hit.
They watched the same thing.
But their memory was malleable, because the brain was thinking: smashed, smashed, must
have been fast.
That's a reasonable assumption.
A couple of weeks later, they brought the same participants back and said: so, do you
remember that video I showed you two weeks ago?
Would you like to tell me what you saw?
And if you had been given the word smashed, you would say, it was terrible.
There was debris everywhere, there was glass, there was blood.
If you heard the word hit, then you would just say, well, it was a minor accident.
So our episodic memories are being recreated using prediction engines, same prediction
engines we use in real time perception.
Another really fascinating set of experiments is, let me just try this for you.
If I say to you now: tired, bed, cocoa, snore. I hope some of you are not snoring during
this lecture.
Okay, so there were a set of words.
I say, off you go, come back on the 21st of April when my next lecture is, and then
I come up to somebody and say, what words, sir, do you remember?
Cocoa.
Cocoa, excellent.
Well, what the study showed was that people could remember all of the words they were told,
but they also would say, sleep.
Because the brain was thinking, oh, all of those other words, associate with sleep, I must
have heard, sleep.
Now, why does all of this matter?
Well, first of all, it allows us as human intelligences, new acronym for you, HI, human
intelligences, to do what Endel Tulving calls mental time travel.
You know, we scientists use a complicated term, it just means you can think about the past.
All right, you can go to the past and you can reflect on an event.
And that reflection can allow you to think about future possibilities and create and be
purposeful.
And here's a fascinating finding.
The brain systems that you use when you are recalling the past are, we know from scans when they
put people into these scanners, the same structures that are used when you're trying to simulate
or imagine a future.
That's something.
At the moment, episodic memory, there is nothing like it in artificial intelligence.
People are trying, again, to push the boundaries by creating persistent world models for artificial
intelligence systems, but again, never say never, but at the moment, we're a long way
from that.
Now, so we talked about perception and we've talked about memory.
Now, I want us to bring it back to this: if you put those things together, what do they enable?
Because we can share a history and we can share stories,
they enable us not to be individual intelligences, but to be collectives.
Let's have a look at some examples.
The thing that struck me when I saw that: I thought Theresa May was in the clip. It's not, is it?
What we've just seen is commonplace for anyone who goes to concerts of any form.
We've seen people in a practice together playing in an orchestra.
And yes, they're second-by-second predictive systems, the perception I talked about, that's
clearly important, and you get this amazing synchronization between different members of the orchestra.
But actually, the episodic autobiographies of those individuals over a long period of time
actually scaffolds those performances.
They've shared rehearsals.
They've shared lives.
They've come to create norms about what makes for a good performance for them.
Or take the work that I'm involved in science.
Recently, there was a report that said a super AI researcher is coming.
It's going to be better than all of the professors in the world.
So I thought, oh gosh, that's me redundant then.
But it misses the point.
Because science isn't about finding that one individual brilliant person.
Of course, there will be.
But science is a practice,
done under constraint, with consequences.
Right now, I'm doing it, right?
I spent 30 years building up my reputation.
And any second now, it could crumble into dust.
And because I have that constraint and those consequences,
then that is going to push the kind of things that I might do.
I will try to do things with conviction and with commitment.
We know, of course, don't we, that these models, the large language models,
can ingest all of science that has ever been written,
all of poetry that has ever been created,
every book that has or will be written, every piece of music.
But what they then do is to perform processes.
They're not part of the collective practice.
They ingest, but they're not a member of this collective.
They're not carrying out their processes under constraint with consequences.
And yes, they've read everything that my colleagues and myself have ever written.
But they're not part of that collective.
Aggregation clearly isn't membership.
And the last few minutes of this lecture before we go into question and answer,
I think I've got to address this question.
You might be sitting there and saying, okay, we get it.
We've got brains, we've got amazing cognitive architectures,
and we're different from these generative AI,
different from the large language models.
But does it matter, Matt?
If we can have one of these systems,
these AIs that can do better diagnosis,
or better policy writing,
or better dissertations for your final year project,
does it matter if it is a very different type of intelligence?
Well, let me try to convince you that what you need to focus on is that word,
better.
Large-language models, generative systems,
can produce the most optimal answer, the best answer.
But consider this.
I want you to imagine that there's a world where terrible injustice has been defeated.
There's a world where one group of people was excluded
while a small group of people thrived.
And your job is to try and put that society back together again.
Well, with an AI system, we could absolutely feed in masses of data
about incidents of violence and incidents of calm.
We could put in all of the stories of people,
and we could press a button,
and our large language generative system could come up
with the best optimal solution for a stable society.
But would it come up with the right answer?
Because in the actual world, when apartheid had crumbled,
there was an incredible process and practice.
It was called the Truth and Reconciliation Commission.
If you want to read about it, that's the book that you should get.
It's been out, obviously, a very long time.
And in that book, you heard the stories of people who had memories.
You heard people who were held accountable for their actions.
It wasn't just a process, it was a practice of moral repair.
So what are AI systems better at?
Well, I think as we move into thinking about future AI systems,
the question for many of us building these systems,
and I would say for all of you using the systems,
is not to be anti-AI.
That's mad.
There's some amazing new systems out there in the world right now.
And in the next five years, even more incredible capabilities.
But those systems are like Jason Bourne when he's pulled out of the sea.
No memory of what he is there for.
So this talk, I hope, has helped you think about not being anti-AI,
but being pro-agency, being pro-human.
So we can build systems that enable us to be kept accountable,
to be responsible, and to bring our full world experiences
as we shape the future.
I want us to come back to the beginning of the talk.
I don't know, particularly the younger people in this audience.
And maybe some of us older people.
When we see this, and we read about the incredible abilities of AI,
we can get a little bit despairing, I think,
and ask that question, what are we for in this blazing hot AI summer?
And the answer is to return to Jason Bourne.
Jason Bourne only became purposeful and full when he recovered his autobiography.
The machines that we're building can give us answers.
But only you and you and you and me can live with the consequences.
And as we live with those consequences,
we make decisions with conviction, with commitment, bringing our full selves.
And if we do that, then we can answer the question.
That is what we're for.
