
This is Law Bytes, a podcast with Michael Geist.
On the Roblox platform, chats and interaction
are the name of the game.
And for Safety Online, Roblox has introduced
an age estimation tool.
It's required for teens to access unfiltered,
private messaging and voice chats,
and it helps to ensure online acquaintances
are the age they say they are.
Roblox is using selfie technology
and AI systems to try to determine
whether a player is over the age of 13.
Obviously, we can't estimate someone's age perfectly,
but we can get really close.
Age verification, estimation or inference
is seemingly all the rage right now.
Vendors are promoting it as the solution
to thorny challenges, to limit access to certain sites
and services, and politicians seem eager
to legislate in that direction.
As listeners of this podcast will know,
the Canadian experience with these technologies
can currently be found in Bill S-209,
which would mandate their use across a wide range of sites.
Hundreds of scientists and technology experts
from around the world have taken note of the trend
and come together to issue a public letter warning
about the privacy, safety, and discriminatory risks
associated with these technologies.
Ian Goldberg, who holds the Canada Research Chair
in Privacy Enhancing Technologies
at the University of Waterloo,
was one of the signatories.
Ian has long been engaged at the intersection
between technology and privacy,
and he joins the podcast to discuss the tech,
how privacy enhancing technologies
could address some of the concerns
and the risks with the current legislative approaches.
Ian, welcome to the podcast.
Thanks.
Thanks for coming on.
You know, age verification and age assurance technologies
are really, I think, one of the hot issues of the moment.
I've covered them fairly regularly in the context
of Bill S-209, which would mandate these technologies
to access adult content.
Unfortunately, I think some of the broad definitions
in that bill mean that they'd cover
far more than adult sites, with search engines,
social media sites, and even AI services
potentially targeted.
Now, that bill is still working its way through the Senate.
It may ultimately still go to the House,
but one of the big lingering issues, I think,
involves the technology itself
and the risks that it may pose.
Now, scientists around the world
have begun to speak out on this issue,
and you were one of them.
One of hundreds of experts
that signed on to an open public letter
that laid out the concerns with these technologies.
You know, I want to get into some of the concerns in the letter,
but can you provide a bit of background
on this initiative that led so many scientists
and experts to speak out?
Yeah, so as you've said,
Bill S-209 is an initiative in Canada
toward age verification and age assurance technologies,
but of course, this is happening
in many jurisdictions around the world.
So we've seen it come up in the UK.
It's been proposed in Europe.
Many US states have,
a few have already been enacted
and several more are in the pipeline.
So this is happening all over the place.
But because, in particular,
Europe was starting to talk about it
a team of researchers that are mainly centered
in Europe, got together to put together
this letter, raise concerns,
particularly about mandating technologies
whose problems outweigh their benefits.
I'll link, obviously, to the letter
in the show notes.
There are hundreds of people
from around the world that have signed on.
I want to touch on some of the policy concerns, of course.
But before we do that,
let's deal with some of the technology-related issues.
It's relatively rare for me,
and it's a real treat, to have someone
who's got the technology chops like you do.
So when we're talking about age assessment,
as you called it,
or age assurance technologies,
what are we talking about?
How do these technologies typically function?
Yeah, so they're largely,
you can divide it into three categories.
Okay, so let's talk about each one.
So the first one is commonly called
age verification technologies.
So this is usually something along the lines of
you upload your passport
or your driver's license
or some government-issued ID.
You upload it to a site
and they somehow check that it's valid
and certify that you are this old.
Right?
Of course, all they're really certifying
is that you uploaded a photo
or maybe there's some better technology
to prove you didn't forge the photo
or something like that.
But all it really proves is that
at some point you had access to the ID of an adult.
Right?
Or of a child.
We'll talk more about that later.
But some of these technologies
are meant to be used to prove you are a child
in order to give you access to, like,
child-only chat rooms,
like in Roblox, for example.
Right?
And that has a lot of
more complicated concerns,
which I'm sure we'll talk about later.
So age verification is about
taking some, usually government-issued, document
and verifying that
the document says you are a certain age
or that the owner of the document
is a certain age.
It is a completely separate question to ask:
is the person sitting in front of the computer right now
the same person that owns the passport?
Okay.
So the second category is age estimation.
So age estimation directly
tackles that question about
the person sitting in front of the computer.
What it does, it turns on your webcam
and it looks at you
and it uses some algorithm
to say you look like an adult
or you look like a child.
The accuracy of those is, of course,
questionable.
How easy they are to trick is also questionable.
We'll talk about those in more detail shortly.
But that's largely the second class.
It's just looking at a picture or a video of you
and just trying to guess how old you are.
At least broadly, right?
It won't be too specific.
Of course, when you're near the borderline,
it's going to be wildly inaccurate, right?
And then the last one is age inference.
So age inference is where they collect
a lot of information about your behavior.
So what websites you visit, what links you click,
the contents of your chats, perhaps.
So it uses all kinds of information
about your online activities
in order to try to guess how old you are.
So these are largely the three classes,
age verification, age estimation, age inference.
As I said, the mandates are already in place
in some jurisdictions.
So some of these technologies have already rolled out
because they're legally required to.
There have been some spectacular failures
that we'll surely talk about in a bit.
And of course, if more countries or states mandate
these technologies, their deployments will
only become more widespread.
Let's talk a little bit more.
I just want to drill down on a couple of those.
With respect to the age estimation,
now the use case,
at least that we see with much of this,
is to sort of essentially say,
you're either in or you're out,
you're either an adult or you're not,
or, as you mentioned,
and this is something that doesn't get a lot of attention,
you're a kid or you're not.
Just to be clear,
just how accurate can you possibly be
when you're trying to estimate, say,
between a 17-year-old and an 18-year-old?
That would be difficult to do in human terms
when you're looking at a classroom
where you may have a mix of students.
You wouldn't necessarily know how old they are.
How can a computer do it any better
in terms of just a photo that's taken in this kind of context?
Yeah, unsurprisingly,
almost certainly not better at all, right?
So it can probably
sensibly distinguish between a 40-year-old and a 13-year-old
and possibly some more narrow ranges than that.
When you're near the edge,
as you say, even a human has no chance.
And even worse,
they're easy to trick.
So if you are actively trying to deceive the algorithm,
it will be easy to do so.
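Ian's point about the borderline can be made concrete with a small simulation. This is purely illustrative: it assumes a hypothetical estimator whose guess is the true age plus Gaussian noise with a standard deviation of about 2.5 years, an assumed figure roughly on the order of published benchmarks, not any particular vendor's accuracy.

```python
import random

def simulate_estimator(true_age, error_sd=2.5, trials=10_000, cutoff=18):
    """Simulate a face-based age estimator whose guess is the true age
    plus Gaussian noise, and return how often it puts the person on the
    wrong side of the cutoff. The error_sd is an illustrative assumption."""
    rng = random.Random(42)  # fixed seed for reproducibility
    wrong = 0
    for _ in range(trials):
        guess = true_age + rng.gauss(0, error_sd)
        # Misclassified if the guess and the truth disagree about the cutoff.
        if (guess >= cutoff) != (true_age >= cutoff):
            wrong += 1
    return wrong / trials

for age in (13, 17, 18, 40):
    print(f"true age {age}: wrong side of cutoff {simulate_estimator(age):.1%}")
```

Under these assumptions a clear adult is almost never misclassified, while a 17-year-old lands on the wrong side of the cutoff roughly a third of the time, and someone exactly at the cutoff about half the time, which is exactly the "wildly inaccurate near the borderline" behaviour described above.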
It also does strike me,
and this can kind of lead us
into some of the privacy-related issues,
that one of the things that I imagine
likely happens in situations
where it is difficult to come up with a number
that is reliable,
is that you begin to look for other kinds of information:
exactly the inference-related issues
that you just talked about,
as another mechanism to do this.
I'm wondering,
do you start finding that there was a blending
of these different approaches
where, if it's more difficult,
just based merely on an estimation basis,
well, then you say,
all right, let's take a deeper look
as to everything we can know about this person
to try to get a better sense of how old they likely are
based on the kind of language they use or the sites they visit,
or perhaps require some actual identification,
in which case you're suddenly now layering in
all sorts of additional privacy issues
as part of an effort to try to figure out
how old the person happens to be.
It's true that
you'll probably see some combination
of these three kinds: age verification,
age estimation, and age inference.
You might try,
like, age estimation first;
if you're clearly 40,
you might say that's good enough,
but if the algorithm decides it needs more information,
yeah, you might have to upload your passport,
or it might go through all your emails
or your chats or whatever.
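One way to picture the blending described here is a decision cascade, where the cheapest check runs first and the more invasive ones are used only as fallbacks. The sketch below is a hypothetical design for illustration, not any real provider's logic; the function name, thresholds, and inputs are all assumptions.

```python
def assure_age(estimated_age, estimate_confidence,
               has_id_upload=False, inference_score=None, cutoff=18):
    """Return (decision, method_used). Each fallback trades more privacy
    for more certainty: estimation sees your face, verification sees
    your government ID, inference sees your behavioural history."""
    # Step 1: face-based estimation, accepted only well clear of the cutoff.
    if estimate_confidence >= 0.9 and abs(estimated_age - cutoff) >= 5:
        return (estimated_age >= cutoff, "estimation")
    # Step 2: fall back to a document check if the user supplied one.
    if has_id_upload:
        return (True, "verification")  # assume the document checked out
    # Step 3: last resort, behavioural inference over collected data.
    if inference_score is not None:
        return (inference_score >= 0.5, "inference")
    return (False, "denied")

print(assure_age(40, 0.95))                        # clear adult: estimation only
print(assure_age(17.5, 0.95, has_id_upload=True))  # borderline: escalates to ID
```

The privacy cost rises at each step: only the borderline cases, which are often teenagers, end up subject to the most invasive checks.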
So it clearly raises an assortment of privacy issues,
and we'll have a chance to talk a bit more about that.
But your area of expertise,
as the Canada Research Chair in Privacy Enhancing Technologies,
in many ways is the flip side of this.
It's using technologies to seek to provide
or enhance privacy.
Can you talk a little bit more about
what privacy enhancing technologies are all about?
Yeah, sure.
So privacy enhancing technologies,
or PETs for short,
are broadly what they say on the tin, right?
They're technologies that help people, individuals,
typically protect their privacy,
usually in online settings,
is where we focus our attention.
The most commonly used privacy enhancing technology
in the world is just the HTTPS secure web browsing
that you, your listeners,
are almost certainly using right now, right?
So like 15 years ago,
most websites were accessible only in a completely unprotected,
not private fashion.
So like the plain HTTP,
if you look at your URL bar and your web browser, right?
But then in 2013, the Snowden revelations
pushed technologists to deploy
the more secure and private HTTPS,
the S stands for secure,
quickly and widely.
And today it's actually pretty weird
to encounter a website that doesn't use HTTPS, right?
So that was one big success story
of deploying privacy enhancing technologies.
But largely, indeed, what I study
are algorithms, protocols, and systems
that allow people to communicate, transact,
and be in online spaces,
while being able to protect their privacy,
where privacy is taken very broadly, as I said before.
How, if at all, do PETs play a role
as part of this discussion around age assurance technologies?
Have they been factored into age assurance
and age estimation technologies at all?
As you pointed out earlier,
there are a lot of privacy issues
with some of these technologies,
and privacy enhancing technologies, or PETs,
can be used to protect your privacy
while these age verification technologies are in use.
So as an example,
let's look at age verification,
where they're looking at your passport
or something like that.
You could at least in theory use
a privacy enhancing technology there
to allow a person to prove that, for example,
they have a passport in their possession
that shows they are over 16
or whatever age your cutoff is,
without revealing any other information,
including their identity
or even their specific age.
So your computer
will interact with your passport
and basically upload a proof
to the service that
it interacted with the passport,
it decided you were over 16
or at least the passport is owned by a person
who is over 16,
and the site can check that proof
and be assured there was really a passport there,
you don't have to trust that the computer isn't lying.
The proof actually is sound.
That would be useful.
Of course, like I said before,
checking that the person sitting in front of the computer right now
is the same person that owns the passport,
completely separate question.
If that could be done at all,
and that's a non-trivial if,
then we know that, using privacy-enhancing technologies,
you could do it in a private manner.
So if you had some algorithm
for looking at the webcam or something,
and if you could prove that
that was the same person as owned the passport
and the passport says you were whatever age,
you could prove all this privately.
We call it in zero knowledge,
the details aren't important.
But the upshot is that any computation or analysis
that can be done on the end user's device
that they control could be enhanced
with privacy enhancing technologies to be done privately.
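The kind of proof described here can be sketched, in very simplified form, with salted hash commitments and selective disclosure, one real technique in this space (it's the idea behind formats like SD-JWT), although production ID systems use actual digital signatures and often full zero-knowledge proofs. Everything below is a toy under stated assumptions: the HMAC stands in for the issuer's signature (a real deployment would use public-key signatures so the verifier never holds an issuer secret), and the attribute names are made up.

```python
import hashlib, hmac, os

ISSUER_KEY = b"demo-issuer-key"  # stand-in for a real signing key

def commit(value: str, salt: bytes) -> str:
    """Salted hash commitment: hides the value until the salt is revealed."""
    return hashlib.sha256(salt + value.encode()).hexdigest()

def issue_credential(attributes: dict) -> dict:
    """The ID authority commits to each attribute with a fresh salt and
    'signs' the commitment list (HMAC stands in for a real signature)."""
    salts = {k: os.urandom(16) for k in attributes}
    commitments = {k: commit(v, salts[k]) for k, v in attributes.items()}
    blob = ",".join(sorted(commitments.values())).encode()
    signature = hmac.new(ISSUER_KEY, blob, hashlib.sha256).hexdigest()
    return {"salts": salts, "commitments": commitments, "signature": signature}

def present(cred: dict, attributes: dict, reveal: str) -> dict:
    """The holder reveals ONE attribute (plus its salt); the rest stay hidden."""
    return {"revealed": {reveal: (attributes[reveal], cred["salts"][reveal])},
            "commitments": cred["commitments"],
            "signature": cred["signature"]}

def verify(presentation: dict) -> bool:
    """The site checks the issuer's signature over all commitments and that
    the revealed value opens its commitment. It never sees the hidden fields."""
    blob = ",".join(sorted(presentation["commitments"].values())).encode()
    sig_ok = hmac.compare_digest(
        presentation["signature"],
        hmac.new(ISSUER_KEY, blob, hashlib.sha256).hexdigest())
    opens_ok = all(commit(v, s) == presentation["commitments"][k]
                   for k, (v, s) in presentation["revealed"].items())
    return sig_ok and opens_ok

attrs = {"name": "Alice", "birthdate": "2009-05-01", "over_16": "yes"}
cred = issue_credential(attrs)
proof = present(cred, attrs, reveal="over_16")
print(verify(proof), proof["revealed"])
```

The site learns only "over_16: yes" and that the issuer vouched for it; the name and birthdate stay behind their commitments, which is the shape of the privacy-preserving proof described above.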
But the tricky, well,
there are several tricky bits,
but a tricky bit is that requires the end user
to have a device they trust in the first place
and possibly a pretty powerful one at that.
So it's maybe not for everyone.
Especially possibly in the scenario
where you're trying to prove you are a child,
children are less likely to have personal devices.
They're more likely to be using family computers
or something like that.
And that is a whole extra problem.
And of course, while privacy enhancing technologies
might work for some kinds of age verification like this,
it also doesn't really help that much
when age inference is being done using data
collected about the user,
especially collected non-privately,
and stored on a server somewhere.
There's potential there,
but clearly challenges along the way.
Do we see to your knowledge any of the attempts
to mandate these kinds of technologies
account for the aspect that pets might mitigate
against some of the privacy risks?
Or, for the moment, is the approach basically:
you've got to employ some of these technologies,
and it's left to the providers to decide,
I suppose, whether or not they're going to embed their tech
with some of the kinds of assurances
or protections that you've just talked about?
Yeah, so none of the laws that have passed to my knowledge
mandate the use of privacy enhancing technologies
as part of the age assurance systems.
Somewhat related, Europe has these proposals
for the European Digital Identity and things like that.
And those actually are more privacy friendly.
So you can do the kinds of privacy preserving proofs
I talked about in order to prove things about your identity
privately, but the European age assurance laws
are not even yet written to my knowledge.
The point of this letter was to kind of pump the brakes
on that before it got too far.
Your focus is on the PET side, sort of using technology
to try to address some of the privacy-related concerns.
But can you highlight some of the other privacy-related issues
that might arise, even beyond the ones
that PETs can mitigate against?
What are some of the other privacy related concerns
that have been identified associated with these technologies?
Yeah, sure.
So, like we intimated, there are lots,
so let's talk about some examples.
So the potential privacy harms,
they'll vary according to the type of age assurance technology
we're talking about.
So with age verification,
you're checking typically some government document,
usually by uploading it to some server somewhere.
And we saw a spectacular failure of privacy
with Discord a few months back,
where they were collecting people's IDs.
And it wasn't Discord itself,
it was a third-party service.
Because that's how all of this works.
The sites you're actually interacting with
aren't typically the ones doing the age assurance.
They just contracted out to some third party
who you've never even heard of.
Right?
And so you're uploading your IDs to them.
And then what happened?
They got hacked.
And they were storing everyone's IDs.
And now everyone's IDs were stolen by the attackers.
Right?
So that's not great.
With age inference,
the service is of course collecting lots of information about you
in order to make privacy violating decisions about you.
Right?
So that is done unaccountably,
or in ways that violate your autonomy.
And like I said,
usually this is by third party service providers.
They're not the site you're actually trying to use.
And you as an end user typically have no idea
who these organizations are
that are collecting your personal information,
let alone how they're using it
or how their use will affect you.
Right?
There's also the problem
that governments could see privacy tools,
such as virtual private networks or VPNs,
as methods that people could use not only to protect their privacy,
but also to bypass the age assurance mandates.
Right?
The governments could then aim to restrict
these general use privacy tools like we saw the UK government
proposed last month for VPNs.
That kind of went under the radar for a lot of people.
The letter actually links to that announcement
from the UK government that they'll be looking into
restricting VPNs.
The mandating of privacy-harmful age assurance technologies
has a knock-on problem
of reducing online privacy for things
that have nothing to do with age estimation.
Right?
And also remember that the whole point of these systems
is to distinguish children from non-children.
So they're going to be collecting a large amount
of sensitive personal information about children,
which is even more problematic.
That's a big list.
It's easy to immediately see
where some of the harms arise.
And incredibly, the concerns that have been identified
when people talk about these technologies
and these issues, including in the letter,
aren't even limited to privacy.
There are references to things like discrimination
and risks to online safety as well.
What are some of the examples that might arise
in that context?
Yeah.
So the letter talks about a number of such other concerns,
not just privacy.
So, depending again on
the type of age assurance being used,
some people may not have the required kind of ID.
Right?
Or may not have a smartphone, or a particular kind of smartphone,
or even the digital literacy.
Right?
So requiring age assurance to use a site
will lock out many adults who, while not being children,
do not have the ability, for whatever reason,
to complete the age assurance process.
Right?
So examples used in the expert letter
include the elderly, visitors from another country
without the expected government ID,
the incarcerated, people who cannot financially afford it,
and so on.
Right?
As for online safety,
the very last thing we want is for children
to be unable to access mainstream sites
because they've added these age assessment technologies,
and then they get pushed towards fringe sites
that don't participate in the age assurance system,
but also don't have protections against things like
malware, scams, and misinformation
and then don't fall under the data protection laws
of jurisdictions like Canada and Europe.
It's notable that a number of the examples you've raised,
both when you talk about privacy,
and then you start talking about online safety and harms,
that ultimately it's kids who are, of course,
supposed to be the beneficiary of this.
I mean, so much of this is designed to protect kids,
and it sounds like there are ultimately
some really significant risks that actually
are specific to children who may find themselves
the most vulnerable coming out of some of this.
You've got expertise on the technology side,
just how feasible is the deployment
of this kind of technology at all?
Right?
So you get politicians, and in the Canadian context,
obviously so far, it's been senators,
who are running on the assumption that, of course,
this is doable.
Egged on at times, it feels like, by those
that actually sell these technologies,
and by the industry associations
that say, hey, our members stand ready to do this.
All you have to do is legislate it,
and we're good to go.
But once you talk to people with some technical expertise,
it gets a bit more complicated.
What is likely to happen,
or what are some of the challenges that arise
from a deployment perspective?
Even aside from the situations where parents
actively help their children circumvent the age assurance tech,
for which there's now lots of documentation
now that Australia, for example,
has implemented its social media ban for under-16s.
Even apart from that,
the current technologies have proved pretty easy to get around.
Right?
And as the letter notes,
it's not just that the children themselves
are figuring out how to get around this;
older children or adults create instructions
or offer services to help them.
Right?
So the children who want to get around it
just have to be able to find these instructions
or use this website
in order to get around the age assurance technology.
We even saw examples,
like I said,
of age estimation technology,
where they're trying to figure out your age
from a video of you,
that were super easy to fool:
someone, like, drew a fake mustache and beard
and tricked the system that way.
And when age verification is used
in the other direction to prove you are a child,
it's even less effective, right?
Because now you have adults
who are trying to trick the system
into thinking they are a child.
And that's more straightforward
because the adults have more resources
and more ability to do this.
And worse, this results in a false sense of security
where parents wrongly think
that a particular space is safe for their children.
Yeah, I mentioned that much of this has unfolded
with politicians basically saying,
well, you know, if you build it,
it'll work, to do a bit of a riff
on the Field of Dreams approach.
You know, why don't we conclude with this?
I mean, it's obvious that the experts
have real concerns with this.
And I have to say that, you know,
having closely followed the debates on S-209
and its predecessor legislation,
some of these issues have come up,
though not all of them.
I think this has been a really helpful discussion
to identify a number of issues that frankly
haven't come before committee.
But some of them have.
And the response, I have to say invariably,
is that the technology can be fixed.
You know, as Cory Doctorow sometimes would say,
just nerd harder.
You know, work harder, tech people.
You can fix some of these issues.
You've been sort of at that intersection
between policy, technology,
and the law for a long time,
especially from a privacy perspective.
What are some of your thoughts on how politicians
tend to deal with technology?
Often not looking to deal with technology as it is,
but rather, it seems, as they wish it would be.
Yeah, so one big issue here is that
they're trying to solve a social problem with technology,
and that pretty much never works.
Right?
The social problem that they often cite is
like children's mental health or safety
from predators and things like that.
The letter, in fact, cites some scientific studies
that report no evidence that banning teens
from accessing social media improves their mental health outcomes.
The letter posits that it's not wise to mandate a technology
that at this time is easy to bypass,
and at the same time has detrimental effects,
while not being known to achieve the desired target
of protecting children from mental health concerns,
predators, and so on.
The letter proposes conducting in-depth studies
of the benefits and harms of using age assessment technologies
before those technologies see widespread deployment.
If the sites are dangerous for children to use,
that seems more like a site problem
than a children problem. Right?
And of course, we see with things like
AI psychosis, which has been getting some press recently,
that these sites aren't necessarily safe for adults either.
Right? So fixing the sites,
whether by regulation or other means
so that they're not harmful to children or adults,
seems like a better plan than these age estimation technologies.
The letter explicitly proposes tackling the root of the harm.
They cite the Council of Europe's Commissioner for Human Rights
and, for example, they propose looking
at the social media site algorithms
that prioritize engagement,
even if such engagement stems from promoting harmful content.
But getting politicians and technologists
to understand each other is, of course,
a long-standing problem, as you well know. Right?
And there are problems stemming from all sides.
So as you say, the politicians want tech to be magic
and to just do what they want.
The technologists often oversell their technologies,
especially the ones selling something,
unrealistically presenting them as solutions to social problems,
to get the contracts,
to get the technologies they sell mandated by law,
or for whatever reason.
Technologies are great tools,
and they can sometimes do surprising things.
But the technology cycle is many years long,
and mandating a technology that doesn't currently exist
in an effective and non-detrimental form doesn't make sense.
Technology improves.
I don't actually think the term you used, nerd harder,
is wrong on its face. Right?
We do nerd harder as technologists,
and we do improve technology.
But the improvements are not usually linear
or easily predictable.
The politicians and lawmakers have to wait
for the nerd hardering to be done. Right?
And to demonstrably produce effective
and non-detrimental outcomes
before they can treat those new technologies
as something that actually exists and is usable and useful
and is available as one of many options
in a regulatory landscape.
That's just so well said.
Unfortunately, it's the sort of thing that hasn't been raised,
at least in a way that has convinced some of the senators,
who are simply looking for a solution
and, as you suggest,
are ahead of where the technology is,
thinking that this will just sort itself out.
But I think you've done a really nice job of highlighting why.
That's just not likely to be the case,
because quite clearly there are real risks,
and those risks exist right now if we legislate right now.
So, Ian, thank you so much for the work that you do,
for nerding harder, so to speak,
when it comes to privacy enhancing technologies,
and for joining me here on the podcast.
Thanks for having me.
That's the Law Bytes podcast for this week.
If you have comments, suggestions, or other feedback,
write to lawbytes at pobox.com.
That's L-A-W-B-Y-T-E-S at pobox.com.
Follow the podcast on Twitter at lawbytespod
or follow Michael Geist at mgeist.
You can download the latest episodes
from my website at michaelgeist.ca
or subscribe at iTunes, Google, or Spotify.
The Law Bytes podcast is produced by Gerardo LeBron Laboy.
Music by the Laboy Brothers, Gerardo and Jose LeBron Laboy.
Credit information for the clips featured in this podcast
can be found in the show notes at michaelgeist.ca.
I'm Michael Geist.
Thanks for listening and see you next time.
