Rethinking Bias Mitigation: Fairer Architectures Make for Fairer Face Recognition

By Samuel Dooley and others at Caltech and New York University
Face recognition systems are widely deployed in safety-critical applications, including law enforcement, yet they exhibit bias across a range of socio-demographic dimensions, such as gender and race. Conventional wisdom dictates that model biases arise from biased training data. As a consequence, previous works on bias mitigation largely focused on pre-processing...
December 6, 2023