IEEE International Conference on Machine Learning and Applications

Understanding Fairness of Gender Classification Algorithms Across Gender-Race Groups



Abstract

Automated gender classification has important applications in many domains, such as demographic research, law enforcement, online advertising, and human-computer interaction. Recent research has questioned the fairness of this technology across gender and race. Specifically, the majority of studies have raised concerns about the higher error rates of face-based gender classification systems for darker-skinned people, such as African-Americans, and for women. However, to date, most existing studies have been limited to African-Americans and Caucasians only. The aim of this paper is to investigate the differential performance of gender classification algorithms across gender-race groups. To this end, we investigate the impact of (a) architectural differences in the deep learning algorithms and (b) training set imbalance as potential sources of bias causing differential performance across gender and race. Experimental investigations are conducted on two recent large-scale, publicly available facial attribute datasets, namely UTKFace and FairFace. The experimental results suggest that algorithms with architectural differences varied in performance, but with consistent trends toward specific gender-race groups. For instance, across all the algorithms used, Black females (and the Black race in general) always obtained the lowest accuracy rates, while Middle Eastern males and Latino females obtained higher accuracy rates most of the time. Training set imbalance further widened the gap in accuracy rates across gender-race groups. Further investigations using facial landmarks suggested that facial morphological differences, due to bone structure influenced by genetic and environmental factors, could explain the lower performance on Black females and the Black race in general.
