We began a new thread on ethics in AI, focusing on papers by Buolamwini and Gebru.
AI is now used routinely to make decisions that were once made by people, in areas ranging from hiring to policing to social matchmaking. It seems fair to scrutinize these applications for ethics and fairness. Particularly sensitive applications involve biometrics and facial recognition. In the US, recent examples of federal bills and proposed legislation include the Algorithmic Accountability Act, the Commercial Facial Recognition Privacy Act of 2019, and the No Biometric Barriers Act. All of these propose, in one way or another, auditing procedures for face processing technologies. There have already been large public audits by scholars and organizations, which have affected providers of face processing APIs such as Microsoft, IBM, Amazon, and smaller enterprises.
We studied two prominent examples of such audits, described in two papers: the first by Joy Buolamwini and Timnit Gebru, “Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification,” and the second by the same authors with new collaborators, “Saving Face: Investigating the Ethical Concerns of Facial Recognition Auditing.”