Microsoft Updates Facial Recognition Tech to Improve Results for Subjects with Darker Skin

Building 92 at Microsoft Corporation headquarters in Redmond, Washington. Image credit: Coolcaesar (CC BY-SA 4.0) via Wikimedia Commons

Microsoft announced Tuesday that it has applied “significant improvements” to its AI-powered facial recognition technology to better identify subjects with darker skin tones.

In a blog post, Microsoft revealed that the latest update mainly focused on improving the software’s accuracy in determining the gender of dark-skinned subjects.

After rounds of testing and retraining, Microsoft said it reduced its facial recognition technology’s error rates by 20 times for male and female subjects with darker skin tones. Error rates for women across all skin tones also fell by nine times.

“With these improvements, they were able to significantly reduce accuracy differences across the demographics,” Microsoft’s blog post reads.

The company also acknowledged that “commercially available” facial recognition programs tend to carry a bias, identifying the gender of people with lighter skin more accurately. Microsoft appears to have taken note of a study from the MIT Media Lab led by researcher Joy Buolamwini, which tested programs from Microsoft, IBM, and China’s Megvii and found greater inaccuracies in facial recognition results for dark-skinned subjects, especially women.

In addressing the program’s racial bias, Microsoft recognized that “artificial intelligence technologies are only as good as the data used to train them.” This meant expanding the dataset used to train the facial recognition software.

“The training dataset needs to represent a diversity of skin tones as well as factors such as hairstyle, jewelry, and eyewear,” Microsoft added.

Aside from building more comprehensive benchmark datasets, Microsoft researchers also reportedly enlisted “experts on bias and fairness” to improve the software, which the company calls a gender classifier.

Commenting on these shortcomings in the development of AI facial recognition software, Microsoft senior researcher Hanna Wallach said, “If we are training machine learning systems to mimic decisions made in a biased society, using data generated by that society, then those systems will necessarily reproduce its biases.”
