Microsoft announced Tuesday that it has applied “significant improvements” to its AI-powered facial recognition technology to better identify subjects with darker skin tones.
In a blog post, Microsoft revealed that the latest update mainly focused on improving the software’s accuracy in determining the gender of dark-skinned subjects.
After rounds of testing and retraining, Microsoft said it reduced its facial recognition tech’s error rates by as much as 20 times for male and female subjects with darker skin tones. Error rates were also reduced by nine times for women across all skin tones.
“With these improvements, they were able to significantly reduce accuracy differences across the demographics,” Microsoft’s blog post reads.
The company also acknowledged that "commercially available” face detection programs tend to carry a bias, identifying the gender of people with lighter skin more accurately. Microsoft appears to have taken note of a study conducted at the MIT Media Lab by researcher Joy Buolamwini. That study — which tested programs from Microsoft, IBM, and China’s Megvii — found significantly higher error rates in facial recognition results for dark-skinned subjects, especially women.
In addressing the software’s racial bias, Microsoft recognized that “artificial intelligence technologies are only as good as the data used to train them.” This meant it needed to expand the dataset used to train its facial recognition models.
“The training dataset needs to represent a diversity of skin tones as well as factors such as hairstyle, jewelry, and eyewear,” Microsoft added.
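The bias findings above rest on a simple kind of measurement: scoring a classifier's accuracy separately for each demographic group rather than in aggregate. Below is a minimal, hypothetical sketch of that disaggregated evaluation — the group labels and data are illustrative and not from Microsoft's or MIT's actual benchmarks.

```python
# Hypothetical sketch: computing a gender classifier's error rate per
# demographic group. The data below is made up to mirror the kind of
# disparity the MIT Media Lab study reported, not real benchmark results.
from collections import defaultdict

def error_rates_by_group(predictions, labels, groups):
    """Return the misclassification rate for each demographic group."""
    errors = defaultdict(int)
    totals = defaultdict(int)
    for pred, label, group in zip(predictions, labels, groups):
        totals[group] += 1
        if pred != label:
            errors[group] += 1
    return {g: errors[g] / totals[g] for g in totals}

# Illustrative predictions: accurate for one group, not the other.
preds = ["F", "F", "M", "M", "F", "M", "M", "M"]
truth = ["F", "F", "M", "M", "M", "F", "M", "F"]
group = ["lighter"] * 4 + ["darker"] * 4

print(error_rates_by_group(preds, truth, group))
# {'lighter': 0.0, 'darker': 0.75}
```

An aggregate accuracy number would hide this gap entirely, which is why the study — and Microsoft's response — focused on per-group metrics and on diversifying the training data.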
Aside from building more comprehensive benchmark datasets, Microsoft researchers also reportedly enlisted help from “experts on bias and fairness” to improve what the company calls its gender classifier.
Reflecting on these shortcomings in the development of AI face detection software, Microsoft senior researcher Hanna Wallach said, “If we are training machine learning systems to mimic decisions made in a biased society, using data generated by that society, then those systems will necessarily reproduce its biases.”

