Microsoft announced Tuesday that it has applied “significant improvements” to its AI-powered facial recognition technology to better identify subjects with darker skin tones.
In a blog post, Microsoft revealed that the latest update mainly focused on improving the software’s accuracy in determining the gender of dark-skinned subjects.
After further rounds of testing and retraining of its AI models, Microsoft said it reduced the facial recognition technology’s error rates for men and women with darker skin tones by 20 times. Error rates for women across all skin tones were also cut by nine times.
“With these improvements, they were able to significantly reduce accuracy differences across the demographics,” Microsoft’s blog post reads.
The company also acknowledged that “commercially available” face detection programs tend to carry a bias, identifying the gender of people with lighter skin more accurately. Microsoft appears to have taken note of a study from the MIT Media Lab led by researcher Joy Buolamwini, whose experiment, which tested programs from Microsoft, IBM, and China’s Megvii, found greater inaccuracies in facial recognition results for dark-skinned subjects, especially women.
In tackling the software’s racial bias, Microsoft recognized that “artificial intelligence technologies are only as good as the data used to train them.” That meant expanding the dataset used to train the AI models behind its facial recognition software.
“The training dataset needs to represent a diversity of skin tones as well as factors such as hairstyle, jewelry, and eyewear,” Microsoft added.
Aside from building more comprehensive benchmark datasets, Microsoft researchers also reportedly enlisted help from “experts on bias and fairness” to improve what the company calls its gender classifier.
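To make the kind of disparity the MIT Media Lab measured concrete, here is a minimal, hypothetical sketch of how a gender classifier’s error rates can be broken down by skin tone and gender on a labeled benchmark. This is not Microsoft’s actual pipeline; the sample fields and the predict_gender function are assumptions for illustration only.

```python
from collections import defaultdict

def subgroup_error_rates(samples, predict_gender):
    """Compute misclassification rates per (skin_tone, gender) subgroup.

    `samples` is an iterable of dicts with keys 'image', 'skin_tone'
    (e.g. 'lighter'/'darker') and 'gender' (the ground-truth label);
    `predict_gender` is the classifier under audit. Both are
    illustrative stand-ins, not a real API.
    """
    errors = defaultdict(int)
    totals = defaultdict(int)
    for sample in samples:
        group = (sample["skin_tone"], sample["gender"])
        totals[group] += 1
        if predict_gender(sample["image"]) != sample["gender"]:
            errors[group] += 1
    # Error rate = misclassified / total, reported per subgroup.
    return {group: errors[group] / totals[group] for group in totals}
```

A gap like the one the study reported would show up here as a much higher rate for the (darker, female) subgroup than for (lighter, male); shrinking that gap is what the quoted error-rate improvements refer to.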
Reflecting on the known shortcomings in how AI face detection software is developed, Microsoft senior researcher Hanna Wallach said, “If we are training machine learning systems to mimic decisions made in a biased society, using data generated by that society, then those systems will necessarily reproduce its biases.”





