Tech giant Microsoft announced Tuesday that it would end the sale of its emotion-reading technology and begin limiting access to its proprietary facial-recognition software amid growing privacy concerns.
Sarah Bird, principal group product manager at Microsoft’s Azure AI unit, announced the change in a blog post, stating the decision was part of Microsoft’s efforts to ensure its AI technology is used more responsibly.
“We will retire facial analysis capabilities that purport to infer emotional states and identity attributes such as gender, age, smile, facial hair, hair, and makeup,” Bird wrote. “Capabilities that predict sensitive attributes open up a wide range of ways they can be misused, including subjecting people to stereotyping, discrimination, or unfair denial of services.”
While API access to these attributes is no longer available to customers for general-purpose use, Microsoft said it recognizes these capabilities can be valuable when used for controlled accessibility scenarios.
Facial detection capabilities, including blur, exposure, glasses, head pose, landmarks, noise, occlusion, and facial bounding box, will remain generally available and do not require an application.
“We collaborated with internal and external researchers to understand the limitations and potential benefits of this technology and navigate the tradeoffs,” wrote Bird.
According to Microsoft’s chief responsible AI officer, Natasha Crampton, “Experts inside and outside the company have highlighted the lack of scientific consensus on the definition of ‘emotions,’ the challenges in how inferences generalize across use cases, regions, and demographics, and the heightened privacy concerns around this type of capability.”
Lisa Feldman Barrett, a professor of psychology at Northeastern University, reviewed the science behind AI-powered emotion recognition. Barrett found that such tools can detect a scowl, but noted that detecting a scowl is not the same as detecting anger.
The facial recognition technology is no longer available to new customers; existing customers have until June 30, 2023, to apply for and receive approval for continued access to facial recognition operations in Azure Face API, Computer Vision, and Video Indexer.
Privacy concerns over the use of facial- and emotion-based technology previously informed Microsoft’s creation of a “Responsible AI Standard,” which guides the company’s AI product development and deployment.
Microsoft said it has updated this set of guiding principles in accordance with the recent change.
“Our updated Responsible AI Standard reflects hundreds of inputs across Microsoft technologies, professions, and geographies,” wrote Crampton. “There is a rich and active global [dialogue] about how to create principled and actionable norms to ensure organizations develop and deploy AI responsibly.”
Microsoft says it remains committed to supporting technology for people with disabilities and will continue to offer these capabilities through integration into other applications. One such application is Seeing AI, which uses a device’s camera to identify people and objects and then audibly describe them, assisting users with visual impairments.