The tech giant cited ethical concerns surrounding the facial recognition technology, which it claimed could subject people to “stereotyping, discrimination, or unfair denial of services”.
In a blog post published on Tuesday, Microsoft outlined the measures it would take to ensure its Face API is developed and used responsibly.
“To mitigate these risks, we have opted to not support a general-purpose system in the Face API that purports to infer emotional states, gender, age, smile, facial hair, hair, and makeup,” wrote Sarah Bird, a product manager at Microsoft’s Azure AI.
“Detection of these attributes will no longer be available to new customers beginning 21 June, 2022, and existing customers have until 30 June, 2023, to discontinue use of these attributes before they are retired.”
Microsoft’s Face API was used by companies like Uber to verify that the driver using the app matches the account on file. However, unionised drivers in the UK called for it to be removed after it failed to recognise legitimate drivers.
The technology also raised fears about potential misuse in other settings, such as firms using it to monitor applicants during job interviews.
Despite retiring the product for customers, Microsoft will continue to use the controversial technology within at least one of its products. An app for people with visual impairments called Seeing AI will still make use of the machine vision capabilities.
Microsoft also announced that it would be making updates to its ‘Responsible AI Standard’ – an internal playbook that guides its development of AI products – in order to mitigate the “socio-technical risks” posed by the technology.
The update involved consultations with researchers, engineers, policy experts and anthropologists to help understand which safeguards can help prevent discrimination.
“We recognize that for AI systems to be trustworthy, they need to be appropriate solutions to the problems they are designed to solve,” wrote Natasha Crampton, Microsoft’s chief responsible AI officer, in a separate blog post.
“We believe that industry, academia, civil society, and government need to collaborate to advance the state-of-the-art and learn from one another... Better, more equitable futures will require new guardrails for AI.”