
Explained: The live facial recognition technology the Met Police is using in London

The Metropolitan Police are using live facial recognition (LFR) technology in London today (July 14).

The Met also deployed facial recognition technology in Oxford Circus last week, resulting in the arrests of three people.

The vehicle-mounted LFR system was set up outside the tube station and scanned around 15,600 people’s biometric data. Of those scans, four produced “true alerts,” and three people were subsequently arrested.
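The reported figures imply a very low alert rate. A quick calculation from the numbers above (the variable names are illustrative, not from the Met's reporting):

```python
scanned = 15_600  # faces scanned at the Oxford Circus deployment
true_alerts = 4   # matches the Met reported as correct
arrests = 3       # people subsequently arrested

alert_rate = true_alerts / scanned * 100
print(f"{alert_rate:.3f}% of scanned faces produced a true alert")
# roughly 0.026% — about 1 in 3,900 people scanned
```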

The police said in a statement: “This technology helps keep Londoners safe, and will be used to find people who are wanted for violent and serious offences and those with outstanding arrest warrants issued by the court.

“Independent experts will also be carrying out a test on the LFR system to establish accuracy.”

But the use of LFR has long been controversial, and its use by UK police forces has previously been found to be unlawful.

What is live facial recognition and how does it work?

Live facial recognition technology works by using cameras to scan the faces of people in a specific area.

Their images are then streamed to an LFR system which contains a database of people the police are looking for.

This “watchlist” includes offenders, people wanted by the police or the courts, or people who pose a risk of harm to themselves or others.
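In outline, the matching step works by comparing each scanned face against every entry on the watchlist and raising an alert only when the similarity clears a threshold. The sketch below is a heavily simplified illustration, not the Met's actual system: real LFR uses deep-learning face embeddings, whereas here each face is just a plain feature vector, and the function names and threshold are invented for the example.

```python
import math

def cosine_similarity(a, b):
    """Similarity between two feature vectors (1.0 = identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def check_against_watchlist(live_face, watchlist, threshold=0.9):
    """Return the best watchlist match above the threshold, or None.

    In deployed systems, a non-match means the image is discarded;
    a match generates an alert for an officer to review.
    """
    best_name, best_score = None, threshold
    for name, stored_face in watchlist.items():
        score = cosine_similarity(live_face, stored_face)
        if score >= best_score:
            best_name, best_score = name, score
    return best_name
```

The threshold is the key operational dial: set it too low and the system produces more false alerts (misidentifications); set it too high and it misses genuine matches.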


Why is facial recognition controversial?

The use of facial recognition technology is considered to be controversial for a number of reasons.

One criticism is that thousands of people’s biometric data is captured without their consent. The images used to create the watchlists can also come from anywhere, taken without the individual’s consent.

Additionally, the police have been using LFR in public without parliamentary debate and the public has not been given an opportunity to vote on its use.

Furthermore, people have been misidentified, and subsequently wrongly stopped and searched, and the technology has also been found to be discriminatory, with Black people more likely to be misidentified.

In an article for Harvard University, Alex Najibi wrote that “a growing body of research exposes divergent error rates across demographic groups, with the poorest accuracy consistently found in subjects who are female, Black, and 18-30 years old.”

The 2018 “Gender Shades” study found that three algorithms performed the worst on darker-skinned females, with error rates up to 34% higher than for lighter-skinned males.

What have the courts and civil liberties groups said about facial recognition?

In August 2020, the Court of Appeal ruled that South Wales Police’s use of facial recognition technology was unlawful, infringing privacy and data protection laws, after lawyers argued that it was also potentially discriminatory.

In June 2021, the Information Commissioner expressed concern about the potential misuse of live facial recognition in public places, and warned that “supercharged CCTV” could be used “inappropriately, excessively or even recklessly”.

In August 2021, a group of civil society bodies urged the Government to ban facial recognition cameras. They also accused the police and the Home Office of bypassing Parliament over guidance for the use of the technology.

“In a democratic society, it is imperative that intrusive technologies are subject to effective scrutiny,” the letter said.

In October 2021, the European Parliament called for a ban on police use of facial recognition technology in public places, as well as a ban on private facial recognition databases.