In a statement on Friday, the Metropolitan police announced the decision to deploy a facial recognition system across London to help identify suspects, catch serious criminals, and track down missing people. The force hopes that the AI-powered technology will help “tackle serious crime, including serious violence, gun and knife crime, child sexual exploitation and help protect the vulnerable.”

The facial recognition technology will be deployed in places where data indicates people responsible for serious crimes are most likely to be found. Even there, the cameras will be trained on small, targeted areas. The new system, made by NEC, a Japanese IT and electronics company, will be rolled out within a month.

The facial recognition system identifies people by measuring their facial features. First, the software will examine the geometry of a face captured by the cameras and create a biometric map of it, a so-called “faceprint.” Next, the system will compare the faceprint with those on a watchlist; watchlists can be created for different locations and purposes. Once a person is identified, the system will alert police officers stationed near the cameras, and a human operator will then verify the match made by the computer. If a detected person is not on the watchlist, the system will delete the information immediately. The cameras will only be used for five to six hours a day.
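
The Met has not published implementation details, so the following is only a minimal sketch of the matching step described above. Every name in it (the 128-number faceprint, the watchlist entries, the distance threshold) is a hypothetical illustration, not NEC’s actual system.

```python
# Hypothetical sketch of a faceprint-matching pipeline.
# None of these names, sizes, or thresholds come from NEC or the Met.
import numpy as np

WATCHLIST = {
    "person_a": np.random.rand(128),  # placeholder faceprints
    "person_b": np.random.rand(128),
}
MATCH_THRESHOLD = 0.6  # assumed distance cutoff, tuned per deployment

def faceprint(face_image: np.ndarray) -> np.ndarray:
    """Stand-in for the biometric mapping step: a real system would use
    a trained model to turn facial geometry into an embedding."""
    return face_image.flatten()[:128]

def check_against_watchlist(face_image: np.ndarray) -> str | None:
    """Compare a captured faceprint with every watchlist entry and
    return the closest match, or None if nobody is close enough."""
    probe = faceprint(face_image)
    best_name, best_dist = None, float("inf")
    for name, listed in WATCHLIST.items():
        dist = float(np.linalg.norm(probe - listed))
        if dist < best_dist:
            best_name, best_dist = name, dist
    if best_dist <= MATCH_THRESHOLD:
        return best_name  # alert nearby officers; a human verifies the match
    return None           # no match: the faceprint is discarded immediately

if __name__ == "__main__":
    frame = np.random.rand(16, 8)  # stand-in for a camera capture
    hit = check_against_watchlist(frame)
    print(f"alert: possible match for {hit}" if hit else "no match; data deleted")
```

In any system of this shape, accuracy hinges on the embedding model and the chosen threshold, which is the trade-off behind the trial error rates discussed later in this article.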

The police will create “bespoke” watchlists of the people they wish to identify, “predominantly those wanted for serious and violent offences.” The force also says that cameras will be “clearly signposted” and that officers “deployed to the operation will hand out leaflets about the activity.” The statement adds: “At a deployment, cameras will be focused on a small, targeted area to scan passers-by.”

Privacy activists, politicians, data regulators, and human rights groups have raised serious concerns about the facial recognition system; some have even called it “an invasion of privacy” and “a serious threat to civil liberties.” However, the Met said it would only begin using the technology after consulting the communities where it will be used.

Assistant Commissioner Nick Ephgrave said in a statement: “We all want to live and work in a city which is safe: the public rightly expect us to use widely available technology to stop criminals. Equally I have to be sure that we have the right safeguards and transparency in place to ensure that we protect people’s privacy and human rights. I believe our careful and considered deployment of live facial recognition strikes that balance.”

Face-matching technology has been trialled since 2016, and one of those trials found that 81% of the matches made by the system were incorrect. That record is why many critics object to the rollout. Nevertheless, the force maintains that the technology is “tried and tested.”

Nick Ephgrave also stated: “As a modern police force, I believe that we have a duty to use new technologies to keep people safe in London. We are using a tried-and-tested technology. Similar technology is already widely used across the UK, in the private sector.”