Surveillance systems based on biometric data have expanded in recent decades with the aim of “improving our security”, but various organizations have begun to question, and even prohibit, their use. The line separating security from citizen control is thinner than ever.

Emptying or throwing away our water bottles before passing through airport security may now seem like a routine measure every time we travel by plane. But it was only 19 years ago that stricter screening systems began to be put in place before boarding, measures prompted by the events of September 11, 2001.

This date marked a turning point in politics, terrorism and the wars that followed, but it was also the beginning of a series of measures that we accepted in the name of our “security”.

As a result of these events that horrified the world, air travel changed, and subtle provisions gradually permeated our society.

After the post-traumatic shock of the attacks, many people began to convince themselves, little by little, of the need to give away their privacy to the security forces, government agencies and companies of various kinds that offered to protect them, especially in the United States, where the collective anxiety was most evident.

Old and new databases were fed with facial identities, but citizens did not think to ask for anything in return, not even permission. Everything was done in the name of “safety”.

A decade later, when the technology seemed mature enough, the first facial recognition systems began to be installed in airports.

The first to deploy one was Tocumen International Airport in Panama, which had a certain reputation as a transit hub for smugglers and organized crime.

In 2011, the government launched a pilot program with an American company called FaceFirst to help prevent illegal trafficking. It was successful enough that the system was expanded to other terminals.

Currently, all Canadian international airports use a facial recognition system. Australia and New Zealand use a border system called SmartGate, which automatically compares the traveler’s face with the information on their electronic passport.
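At its core, a system like SmartGate performs one-to-one verification: it reduces the live camera capture and the photo stored on the passport chip to numerical “embeddings” and accepts the traveler only if the two are similar enough. The sketch below illustrates the idea with toy vectors and an assumed similarity threshold; real systems use deep networks and embeddings with hundreds of dimensions.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two face-embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def verify(live_embedding, passport_embedding, threshold=0.8):
    """1:1 verification: accept only if the live capture is
    similar enough to the photo stored on the passport chip.
    The 0.8 threshold is an illustrative assumption."""
    return cosine_similarity(live_embedding, passport_embedding) >= threshold

# Toy three-dimensional embeddings for illustration only.
passport = [0.9, 0.1, 0.4]
same_person = [0.88, 0.12, 0.41]
stranger = [0.1, 0.9, 0.2]

print(verify(same_person, passport))  # similar vectors -> True
print(verify(stranger, passport))     # dissimilar vectors -> False
```

Raising the threshold makes the gate stricter (fewer impostors pass, more genuine travelers are sent to manual inspection); lowering it does the opposite. That trade-off is central to everything that follows.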

Since 2018, US Customs and Border Protection has implemented a similar system for passengers taking international flights.

But the most cutting-edge, and most controversial, country is China, which has by far covered the largest share of its territory with facial recognition equipment.

It has not only installed these biometric devices in train stations and airports, but also in office buildings, tourist attractions, shopping centers and entrances to mosques.

In some cities, surveillance cameras can be found every hundred meters or so. Police carry these systems built into glasses and helmets, and the technology is used in a variety of situations, from checking a suitcase to making payments.

A law obliges phone companies to record the facial biometric parameters of users activating any new mobile phone, and the development of technology that can identify faces wearing masks has even been promoted through subsidies.

Governments and security agencies around the world already use these recognition methods to identify criminals, identify bodies in forensic medicine, search for missing children and prevent document fraud.

Europe is still quite far from China’s figures for device-based surveillance, although the United Kingdom stands out on the continent for the number of cameras installed in public and private places: an estimated four million or more.

In London alone there are thought to be about 500,000 cameras, compared to 25,000 in Paris or 219 in Madrid.

Facial recognition methods dedicated to mass surveillance had to wait until the second decade of the 21st century, in part because early attempts showed results that were far from satisfactory.

The first time it was used at a major event was the 2001 Super Bowl, where it proved a failure. Numerous false positives showed that the technology was not yet ready for surveilling large crowds.

Starting in 2015, UK police forces began testing it at live public events as well, but a report by Big Brother Watch found that up to 98% of the matches these methods returned were false positives.
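The base-rate arithmetic behind figures like these is worth spelling out: when a short watchlist is checked against a crowd of thousands, even a small false-match rate generates far more false alerts than true ones. The calculation below uses assumed rates chosen for illustration, not figures from the Big Brother Watch report.

```python
def expected_alerts(crowd_size, wanted_in_crowd, true_match_rate, false_match_rate):
    """Expected true and false alerts when scanning a crowd
    against a watchlist (all rates are illustrative assumptions)."""
    true_alerts = wanted_in_crowd * true_match_rate
    false_alerts = (crowd_size - wanted_in_crowd) * false_match_rate
    return true_alerts, false_alerts

# A stadium-sized crowd, ten wanted persons, a 90% chance of
# spotting each of them, and a 0.1% false-match rate per innocent face.
true_alerts, false_alerts = expected_alerts(
    crowd_size=100_000, wanted_in_crowd=10,
    true_match_rate=0.9, false_match_rate=0.001)

share_false = false_alerts / (true_alerts + false_alerts)
print(f"{false_alerts:.0f} false alerts vs {true_alerts:.0f} true ones")
print(f"{share_false:.0%} of all alerts are false positives")
```

Under these assumptions roughly nine out of ten alerts point at the wrong person, even though each individual comparison is 99.9% accurate; this is why headline accuracy figures say little about performance on large crowds.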

One of the obstacles to working properly in large crowds was the difficulty of obtaining quality images. INTERPOL’s Facial Recognition System (IFRS) stores facial images submitted by more than 160 countries, making it a unique database in law enforcement.

This system, launched at the end of 2016, has helped identify more than 650 criminals, fugitives and missing persons. But its website warns that image quality is essential, and that images of only medium or low resolution may fail to produce a match or may reduce the accuracy of the search.

The page specifies that “the ideal would be to have a passport photograph in accordance with the ICAO standard, since it is a complete frontal image of the person with homogeneous lighting on the face and a neutral background”.
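A submitting agency could screen probe images against such guidance before uploading them. The sketch below is a hypothetical pre-check of this kind; the specific pixel thresholds are illustrative assumptions, not official ICAO or INTERPOL figures.

```python
def meets_minimum_quality(width_px, height_px, eye_distance_px,
                          min_eye_distance=90):
    """Rough pre-check before submitting a probe image.
    Face-image guidance often ties accuracy to the resolution
    around the face, commonly expressed as the pixel distance
    between the eyes. All thresholds here (90 px between eyes,
    300x400 px overall) are illustrative assumptions."""
    return (eye_distance_px >= min_eye_distance
            and width_px >= 300
            and height_px >= 400)

print(meets_minimum_quality(600, 800, eye_distance_px=120))  # True
print(meets_minimum_quality(320, 240, eye_distance_px=40))   # False
```

A check like this rejects low-resolution captures early, before they can pollute the search results with the kind of inaccurate matches described above.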

But outside the field of security, we have already seen artificial intelligence applied to faces appear on our social networks as something innocent and novel: back in 2010, Facebook incorporated it to recognize the faces of our friends in the photographs we uploaded and the labels we provided. Its use spread rapidly, and today it is found in many of the smartphones and applications we use every day.

The controversies about the use of biometrics have arisen mainly when it was discovered that some companies and organizations used the information collected for purposes other than those authorized. This issue has been at the center of the debate on ethics and privacy since 2001, fueled in turn by a legal vacuum in the application of new information technologies.

In recent months, controversies over its use have increased, driven in part by the Black Lives Matter movement, in response to which various organizations have begun to back away, citing excessive control and the risk of reinforcing racism and social injustice. Amazon and IBM were among those that took a step back.

Last year, San Francisco became the first major city in the United States to prohibit all local agencies, including the police, from using facial recognition techniques. The development of artificial intelligence has moved faster than legislation or any consensus on its ethical application.
