
Omni Surveillance: Artificial Intelligence stores consumer data to manipulate their choices 


Omni Surveillance

Companies turn to neuroscientists to capture our attention on the Internet.

The commercial giant Wal-Mart, the largest retailer in the world, holds a patent for a system that detects the mood of its customers simply by studying their faces.

In this way, it aims to identify disgruntled shoppers and give them special attention.

The Australian bank Westpac has a similar system, although it is focused on its staff, so that managers can step in when an employee needs it.

Some consumers and employees may view these technologies favorably because, thanks to them, their problems will supposedly be resolved before they even raise them.

However, experts such as Dr. Monique Mann, of the Queensland University of Technology, warn that, even in these cases, the old concept of consent would still be needed.

She is emphatic: “Law and regulatory frameworks have fallen behind technological advances. This creates serious problems for privacy.”

In this context, her colleagues Katina Michael and M. G. Michael coined the term “omnivigilance”, defining it as “generalized surveillance systems, enabled by technology and embedded in society, in electronic devices, and even in the human body”.

The US Congress is debating how technology intervenes in stores. Companies now have digital innovations that allow them to steer their business more precisely.

However, thousands of individuals, associations and political parties consider that these practices pose a threat to privacy.

They point to shop windows that record the people who stand in front of them; to mirrors endowed with artificial intelligence that advise shoppers on the clothes they try on; to cameras scattered everywhere that fix on customers’ faces, bodies and bags in order to classify them…

In these cases, people are not handing over personal data more or less voluntarily, as happens in many online purchases. Those affected may feel violated, because they are spied on, studied and manipulated without being asked for permission or told what is being done with their data.

In addition, as if the mere presence of consumers were not enough to enable these tracking methods, smartphones make identification even easier: every time individuals connect to WiFi or switch on Bluetooth, their devices broadcast signals that can be used to recognize them.
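
To make this concrete, here is a minimal sketch, assuming a Linux machine with a wireless card already in monitor mode and the scapy library installed, of how passively broadcast WiFi probe requests can be turned into a rough count of nearby devices. The interface name mon0 is a placeholder, and recent phones randomize their MAC addresses precisely to blunt this kind of tracking; none of these details come from the article.

```python
# Minimal sketch of passive footfall counting via WiFi probe requests.
# Assumptions: Linux, a wireless interface already in monitor mode
# ("mon0" is a placeholder name), and scapy installed. Phones periodically
# broadcast probe requests while searching for networks; counting unique
# source MAC addresses gives a rough estimate of nearby devices.
from datetime import datetime

from scapy.all import sniff, Dot11, Dot11ProbeReq

first_seen = {}  # MAC address -> first time the device was observed

def handle_packet(pkt):
    """Record the sender of every WiFi probe request overheard."""
    if pkt.haslayer(Dot11ProbeReq):
        mac = pkt[Dot11].addr2            # transmitter address of the phone
        now = datetime.now()
        first_seen.setdefault(mac, now)
        dwell = (now - first_seen[mac]).total_seconds()
        print(f"{mac} seen; approx. dwell {dwell:.0f}s; "
              f"{len(first_seen)} unique devices so far")

# Capture indefinitely; store=False avoids keeping every frame in memory.
sniff(iface="mon0", prn=handle_packet, store=False)
```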

Amazon’s physical stores and the perfume chain Sephora, among others, already use some of these mechanisms.

The examples multiply by the day. Information panels such as those installed in Seoul’s new international financial center also serve to monitor customers and analyze their movements.

Ostensibly, the main task of these machines is to help those who need it, but the managers of the complex use them to know, at all times, what visitors are doing.

And why do they do it?

Retailers in the UAE are also moving quickly in this direction. Many of them use devices of this type to count and identify people.

“The most requested programs anticipate the direction consumers will take. That is why one major business owner has just bought 250 of these systems,” reveals Peter Biltsted, director for the Middle East and Africa at the multinational Milestone Systems.

The trend will not stop, according to another authoritative voice, Marwan Khoury, marketing manager of another specialized firm, Axis Communications. He recalls that in Japan, deep learning is already used to adapt the advertising displayed along a road to the type of vehicle passing in front of it.

The market for the latest retail technology will reach 1.5 billion euros in 2020, according to calculations by the consultancy Deloitte.

The same developments that exploit biometric details to ensure security, from the prevention of attacks to customs control, are now being applied to commerce.

However, if the first of these uses has triggered an ethical debate, how could the second not do the same?

A former head of the UK Border Force, Tony Smith, stressed at a recent forum hosted by the public broadcaster BBC that governments should legislate to prevent improper use of this data.

Like many others, he worries that a scenario like the one described below is already a reality.

On the way to a department store, a driver stops at a service station to refuel. While filling the tank, he looks at the ads that appear on the pump’s screen.

At that moment, the artificial intelligence system hidden behind that screen is cataloguing him: age, sex… Does he wear glasses? A beard, perhaps?

These factors help the system assign him a demographic profile, which is transmitted to advertisers and follows him to the stores, and even to his home, without his knowledge. He will simply think he has seen a few advertisements at the gas station.
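
As a rough sketch of the kind of pipeline such a pump-top screen could run, here is a minimal example assuming Python with OpenCV installed. Face detection uses the Haar cascade bundled with OpenCV, while estimate_attributes is a hypothetical stand-in for whatever age/sex/glasses/beard model a vendor might plug in; the article does not describe any specific system.

```python
# Sketch of an ad-screen pipeline that profiles passers-by from a camera feed.
# Assumes OpenCV (cv2) is installed; the attribute model is a hypothetical
# placeholder, since the article does not name the real system.
import cv2

# OpenCV ships this pretrained frontal-face Haar cascade with the library.
face_detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

def estimate_attributes(face_img):
    """Hypothetical stand-in for a vendor's age/sex/glasses/beard classifier.

    A real deployment would load a pretrained deep model here and return its
    predictions; this stub only shows the shape of the output.
    """
    return {"age_range": "unknown", "sex": "unknown",
            "glasses": None, "beard": None}

cap = cv2.VideoCapture(0)              # 0 = default camera; placeholder index
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:
        profile = estimate_attributes(frame[y:y + h, x:x + w])
        # In the scenario above, this profile would be forwarded to an ad
        # server so it can choose which advertisement to display next.
        print("demographic profile:", profile)
    cv2.imshow("ad screen camera (demo)", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):  # press 'q' to stop the demo
        break
cap.release()
cv2.destroyAllWindows()
```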

Companies develop increasingly complex techniques to capture an audience that receives many varied stimuli at once.

When a person browses the internet, he or she unknowingly provides a great deal of information. And we are not talking here about the data people hand over to companies and institutions to obtain certain products or services.

Sitting in front of the screen, their eyes move from one element to another; their forehead wrinkles; their lips form a smile; their face reddens with anger… These are physical signals triggered by the content that pages and applications offer.

These signals are so useful, commercially, institutionally and otherwise, that specialized consultancies exist to capture, organize and even anticipate these reactions.

The discipline responsible for understanding this behavior, neuroscience, has advanced to the point that professionals have sensors that track users’ eyes, facial expressions, skin responses, and even brain waves.

The reports and recommendations these specialists produce pursue a clear objective: to capture the attention of the audience.

Web analytics, which reveals which sites a person visits online and what they do there, complements this activity. Companies like Facebook know internet users in great detail.
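
To illustrate what web-analytics collection can look like on the server side, here is a minimal sketch assuming Python with Flask installed; the /collect route and the event fields (visitor_id, url, referrer, time_on_page) are illustrative assumptions, not a description of any real vendor’s API. In practice, a small script embedded in the page would post such an event to this endpoint whenever the visitor loads or leaves a page.

```python
# Minimal sketch of a web-analytics collection endpoint.
# Assumes Flask is installed; the /collect route and the event fields
# (visitor_id, url, referrer, time_on_page) are illustrative choices only.
import json
from datetime import datetime, timezone

from flask import Flask, request

app = Flask(__name__)

@app.route("/collect", methods=["POST"])
def collect():
    """Receive a page-view event sent by a tracking snippet in the browser."""
    event = request.get_json(force=True) or {}
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "visitor_id": event.get("visitor_id"),      # e.g. a first-party cookie
        "url": event.get("url"),                    # page being viewed
        "referrer": event.get("referrer"),          # where the visitor came from
        "time_on_page": event.get("time_on_page"),  # seconds, reported by the page
    }
    # Append to a log file; a real system would feed a data pipeline instead.
    with open("pageviews.jsonl", "a") as f:
        f.write(json.dumps(record) + "\n")
    return ("", 204)  # empty response: the browser does not need a body

if __name__ == "__main__":
    app.run(port=8000)
```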

Facebook and many other companies rely on neuromarketing because these techniques produce biometric data that is more solid and reliable than the information that emerges from surveys and other traditional methods.

Interviewees can lie about their tastes when asked, but their brains will not.

From the clients’ point of view, the most sophisticated equipment is very expensive, as Mark Drummond, one of the founders of the firm Neural Sense, admits.

For this reason, most settle for eye tracking, the least expensive option. Often, this solution is combined with facial expression analysis.
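
As a rough illustration of what an eye-tracking analysis involves, here is a minimal sketch that aggregates gaze samples into a coarse attention grid. The sample format (timestamped x/y screen coordinates), the screen size and the grid resolution are all assumptions, since the article does not describe any particular vendor’s data.

```python
# Sketch of turning raw gaze samples into a coarse "attention grid".
# Assumes gaze samples are (timestamp_seconds, x_pixel, y_pixel) tuples from
# an eye tracker; screen size and grid resolution are arbitrary choices.
from collections import Counter

SCREEN_W, SCREEN_H = 1920, 1080   # assumed screen resolution
GRID_COLS, GRID_ROWS = 8, 4       # coarse grid: 8 x 4 cells

def attention_grid(samples):
    """Count how many gaze samples fall into each cell of the screen grid."""
    counts = Counter()
    for _, x, y in samples:
        col = min(int(x / SCREEN_W * GRID_COLS), GRID_COLS - 1)
        row = min(int(y / SCREEN_H * GRID_ROWS), GRID_ROWS - 1)
        counts[(row, col)] += 1
    return counts

# Fake gaze data for the demo: the viewer mostly looks at the top-left area,
# where a banner ad might sit, and occasionally glances at the centre.
samples = [(0.00, 150, 120), (0.02, 160, 130), (0.04, 170, 125),
           (0.06, 900, 540), (0.08, 155, 118), (0.10, 165, 122)]

counts = attention_grid(samples)
total = sum(counts.values())
for (row, col), n in counts.most_common():
    print(f"cell (row {row}, col {col}): {n}/{total} samples "
          f"({100 * n / total:.0f}% of attention)")
```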

A varied pool of participants helps achieve better results, although, as Drummond points out, there is no evidence that participants’ race or sex influences the data.

Despite how modern this system may seem, its origins go back to the 1950s.

Researchers of that era were already attaching sensors to the skin of the women and men they studied, with the same purpose: to understand physiological responses to marketing actions and draw conclusions from them.

Eye-tracking specialists such as Tobii carry out experiments for giants like Google and Facebook, even inside people’s homes.

In addition to conducting tests with conventional focus groups, they give volunteers special glasses to wear at home while going about their normal routine.

In this way, they probe so-called “shared attention”: what Internet users do when, for example, they watch a series on Netflix while glancing at Instagram at the same time.

The universal adoption of smartphones and the spread of virtual and augmented reality open the way for this work in the future.

Putting a stop to illegal facial recognition

Law and technology rarely keep pace with each other, but this time a city seems determined to curb abusive technological practices based on facial recognition.

And it is not just any city. It is San Francisco, next door to Silicon Valley and one of the nerve centers where the world’s technology ideas are born.

In contrast, the European Union has approved the creation of a database of fingerprints and facial images of people entering the Schengen area.

That may seem a practical way to counter terrorist threats and criminal organizations, but, as the law approved in San Francisco puts it, the risks of this technology are likely to outweigh the benefits.

The San Francisco Board of Supervisors, a body similar to a city council, voted this week on the so-called Stop Secret Surveillance ordinance.

The ordinance aims to limit the indiscriminate use of biometric surveillance in the city, including facial recognition systems. It is a legislative milestone on an almost global scale, given the lack of precise legislation on technology and privacy.

With this ordinance, the city effectively bans the use of facial recognition technology by local government. The text makes the risks of surveillance technologies quite clear:

“While surveillance technology can threaten the privacy of all of us, surveillance efforts have historically been used to intimidate and oppress certain communities and groups more than others, including those defined by a common race, ethnicity, religion, national origin, income level, sexual orientation, or political perspective (…) The propensity for facial recognition technology to endanger civil rights and civil liberties substantially exceeds its supposed benefits.”

This text may one day become historic because of the precision with which it defines the threat posed by surveillance systems built on technologies such as facial recognition.

We must not forget that in China, facial recognition technology is being used in some regions to monitor alleged dissidents of the regime, or that it is being used in some commercial areas to analyze consumer behavior. And all of this is just the tip of a huge iceberg.

A 2016 study by Georgetown University found that roughly half of American adults appear in police facial recognition databases.

How could this have happened? Simply because the global legislation on technological privacy has huge gaps.

We should not forget, either, that political interests in the use of these surveillance technologies, a use we might call excessive, are joined by business interests.

Amazon has sold its facial recognition technology to a large number of companies, but also to police forces in the United States, something that, incidentally, does not sit well with many of its employees and investors.


About the author: Luis R. Miranda

Luis Miranda is an award-winning journalist and the Founder and Editor of The Real Agenda News. His career spans over 20 years and almost every form of news media. He writes about environmentalism, geopolitics, globalisation, health, corporate control of government, immigration and banking cartels. Luis has worked as a news reporter, on-air personality for live news programs, scriptwriter, producer and co-producer on broadcast news.
