Experts warn of the risks of Big Tech continuing to use the same techniques, but with far greater resources.
Let’s say you are asked to use an algorithm to determine which politician best suits your preferences.
The tool then asks you a few questions typical of a personality test, offers a result in which you feel reasonably portrayed, and indicates which candidates best match the information it has gathered.
Would you follow its advice?
According to experiments conducted by Helena Matute and Ujué Agudo, researchers at the University of Deusto, for a study they have just published, we would not only accept that recommendation but also act on it, without questioning the reliability of a system whose inner workings we do not know and which, in this case, does not even exist.
The questions were a decoy. The evaluation was a generic answer, identical for all participants (who, incidentally, rated the system as “moderately or highly accurate”), and the algorithm’s calculations were simply a randomisation.
The aim of the researchers, who applied several variations of this experiment to the choice of political candidates and of contacts in a dating app, was to see whether an algorithm can influence people’s preferences through explicit or covert persuasion.
In the dating-app context, the most effective method was covert persuasion: the user was not told which profile was the recommended one, but was simply shown it more often.
As exposure increased, participants tended to choose the profiles they had seen most often, driven by a greater sense of familiarity. The researchers explain that this difference could be related to a preference for human advice in subjective settings, such as whom to date, while we opt for algorithmic advice in rational decisions, such as whom to vote for.
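The design described above, a randomly chosen “recommendation” delivered either by explicit labelling or by covert repeated exposure, can be sketched roughly as follows. This is an illustrative reconstruction, not the authors’ actual materials; the function name and parameters are assumptions.

```python
import random

def build_trial(profiles, covert=True, extra_exposures=3, rng=random):
    """Sketch of one trial: pick a random 'recommended' profile and
    either flag it explicitly or simply show it more often."""
    target = rng.choice(profiles)
    if covert:
        # Covert condition: no label; the target just appears more times,
        # relying on familiarity (the mere-exposure effect).
        sequence = list(profiles) + [target] * extra_exposures
        rng.shuffle(sequence)
        return {"target": target, "sequence": sequence, "label": None}
    # Explicit condition: each profile shown once, but the target is flagged.
    sequence = list(profiles)
    rng.shuffle(sequence)
    return {"target": target, "sequence": sequence, "label": target}
```

In the covert condition the participant sees an apparently neutral sequence in which one profile quietly recurs; in the explicit condition the recommendation is visible and can be consciously discounted.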
However, the researchers regard the variation across contexts as “anecdotal” compared with the importance of the mere fact that this influence exists: “I find it very worrying that, just because you believe it is an artificial intelligence recommending what to do, you trust it. We are consulting the oracle, as in ancient times.”
Why is a fictitious algorithm enough for us to fall into the trap?
“There are people for whom these types of vague personality descriptions ring true; that is why the horoscope works so well. It is the human mind, which is very vulnerable. We are very prone to believing certain things,” the authors reason.
As for how plausible it seems to us that an algorithm can, with a few questions, determine whom we should vote for, both researchers agree that we are increasingly used to blindly accepting all kinds of recommendations.
“We are predictable beings, and algorithms end up knowing us better than we know ourselves, so in the end we believe them. But it is one thing for them to know us and another for what they recommend to be what is best for us.”
“Their goal is not that. It is for you to spend as much time as possible on their platforms. The user has to take that into account, because the decision seems free to us, but it is mediated by the recommendation itself.”
Moreover, the effectiveness of the fictitious algorithm suggests that more sophisticated systems, such as those we interact with every day in search engines, social networks and streaming platforms, among others, have the potential to exert far greater influence. One example is the reach that Facebook’s algorithm could have in its new dating app.
“We have done a few controlled experiments. The large platforms can refine this algorithm continuously, and much of the time we would not even know they are doing it,” says Agudo.
These inequalities between the research an academic institution can carry out and the research conducted privately, and often opaquely, within these companies go beyond the number of subjects and even the technologies available.
“They can do much more research than us, not only because they have access to more people, but also because they go beyond ethics,” Matute emphasizes. “We have had to do everything with fictitious things. They do it with real politicians in real elections.”
The researcher is especially critical of the ethics labs and boards that companies like Google set up: above all because, when someone takes the work seriously, they are pushed aside. In any case, the ethical debate should not be left in the hands of technology companies.
In this regard, the researchers hope that their work, which also involves publishing the raw data collected, will help expand academic and publicly accessible research on these matters.
“Our approach is for everything to be increasingly open. Anyone can analyze the experiments, replicate them, evaluate the hypotheses… Right now, if you want to check whether the Netflix algorithm gives more weight to one variable or another, you have no way of knowing,” says Agudo.
As for the way these now-ubiquitous recommendations are presented, Matute points out that explicit ones are preferable, since they at least spare the user from being influenced covertly.