Belén Laspra
Assistant Professor in the Department of Philosophy at the University of Oviedo
The study carried out by researchers at Bocconi University (Milan), the University of St. Gallen (Switzerland), the Paris School of Economics, and the École des Hautes Études en Sciences Sociales (Paris) constitutes one of the most solid pieces of experimental evidence to date on the relationship between recommendation algorithms and political attitudes on social media.
In the current context, marked by intense debates on disinformation, polarization, and algorithmic governance, this type of work is particularly necessary. This is not an observational analysis or a simple correlation between digital consumption and political preferences, but an experiment with nearly 5,000 active US users of X in 2023. From the point of view of internal validity, the work is rigorous and technically well grounded. The researchers did not limit themselves to asking what participants thought, but examined what they saw and how they acted: in addition to measuring stated attitudes, they made a remarkable methodological effort to analyze the content actually displayed by the algorithm and changes in users' actual behavior, such as the accounts they began to follow.
So what exactly does it prove? The study compares two configurations of the X feed: chronological mode (“Following”), which displays posts in chronological order from accounts already followed; and algorithmic mode (“For You”), which selects, reorders, and adds content based on internal relevance criteria. Activating the algorithmic feed for seven weeks produced statistically significant shifts in certain political attitudes toward more conservative positions, especially in public policy priorities, in the assessment of judicial investigations against Donald Trump, and in opinions regarding the war in Ukraine. However, it did not change partisan identification or affective polarization; that is, it did not change users' affiliation with Democrats or Republicans, nor did it increase the degree of sympathy or emotional rejection toward the other party.
While the algorithm did not transform established political identities or “convert” users into voters for another party, it does seem to have influenced specific positions within existing ideological frameworks. This suggests that the algorithm can modulate how certain issues are prioritized or how certain events are interpreted, but without altering underlying political identity. For the moment, then, we are talking about specific attitudinal shifts, not massive reconfigurations of belief systems.

The study introduces a breakthrough in relation to the evidence accumulated so far. Previous research had found that deactivating algorithms did not produce any noticeable political effects, and the provisional conclusion was that algorithms did not seem to have a measurable direct political impact. This new work advances a different thesis: the fact that turning off the algorithm produces no changes does not imply that the algorithm had no impact previously. According to the authors, initial exposure can influence whom we decide to follow, thus persistently reconfiguring the user's information environment. The algorithm does not merely order information; it can help shape the structure of the future exposure network. This persistence hypothesis is probably the study's most significant contribution.
To assess the public importance of the implications of this study, it is also necessary to gauge the scope of the platform. According to the Pew Research Center (2025), approximately 21% of adults in the United States report using X. This figure, which should be read with caution given the difficulty of accurately estimating the actual number of active users in an environment with fake accounts, intermittent inactivity, and continuous entries and exits, is equivalent in absolute terms to between 54 and 55 million Americans. Expressed in raw numbers, the scale becomes more visible: we are talking about tens of millions of citizens in a democracy of 330 million inhabitants. Furthermore, the political influence of X is greater than that percentage suggests. Journalists, political leaders, and institutional officials are highly active on the platform, and they carry what happens there into traditional media, parliamentary debates, and other networks. As a result, content circulating within X can end up setting the general public agenda, even among people who do not directly use the platform. This amplification effect explains why a network with lower penetration than others can have considerable political weight.
In this context, the results of the study take on a broader dimension. If the algorithm can influence, even incrementally, how certain issues are prioritized or how certain events are interpreted within the platform, those small shifts are not necessarily confined to the digital environment. To the extent that X functions as a central node for the production and circulation of political discourse, algorithmic dynamics can indirectly influence the general media agenda. The persistence hypothesis does not point to a massive transformation of public opinion, but it does suggest that the architecture of the system can intervene in a space that has effects beyond its direct users.
Could the same thing happen in other countries? The answer requires caution. The data correspond to the United States in 2023 and to a specific political context. The study shows that, in that particular period, the algorithm gave relatively higher priority to conservative content. In other political systems, with different ideological configurations, the direction of the effect could vary. There is insufficient empirical basis to automatically generalize the result to other countries. Precisely for this reason, this opens a niche for comparative research to assess the extent to which these effects depend on technical design, political context, or other factors.
That said, certain limitations of the study, pointed out by the authors themselves, must be taken into account. Although the experimental design is robust, the existence of unobserved factors, such as personality traits or stronger ideological predispositions that influence both the initial choice of feed type and the response to treatment, cannot be completely ruled out. In addition, the sample is composed of active users; the effects could be smaller among occasional users, who receive a lower intensity of exposure.
In short, the research is rigorous and provides relevant evidence. It does not demonstrate massive manipulation or strong technological determinism, but it does suggest something more structural: that algorithmic architecture can have an incremental influence on the formation of specific political attitudes, especially when it intervenes in the selection of the sources that make up the user's information environment. Its importance lies in the finding that the design of digital infrastructures is part of the structural conditions in which political opinions are formed. When these infrastructures potentially reach more than 50 million citizens in a single democracy, the issue ceases to be merely technical and becomes a matter of public relevance.