Study shows X's (formerly Twitter) algorithm moves users towards more conservative political positions

On the social network X (formerly Twitter), when users select the ‘For You’ option, the algorithm tends to steer them towards more conservative political positions, according to research conducted with nearly 5,000 participants in the United States in 2023. The authors randomly assigned users to either an algorithmic or a chronological feed for seven weeks. Switching from the chronological to the algorithmic feed increased engagement and shifted political opinion towards more conservative positions, especially on political priorities, perceptions of criminal investigations into Donald Trump, and opinions on the war in Ukraine. Conversely, switching from the algorithmic feed to the chronological feed had no comparable effects. ‘Initial exposure to X's algorithm has persistent effects on users' current political attitudes,’ say the authors of the study, published in Nature.

18/02/2026 - 17:00 CET
Expert reactions


Ramón Salaverría

Professor of Journalism at the University of Navarra and coordinator of Iberifier (Iberian Digital Media Observatory)

Science Media Centre Spain

Is the research of good quality?

"It is an experimental study with a sample of nearly 5,000 X users in the United States, conducted over a seven-week period in the summer of 2023. This period falls six months after Elon Musk's purchase of Twitter and one year before he publicly endorsed the then-Republican candidate for the US presidency, Donald Trump. Both in its sample size and in its data collection and analysis procedures, this is rigorous research, as expected from a journal of Nature's standing. It should also be noted that the research team conducted the experiment on its own, without the collaboration of X, which reinforces the independence of the results."

How does it fit with existing evidence?

"Since networks such as Facebook and Twitter appeared in the first decade of the 2000s, theories such as the echo chamber and the filter bubble have suggested that social networks act as selective filters for certain opinions. According to these theories, through their secret algorithms, the networks act as gatekeepers of information, increasing the visibility of some content and reducing that of others. In their quest to maximize user loyalty, social networks have become inadvertent filters of the information their users consume.

Over the last two decades, several studies have attempted to measure these filtering and selective reinforcement effects across social networks as a whole. What is unique about this study published in Nature is that it analyzes the effects of X on the political positioning of its users, comparing the “For You” mode, which presents posts according to an algorithmic selection determined by the platform itself, with the “Following” mode, in which each user decides which accounts to follow and receives their posts in chronological order. The study found that, with repeated use, the algorithmically curated “For You” mode encourages X users to shift toward more conservative political positions.

However, the effect is not uniform; it varies according to users' starting position on the ideological spectrum. Those who at the beginning of the experiment defined themselves as progressive (“liberal,” in the study's terminology) experienced a relatively limited shift toward conservative positions; those who initially identified as conservative or independent moved toward even more conservative positions."

The data is on US users. Could it be that in other countries the algorithm also steers users toward more conservative views?

"It could be. However, at the moment it is only a hypothesis. Further studies would be needed to verify whether the specific configuration of X in each country and in each language translates into a similar phenomenon of ideological shift toward conservative positions as that detected in the United States. It should also be borne in mind that, beyond X, the political particularities of each country may contribute to enhancing or, conversely, moderating this effect. For now, what this study allows us to affirm is that this effect has been verified in the United States."

Are there any important limitations to consider?

"The authors of the study highlight two main limitations: first, that the results must be confined to the X network and, second, that they are tied to the specific time period in which the experiment was conducted.

Regarding the first limitation, we should not overreach by claiming that all social networks influence the political opinions of their users in the same way as X. In fact, the authors of the study point out that the preferences of each platform's owners may lead each social network to influence its users differently.

As for the time limitation, the study was conducted over a period of seven weeks in the summer of 2023. Although this is a relatively long period, it does not allow us to determine the effects of longer exposure, such as years of social media use."

The author has not responded to our request to declare conflicts of interest


Celia Díaz Catalán

Researcher at the TRANSOC Institute of the Faculty of Political Science and Sociology at the Complutense University of Madrid (UCM)

Science Media Centre Spain

Is the research of good quality?

"This work is a rigorous warning about how malleable political attitudes are in response to the architecture and design of digital platforms. Furthermore, it shows that political priorities can be modified without altering party identities.

The most notable aspect in terms of research quality is that the authors use a randomized controlled trial to analyze a systemic risk that is often raised as a concern but is rarely studied with a design of this kind. The research group used data triangulation, combining opinion surveys (subjective) with actual behavioral data on engagement (account tracking) collected via a browser extension. These measurements made it possible to show how the algorithm alters the structure of an individual's social network, as well as their information flow.

How does this fit with existing evidence?

"Unlike the Meta studies that only analyzed turning the algorithm off, this design measures both the effect of activating it, switching from X's chronological mode to its algorithmic mode, and the effect of turning it off. This allowed the authors to capture the persistence of the effect generated by the algorithmic mode."

The data is about users in the US. Could it be that in other countries the algorithm also directs users toward more conservative viewpoints?

"The measurements were taken during a politically charged year in the US, so the magnitude of the effects could vary in less polarized contexts or in other countries.

One striking finding is a consistent pattern: the algorithm ‘silences’ traditional media, and the resulting void is filled by more radical voices. If, in a given country, the most active and polarizing voices are conservative, the algorithm will amplify them."

Are there any important limitations to consider?

"X's algorithm lacks transparency and is constantly changing (especially under Elon Musk's management), so the results could be different today or in a different political culture (e.g., in Europe or Latin America).

It is also unknown whether the effect found persists in the long term, whether it intensifies or dilutes. Furthermore, the study only analyzes people who already use X frequently and does not provide information on how the algorithm would affect someone new to the platform or the general population that does not use social media.

In short, this work is of particular interest now, in the midst of negotiations and controversies over the governance of digital platforms and social media, because it shows that algorithmic mediation, far from being neutral, favors specific narratives and alters citizens' information diet."

The author has declared they have no conflicts of interest


Walter Quattrociocchi

Full Professor of Computer Science, Head of the Data Science and Complexity Lab, Sapienza Università di Roma
 

Science Media Centre Spain

The study is methodologically sound and represents one of the most rigorous independent field experiments on algorithmic feeds conducted to date. The randomised design and combination of survey data with behavioural traces provide credible causal evidence that exposure to X's algorithmic feed can influence certain political attitudes in a relatively short period of time. However, it is important to note that the observed effects appear to operate through increased engagement and amplification of content, rather than through direct ideological persuasion. The algorithm promotes highly engaging political content—which, in this specific context, happens to be more conservative—and users subsequently adapt their following behaviour, leading to persistent exposure effects.

In terms of the broader literature, these findings should be interpreted as complementary rather than contradictory to previous large-scale experiments on Meta platforms, which found limited political effects. The key difference lies in platform dynamics and business models: algorithmic systems are optimised for attention and engagement, not political outcomes. When engagement-based ranking amplifies already prominent political narratives, subsequent attitude changes may arise as a by-product of the attention economy rather than evidence of intrinsic ideological orientation.

A significant limitation is that the experiment focuses on active users in the US during a specific political period; therefore, the direction of ideological effects cannot be generalised to all countries or platforms. In other information ecosystems, engagement optimisation could plausibly amplify different political orientations depending on local media supply and user networks.

The author has declared they have no conflicts of interest


Belén Laspra

Assistant Professor in the Department of Philosophy at the University of Oviedo

Science Media Centre Spain

The research carried out by researchers at Bocconi University (Milan), the University of St. Gallen (Switzerland), the Paris School of Economics, and the École des Hautes Études en Sciences Sociales (Paris) constitutes one of the most solid pieces of experimental evidence to date on the relationship between recommendation algorithms and political attitudes on social media.

In the current context, marked by intense debates on disinformation, polarization, and algorithmic governance, this type of work is particularly necessary. This is not an observational analysis or a simple correlation between digital consumption and political preferences, but a randomized experiment with nearly 5,000 active US users of X in 2023. From the point of view of internal validity, the work is rigorous and technically well founded. The researchers did not limit themselves to asking what participants thought, but examined what they saw and how they acted: in addition to measuring stated attitudes, they made a remarkable methodological effort to analyze the content actually displayed by the algorithm and changes in users' real behavior, such as the accounts they began to follow.

So what exactly does it prove? The study compares two configurations of the X feed: chronological mode (“Following”), which displays posts in chronological order from accounts already followed; and algorithmic mode (“For You”), which selects, reorders, and adds content based on internal relevance criteria. Activating the algorithmic feed for seven weeks produced statistically significant shifts in certain political attitudes toward more conservative positions, especially in public policy priorities, in the assessment of judicial investigations against Donald Trump, and in opinions regarding the war in Ukraine. However, it did not change partisan identification or affective polarization; that is, it did not change users' affiliation with Democrats or Republicans, nor did it increase the degree of sympathy or emotional rejection toward the other party.

While the algorithm did not transform established political identities or “convert” users into voters for another party, it does seem to have influenced specific positions within existing ideological frameworks. This suggests that the algorithm can modulate how certain issues are prioritized or how certain events are interpreted, but without altering underlying political identity. For the moment, therefore, we are talking about specific attitudinal shifts, not massive reconfigurations of belief systems.

The study also marks an advance over the evidence accumulated so far. Previous research had found that deactivating algorithms did not produce noticeable political effects, and the provisional conclusion was that algorithms did not seem to have a measurable direct political impact. This new work introduces a different thesis: the fact that turning off the algorithm produces no changes does not imply that the algorithm has had no prior impact. According to the authors, initial exposure can influence whom we decide to follow, thereby persistently reconfiguring the user's information environment. The algorithm not only orders information; it can help shape the structure of the future exposure network. This persistence hypothesis is probably its most significant contribution.

To assess the public importance of the implications of this study, it is also necessary to gauge the scope of the platform. According to the Pew Research Center (2025), approximately 21% of adults in the United States claim to use X. This figure, which should be read with caution given the difficulty of accurately estimating the actual number of active users in an environment with fake accounts, intermittent inactivity, and continuous entries and exits, is equivalent in absolute terms to between 54 and 55 million Americans. When expressed in raw figures, the scale becomes more visible: we are talking about tens of millions of citizens in a democracy of 330 million inhabitants. Furthermore, the political influence of X is greater than that percentage suggests. Journalists, political leaders, and institutional officials are very present on the platform, transferring what happens there to traditional media, parliamentary debates, and other networks. As a result, content circulating within X can end up setting the general public agenda, even among people who do not directly use the platform. This amplification effect explains why a network with lower penetration than others can have considerable political weight.

In this context, the results of the study take on a broader dimension. If the algorithm can influence, even incrementally, how certain issues are prioritized or how certain events are interpreted within the platform, those small shifts are not necessarily confined to the digital environment. To the extent that X functions as a central node for the production and circulation of political discourse, algorithmic dynamics can indirectly influence the general media agenda. The persistence hypothesis does not point to a massive transformation of public opinion, but it does suggest that the architecture of the system can intervene in a space that has effects beyond its direct users.

Could the same thing happen in other countries? The answer requires caution. The data corresponds to the United States in 2023 and to a specific political context. The study shows that, in that specific period, the algorithm gave relatively higher priority to conservative content. In other political systems, with different ideological configurations, the direction of the effect could vary. There is insufficient empirical basis to automatically generalize the result to other countries. Precisely for this reason, a niche for comparative research opens up here to assess the extent to which these effects depend on technical design, political context, or other elements.

Therefore, certain limitations of the study, pointed out by the authors themselves, must be taken into account. Although the experimental design is robust, the existence of unobserved factors, such as personality traits or stronger ideological predispositions that influence both the initial choice of feed type and the response to treatment, cannot be completely ruled out. In addition, the sample is composed of active users; the effects could be smaller among occasional users, who receive a lower intensity of exposure.

In short, the research is rigorous and provides relevant evidence. It does not demonstrate massive manipulation or strong technological determinism, but it does suggest something more structural: that algorithmic architecture can have an incremental influence on the formation of specific political attitudes, especially when it intervenes in the selection of the sources that make up the user's information environment. Its importance lies in the finding that the design of digital infrastructures is part of the structural conditions in which political opinions are formed. When these infrastructures potentially reach more than 50 million citizens in a single democracy, the issue ceases to be merely technical and becomes a matter of public relevance.

The author has declared they have no conflicts of interest
Publications
Journal
Nature
Authors

Germain Gauthier et al. 

Study types:
  • Research article
  • Peer reviewed