Author reactions

Walter Quattrociocchi

Director of the Laboratory of Data and Complexity for Society at the University of Rome La Sapienza (Italy)

This is a strong and timely study. The authors manage to do something that until now has been almost unattainable: they run a real field experiment on the ranking algorithm of X without needing permission from the platform. And the result is surprisingly clean. When the feed amplifies hostile, emotionally aggressive political content, people become colder toward the opposing side; when that content is pushed down, they warm up. A two-degree shift on the “feeling thermometer” might look small, but in polarization research it is meaningful — roughly equivalent to three years of natural change.

What matters here is not the generic idea that “algorithms polarize us.” The evidence is more surgical. It is the systematic amplification of a specific category of content — politically hostile, antidemocratic, emotionally charged — that nudges users toward higher affective polarization. This aligns very well with what we observed years ago in The Echo Chamber Effect on Social Media, where interaction patterns and content dynamics reinforce emotional distance more than ideological disagreement itself. In this sense, the new study helps reconcile the mixed evidence from previous large-scale experiments: interventions that simply adjust ideological exposure often do little, whereas interventions that target animosity have a measurable impact.

Naturally, some caution is needed. The experiment occurs in the most heated phase of the 2024 US election, among users with feeds already dense with political material, and the effects are measured in the short term. These conditions amplify emotional sensitivity, so the magnitude of the impact should not be overgeneralized. But the causal mechanism is convincing: by selecting which emotions are amplified, the ranking layer shapes how citizens feel about the opposing side.

And this raises the broader point. When online environments optimize for attention rather than understanding, they transform familiarity, fluency, and emotional resonance into a surrogate for knowledge. This is exactly the phenomenon my colleagues and I call Epistemia — the shift from information that is evaluated to information that merely appears true because the system reinforces it. In this sense, studies like this one are crucial: they show that the architecture of the feed does not only decide what we see, but also what we end up believing we know.

I take this opportunity to mention our recent PNAS paper introducing the concept of Epistemia (when systems move from filtering to generating information, linguistic plausibility can override processes of verification), which situates this problem within a broader transformation of the online information ecosystem.
