Walter Quattrociocchi
Director of the Laboratory of Data and Complexity for Society at the University of Rome La Sapienza (Italy)
These studies are methodologically strong and highly relevant. They rely on very large samples, carefully designed experiments, and transparent outcome measures of opinion change following interaction with AI systems. The central result — that short conversations with large language models can produce measurable shifts in political attitudes — is robust across different contexts and datasets. The accompanying commentary in Science correctly emphasizes that these systems are not “superhuman persuaders” in a psychological sense, but are effective because they systematically generate dense streams of information, regardless of its truthfulness.
What these papers truly contribute is not simply the demonstration that AI can persuade, but an explanation of why it does so. The evidence shows that persuasion increases primarily through information density, not through personalization, emotional manipulation, or ideological targeting. The studies also show that post-training aimed at persuasiveness accounts for far more of the effect than model size does. Most importantly, there is a clear and troubling trade-off: the same techniques that maximize persuasive impact systematically reduce factual accuracy. Persuasiveness and truthfulness do not grow together; they diverge. This means that the mechanism driving influence is not understanding, but volume and fluency.
This is where the deeper significance of these results becomes visible. These findings point toward what I call an “epistemic shift,” or Epistemia: a transformation in how knowledge operates in the public sphere. For decades, digital platforms primarily mediated information through filtering and ranking. Generative systems do something fundamentally different: they replace information retrieval with language synthesis. In doing so, they bypass the cognitive processes that normally structure judgment, verification, and evaluation. We are not dealing with machines that lie. We are dealing with systems that generate plausible language without performing any epistemic act at all.
The danger, therefore, is not only misinformation. It is something more structural. When information is generated rather than assessed, plausibility replaces judgment. These experiments show this displacement very clearly: participants are persuaded not by the quality of arguments, but by their quantity. Whether statements are true or false becomes secondary to the sheer accumulation of claims.
There are also important limitations to highlight. These experiments measure short-term shifts after brief interactions, whereas real-world exposure is continuous, immersive, and cumulative. For this reason, the measured effects should not be interpreted as upper bounds. If anything, they are likely conservative. Moreover, participants in these studies were aware that they were interacting with an AI. In everyday environments, where generative systems are integrated into search engines, messaging platforms, and productivity tools, contextual trust may amplify the effects further.
In summary, the core risk highlighted by this work is not merely that AI can influence opinions. It is that AI normalizes an informational environment where judgment is replaced by generation, and evaluation is replaced by fluency. This is not just a technological problem. It is an epistemic one — and these studies are among the first to demonstrate it empirically.