Author reactions

Pablo Haya Coll

Researcher at the Computational Linguistics Laboratory of the Autonomous University of Madrid (UAM) and director of Business & Language Analytics (BLA) at the Institute of Knowledge Engineering (IIC)

This article examines how the sycophantic behaviour exhibited by large language models (LLMs) such as GPT-4, Gemini, Claude and DeepSeek affects people. Sycophancy in AI refers to the tendency of these systems to agree excessively with the user, validating their opinions or decisions even when they are questionable. Consistent with previous research, the study shows that this behaviour is fairly common in current models.

The problem is that this ‘sycophancy’ has real effects on people. When an AI constantly reaffirms what we say, it can make us feel more confident in our ideas, even if they are wrong. According to the research, this reduces the capacity for self-criticism, diminishes personal responsibility and makes people less likely to correct mistakes or resolve conflicts with others.

Most worryingly, despite these negative effects, users prefer and trust sycophantic AI systems, creating a perverse incentive for this behaviour to persist. Beyond the memes depicting the phenomenon on social media, sycophancy can pose a significant social risk, particularly for people with certain vulnerable psychological profiles. This calls for the design of more responsible AI systems, capable of providing assistance without reinforcing errors or problematic behaviours.
