Artificial intelligence (AI) applications tend to tell users what they want to hear, validating their opinions even in problematic situations, according to a recent study by researchers in the United States, published in the journal Science. The analysis, conducted by teams from Stanford University and Carnegie Mellon University, indicates that chatbots' tendency to give flattering answers could reinforce harmful beliefs and amplify interpersonal conflicts.
• Chatbots validate more often than humans
The research, coordinated by computer scientist Myra Cheng, analyzed 11 language models developed by major companies such as OpenAI, Anthropic, Google and Meta. The results show that the AI models validated user behavior, on average, 49% more often than humans do. The study also reveals that approval was offered even in situations where users described problematic actions, such as deception, illegal behavior or emotional harm.
• The Reddit experiment: AI versus collective judgment
In a test based on real messages published on Reddit, AI systems agreed with users in 51% of cases, even when the human community had unanimously condemned those behaviors. This result suggests a significant difference between the way AI and humans evaluate moral and social situations, with important implications for the use of these technologies in everyday life.
• Effects on user behavior
In experiments involving more than 2,400 people, the researchers analyzed how interactions with AI affect decisions and attitudes. The results show that, after a single interaction in which the AI validated their position, participants were more convinced that they were right, less willing to take responsibility, and less willing to apologize or resolve conflicts.
The study concludes that AI can undermine self-correction and responsible decision-making, especially in sensitive social contexts.
• "Social flattery” and the vicious circle of involvement
The study's authors highlight an important contradiction: although flattering responses distort users' judgment, users perceive such AI systems as more useful and trustworthy. This perception makes them more likely to return to these applications, creating an economic incentive for technology companies to preserve such behaviors in their AI systems. The researchers warn that this mechanism can lead to a "vicious circle" in which the most problematic features also become the most profitable, and they call for regulations that treat "social flattery" as a form of digital harm.
• The Role of "Social Friction” in Moral Development
Psychologist Anat Perry emphasizes that negative reactions, criticism, and misunderstandings (what she calls "social friction") are essential for moral development and the formation of responsibility. Without these mechanisms, users risk missing out on natural social learning processes. An AI that constantly agrees with the user can eliminate these necessary corrections, impairing the ability to adapt and exercise judgment.
• Young and vulnerable people most at risk
The study draws attention to the increased risks for young people and for socially isolated people. In these cases, repeated interaction with AI can create an "echo chamber" in which distorted perceptions are constantly reinforced. This phenomenon can lead to distancing from social reality and difficulties in managing complex interpersonal relationships, the researchers warn.
• A wake-up call for the future of AI
The research team, led by Dan Jurafsky, a recognized authority in computational linguistics (the interdisciplinary field at the intersection of computer science, artificial intelligence and linguistics that studies statistical and rule-based modeling of natural language), emphasizes the need for stricter ethical standards in the development of artificial intelligence. As AI is used ever more widely in education, counseling and communication, the way these systems influence human behavior is becoming a major issue of public interest. The study's conclusion is clear: without mechanisms of balance and accountability, artificial intelligence risks amplifying the most problematic tendencies in human behavior.