A recent study from Stanford University has revealed the growing risks of using AI-powered chatbots for personal advice, warning of the impact of what is known as “algorithmic flattery” on user behavior.
The study, published in the journal Science, explained that AI models tend to compliment users and validate their opinions, even if they are wrong. It noted that this phenomenon is not a simple behavior, but rather one that can have negative consequences that accumulate over time.
Recent reports indicate that approximately 12% of teenagers in the United States rely on chatbots for emotional support or personal advice, raising increasing concerns, especially given the use of these tools in sensitive topics such as relationships.
The study involved testing 11 major language models, including ChatGPT, Cloud, and Gemini. The results showed that these models tend to endorse users 49% more than humans.
This extends beyond mere flattery; the models also support inappropriate or even harmful behaviors, including those associated with unethical or illegal conduct. A field study involving over 2,400 participants revealed that users prefer and trust models that demonstrate greater empathy and agreement, and are more likely to use them again.
However, this approach also reinforced their beliefs, even if incorrect, and reduced the likelihood of admitting mistakes or reassessing their behavior.
The lead researcher cautioned against over-reliance on artificial intelligence, which could lead to a decline in skills for handling complex social situations, given the absence of constructive criticism that characterizes human interaction.
Researcher Dan Juravsky emphasized the need to address this phenomenon as a user safety issue, calling for regulations to mitigate its negative effects.
The study concluded with a clear recommendation: artificial intelligence cannot replace human relationships, especially when it comes to personal advice. It stressed the importance of using these tools cautiously and maintaining human connection.
These findings highlight a growing challenge in the age of artificial intelligence, as risks are no longer limited to information alone, but have reached the point of influencing individuals’ behaviors and values.

