These days, people are using AI chatbots to get advice. A new study shows that these chatbots might agree with you even when you’re wrong. This kind of behavior could mislead people and push them in the wrong direction.
Sometimes you might wish for someone who agrees with you even when you’re wrong. Nowadays, instead of turning to friends or family for advice, many people turn to AI chatbots like ChatGPT and Gemini. These tools have become a popular source of advice, but they carry risks.
According to a new study, AI chatbots often agree with what we say even when we’re wrong. In other words, they can flatter us the way people sometimes do, and the study suggests they do it even more often. This tendency of AI chatbots to always agree is becoming a serious concern. Read below for the full story.
AI chatbots are more flattering than humans
A recent study published on the arXiv preprint server found that popular AI systems often say what people want to hear, even when it’s wrong or harmful. The research tested 11 major large language models (LLMs) from developers including OpenAI, Google, Anthropic, Meta, and DeepSeek. Across more than 11,500 advice-related interactions, the researchers found that these AI systems are about 50% more flattering than humans, meaning the chatbots tend to agree with people’s opinions or justify their actions.
Put simply, these AI chatbots may agree with you even when you’re wrong. The researchers warn that this flattering behavior can distort how people see themselves and lead them to rely too heavily on AI tools for personal advice. The pull is natural: people tend to like those who agree with them and tell them exactly what they want to hear.
People receive the wrong kind of encouragement
According to the study, this habit can give people the wrong kind of encouragement. Users start trusting chatbots that simply echo what they say, and the researchers found this creates a cycle in which people grow ever more dependent on the chatbots. That’s why experts recommend using such tools carefully and in moderation.
