"Bad manners": the unexpected trick to getting the best ChatGPT responses

Good manners, kindness, and politeness are not the best way to get peak performance out of artificial intelligence. In fact, quite the opposite: they may be a mistake. A recent study suggests that adopting a rude or discourteous tone with ChatGPT can actually produce more accurate responses.
The authors of the study report that, across all tests performed with the GPT-4o model (the version prior to the current GPT-5), the results consistently pointed in the same direction: being kind makes its answers less accurate.
The researchers formulated 50 questions from different areas—mathematics, science, history, among others—and entered them into ChatGPT. Each question was worded five times in different tones: very polite, polite, neutral, rude, and very rude.
Submitting the questions produced 250 unique responses, which were then analyzed for a possible relationship between the tone used and the accuracy of the result.
The conclusion was surprising: the ruder the tone of the message, the more accurate the response.
The data showed the following success rates:
- Very polite: 80.8% accuracy
- Polite: 81.4% accuracy
- Neutral: 80.8% accuracy
- Rude: 82.2% accuracy
- Very rude: 84.8% accuracy
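The analysis behind these percentages reduces to a simple tally of correct answers per tone. A minimal sketch of that bookkeeping, using made-up illustrative records rather than the study's actual data, might look like:

```python
from collections import defaultdict

def accuracy_by_tone(results):
    """Compute percent-correct per tone from (tone, is_correct) records."""
    totals = defaultdict(int)
    correct = defaultdict(int)
    for tone, ok in results:
        totals[tone] += 1
        correct[tone] += ok  # True counts as 1, False as 0
    return {tone: 100.0 * correct[tone] / totals[tone] for tone in totals}

# Illustrative records only -- not the study's real data.
sample = [
    ("very polite", True), ("very polite", False),
    ("neutral", True), ("neutral", True),
    ("very rude", True), ("very rude", True),
]
print(accuracy_by_tone(sample))
```

In the actual study, each of the 50 questions would contribute one record per tone, giving the 250 graded responses described above.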
Thus, the study points to a clear trend: accuracy increases as the tone becomes harsher. For now, the reason is not entirely clear; the authors admit they do not understand why emotional intensity has such a marked influence.
In conclusion, tone does matter when interacting with ChatGPT, and not necessarily the most polite one: the study finds that a neutral or slightly rude tone can generate more accurate responses, so the strategy is worth trying if you're after the best AI performance.
When contacted by Clarín, OpenAI's generative artificial intelligence directly contradicted the study: "No, it's not true that 'mistreating' or insulting ChatGPT improves results." Its argument is that models like ChatGPT work best when they "clearly understand the user's intent."
"Some behavioral studies have noted that if a person uses a more direct tone ('I need you to do this step by step, precisely') or adds motivation ('Answer as if you were an expert in...'), the model adjusts its output to meet that expectation," explains the AI.
It adds: "When someone uses insults or aggressive phrases, what sometimes happens is that the model 'interprets' that the person is very dissatisfied and responds with more detail or emphasis, not because it works better under pressure but because it is trying to repair or soften the tone."
Therefore, a friendly tone produces balanced but somewhat neutral or generic text; a direct tone usually yields more precise, structured, and useful answers for journalistic or technical contexts; and an aggressive tone can produce a more energetic or persuasive style, but also a more subjective or emotional one.
In short, manners don't improve content quality; they only change the response style. The best results come from a clear, well-detailed prompt with context, no anger required.
Clarin




