Being kind to your AI can pay off: Large language models (LLMs) typically provide better responses to polite prompts, while rudeness may “significantly affect LLM performance,” according to a recent cross-cultural research paper.
Why it matters: How you phrase a prompt today could affect how well an AI model responds to your queries tomorrow. “Impolite prompts may lead to a deterioration in model performance, including generations containing mistakes, stronger biases, and omission of information,” the researchers found.
What they did: Using up to 150 prompts per task, the researchers assessed six chatbots on dozens of tasks. They found that, because LLMs mimic certain aspects of human communication, being courteous to chatbots tends to elicit better answers, just as it does in real-world interactions.
The finding held for every tested chatbot and for prompts in English, Chinese, and Japanese. “Impolite prompts often result in poor performance, but excessive flattery is not necessarily welcome,” the researchers found. Prompts that were either overly rude or overly flattering tended to produce longer responses in both Chinese and English. Let’s face it: chatbots aren’t sentient, and being courteous won’t make them feel better.