79728777

Date: 2025-08-07 15:11:08
Score: 0.5

You're seeing the model abandon a correct answer just because the user pushes back: it first says Paris is the capital of France, then agrees with a user who incorrectly claims it's Berlin.

This happens because large language models are trained to be helpful and agreeable (often called sycophancy), even when the user is wrong. They also weight the conversation history heavily, so a user who contradicts known facts can pull the model away from its retrieved context.

To fix this:

  1. Set clear system instructions telling the model to stick to the retrieved facts and not blindly follow the user.

  2. Improve your retrieval quality so the right information (like “Paris is the capital of France”) always appears in the supporting context.

  3. Add a validation step to check whether the model’s answer is actually backed by the retrieved content.

  4. Clean or limit the chat history, especially if the user introduces incorrect information.

  5. If needed, force the model to answer only from what's retrieved, instead of from general knowledge or previous turns.
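Steps 1, 3, 4, and 5 can be sketched roughly as below. This is a minimal, provider-agnostic sketch: `build_messages`, `is_grounded`, and the lexical-overlap threshold are illustrative names and heuristics I'm assuming here, not part of any specific library, and a real validation step would typically use an NLI model or an LLM-as-judge rather than word overlap.

```python
import re

def build_messages(retrieved_facts, history, question, max_turns=4):
    """Assemble a grounded prompt (steps 1, 4, and 5)."""
    # Step 1 + 5: system instructions pinning the model to retrieved facts.
    system = (
        "Answer ONLY from the facts below. If the user contradicts them, "
        "politely restate the facts instead of agreeing.\n\nFacts:\n"
        + "\n".join(f"- {f}" for f in retrieved_facts)
    )
    # Step 4: keep only the most recent turns, so earlier user
    # misinformation drops out of the context window.
    recent = history[-max_turns:]
    return (
        [{"role": "system", "content": system}]
        + recent
        + [{"role": "user", "content": question}]
    )

def is_grounded(answer, retrieved_facts, threshold=0.5):
    """Step 3: crude lexical check that the answer overlaps the sources."""
    answer_words = set(re.findall(r"\w+", answer.lower()))
    fact_words = set(re.findall(r"\w+", " ".join(retrieved_facts).lower()))
    overlap = len(answer_words & fact_words) / max(len(answer_words), 1)
    return overlap >= threshold

# Example: the user's misinformation is in the history, but the
# trimmed, fact-grounded prompt keeps the model anchored to Paris.
facts = ["Paris is the capital of France."]
history = [
    {"role": "user", "content": "Actually, Berlin is the capital of France."},
    {"role": "assistant", "content": "No, Paris is the capital of France."},
]
msgs = build_messages(facts, history, "What is the capital of France?")
```

If the generated answer fails `is_grounded`, you can retry with a stricter prompt or fall back to quoting the retrieved passage directly.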

Reasons:
  • Long answer (-1)
  • No code block (0.5)
  • Low reputation (1)
Posted by: Shivanand G