79617971

Date: 2025-05-12 13:48:43

The prompt you define in LangChain is just the content.

However, the chat template defines the structure: how that content is wrapped and presented to the model. That structure can strongly influence behavior.

This issue is likely caused by the chat template used with your model. Some templates are designed to encourage tool use. For example, the LLaMA 3.2 template includes the instruction:

"Given the following functions, please respond with a JSON for a function call with its proper arguments that best answers the given prompt."

This can effectively force the model to call tools, even when unnecessary.

To fix this, adjust your chat template to clearly state that tools should only be used when necessary, and that direct answers should be preferred when possible.
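As a minimal sketch of that fix (the template text and helper name here are illustrative, not the actual LLaMA 3.2 template; adapt it to whatever chat template your model serves), you can prepend explicit guidance to the system portion of the template so it no longer pushes the model toward a tool call on every turn:

```python
# Illustrative guidance text; tune the wording for your model.
TOOL_GUIDANCE = (
    "Only call a function when the user's request cannot be answered "
    "directly. Prefer a plain-text answer whenever possible."
)

def build_system_prompt(base_template: str) -> str:
    """Inject tool-use guidance ahead of the existing system text."""
    return f"{TOOL_GUIDANCE}\n\n{base_template}"

# Example with a paraphrase of the tool-forcing instruction quoted above:
base = (
    "Given the following functions, please respond with a JSON for a "
    "function call with its proper arguments that best answers the "
    "given prompt."
)
print(build_system_prompt(base))
```

The key point is ordering: the "answer directly when possible" guidance should appear before (or replace) the instruction that demands a JSON function call, so the model sees it as the governing rule.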

Posted by: guibs35