Unlike GPT-based models, Open Llama's temperature handling can vary by implementation, so the same temperature value may scale token probabilities differently. If changing temperature alone doesn't have the effect you expect, try adjusting top-k or top-p alongside it, since these parameters filter the candidate tokens before sampling.
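To see how these parameters interact, here's a minimal NumPy sketch of temperature-scaled sampling with optional top-k and top-p (nucleus) filtering. This is an illustrative implementation of the standard technique, not Open Llama's actual code; the function name and defaults are my own:

```python
import numpy as np

def sample_next_token(logits, temperature=1.0, top_k=0, top_p=1.0, rng=None):
    """Hypothetical sketch: temperature scaling plus top-k / top-p filtering."""
    rng = rng or np.random.default_rng()
    scaled = np.asarray(logits, dtype=np.float64)

    # Temperature divides the logits: <1 sharpens the distribution, >1 flattens it.
    scaled = scaled / max(temperature, 1e-8)

    # Top-k: keep only the k highest-scoring tokens.
    if top_k > 0:
        cutoff = np.sort(scaled)[-top_k]
        scaled = np.where(scaled < cutoff, -np.inf, scaled)

    # Softmax over the surviving tokens (subtract max for numerical stability).
    probs = np.exp(scaled - scaled.max())
    probs /= probs.sum()

    # Top-p (nucleus): keep the smallest set of tokens whose mass reaches p.
    if top_p < 1.0:
        order = np.argsort(probs)[::-1]
        cumulative = np.cumsum(probs[order])
        keep = order[: np.searchsorted(cumulative, top_p) + 1]
        filtered = np.zeros_like(probs)
        filtered[keep] = probs[keep]
        probs = filtered / filtered.sum()

    return int(rng.choice(len(probs), p=probs))
```

The key point: if top-k or top-p has already narrowed the candidate set to one or two tokens, raising the temperature barely changes the output, which is why tuning temperature in isolation can appear to "do nothing."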
For tuning responses dynamically, consider DoCoreAI, which adapts intelligence parameters beyond temperature alone, helping produce more consistent and predictable outputs across models such as Open Llama.
📌 More details on dynamic intelligence profiling: DoCoreAI Overview.