track only the LLM process used by Ollama (e.g. Mistral) using psutil. This gave me accurate CPU and RAM usage for just the language model, not my whole system.
- Finds a running process named "ollama" or "mistral"
- Measures only its CPU and memory usage
- Displays that alongside inference time
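The steps above can be sketched roughly as follows. This is a minimal illustration, not the exact script: the target process names and the inference placeholder are assumptions, and `cpu_percent` is sampled over a short interval rather than tied to a real model call.

```python
import time
import psutil

# Hypothetical process names to look for (assumption; adjust to your setup)
TARGET_NAMES = ("ollama", "mistral")

def find_llm_process():
    """Return the first running process whose name matches a target, else None."""
    for proc in psutil.process_iter(["name"]):
        name = (proc.info["name"] or "").lower()
        if any(target in name for target in TARGET_NAMES):
            return proc
    return None

def measure(proc, interval=1.0):
    """Sample CPU percent (over `interval` seconds) and resident memory in MB
    for a single process."""
    cpu = proc.cpu_percent(interval=interval)
    rss_mb = proc.memory_info().rss / (1024 ** 2)
    return cpu, rss_mb

if __name__ == "__main__":
    proc = find_llm_process()
    if proc is None:
        print("No ollama/mistral process found")
    else:
        start = time.time()
        # ... run the model inference here (placeholder) ...
        cpu, rss_mb = measure(proc)
        elapsed = time.time() - start
        print(f"CPU: {cpu:.1f}%  RAM: {rss_mb:.1f} MB  time: {elapsed:.2f}s")
```

Because `cpu_percent` and `memory_info` are scoped to one `psutil.Process` object, the numbers exclude everything else running on the machine, which is the whole point of per-process tracking.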