LLM temperature is a hyperparameter (typically 0 to 2) that controls the randomness and creativity of an AI's output by adjusting the probability distribution of predicted tokens. Lower temperatures (0-0.3) produce deterministic, focused, and factual results, while higher temperatures (>0.8) create more diverse, random, or "creative" text.
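Concretely, temperature rescales the model's raw logits before the softmax turns them into token probabilities. A minimal sketch (using the standard temperature-scaled softmax; the symbols z_i for logits and T for temperature are notational assumptions, not taken from the linked articles):

```latex
p_i = \frac{\exp(z_i / T)}{\sum_j \exp(z_j / T)}
```

As T approaches 0 the distribution collapses onto the highest-logit token (near-deterministic output), while T > 1 flattens the distribution and makes low-probability tokens more likely to be sampled.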
https://dagshub.com/glossary/llm-temperature/
The following article shows how to use a Bash script to interact with Ollama:
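As a rough sketch of that kind of script, the snippet below builds a request for Ollama's local generate API with an explicit temperature. The model name, prompt, and default endpoint (`http://localhost:11434`) are assumptions; adjust them to your setup, and note that the actual request line is left commented out since it needs a running Ollama instance.

```shell
#!/usr/bin/env sh
# Sketch: set the sampling temperature when calling a local Ollama server.
# Assumptions: Ollama at its default port and a model named "llama3"
# already pulled -- change MODEL and OLLAMA_URL for your environment.

OLLAMA_URL="http://localhost:11434/api/generate"
MODEL="llama3"
TEMPERATURE="0.2"   # low temperature -> focused, near-deterministic output
PROMPT="Explain LLM temperature in one sentence."

# Build the JSON body; Ollama accepts the temperature under "options".
PAYLOAD=$(printf '{"model":"%s","prompt":"%s","stream":false,"options":{"temperature":%s}}' \
  "$MODEL" "$PROMPT" "$TEMPERATURE")

echo "$PAYLOAD"

# Uncomment to actually send the request (requires Ollama to be running):
# curl -s "$OLLAMA_URL" -d "$PAYLOAD"
```

Raising `TEMPERATURE` toward 1.0 or above makes repeated runs of the same prompt diverge more, which is an easy way to observe the effect described above.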
https://www.inferable.ai/blog/posts/model-temperature-first-principles