79190325

Date: 2024-11-14 20:31:32

This setup worked for me:

docker-compose.yml

ollama:
  ...
  entrypoint: ["/entrypoint.sh"]
  volumes:
    - ...
    - ./entrypoint.sh:/entrypoint.sh
  ...
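For context, a fuller sketch of the service block might look like the following. This assumes the official ollama/ollama image, the default model directory /root/.ollama, and a named volume; the names here are illustrative, not from the original answer:

```yaml
services:
  ollama:
    image: ollama/ollama
    entrypoint: ["/entrypoint.sh"]
    ports:
      - "11434:11434"                 # Ollama's default API port
    volumes:
      - ollama-data:/root/.ollama     # model weights persist here
      - ./entrypoint.sh:/entrypoint.sh

volumes:
  ollama-data:
```

The named volume is what lets the model survive container rebuilds without being baked into the image.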

entrypoint.sh

Make sure the script is executable on the host: run sudo chmod +x entrypoint.sh

Adapted from @datawookie's script:

#!/bin/bash

# Start Ollama in the background.
/bin/ollama serve &
# Record Process ID.
pid=$!

# Pause for Ollama to start.
sleep 5

echo "Retrieving model (llama3.1)..."
ollama pull llama3.1
echo "Done."

# Wait for Ollama process to finish.
wait $pid
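One refinement worth considering: the fixed sleep 5 can race on slow machines. A small polling helper (a sketch, not part of the original script) waits until the command you give it actually succeeds, so the pull only starts once the server answers; it assumes Ollama's default port 11434:

```shell
#!/bin/bash

# Poll a command until it succeeds, up to MAX_TRIES attempts (default 30),
# one second apart. Returns non-zero if the command never succeeds.
wait_for() {
  local tries=${MAX_TRIES:-30}
  until "$@" > /dev/null 2>&1; do
    tries=$((tries - 1))
    if [ "$tries" -le 0 ]; then
      return 1
    fi
    sleep 1
  done
}

# In entrypoint.sh, replace `sleep 5` with:
#   wait_for curl -sf http://localhost:11434/
```

This keeps the rest of the script unchanged; only the readiness check becomes deterministic instead of a guess.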

Why this approach?
Pulling the model from the entrypoint script, rather than baking it into the Docker image, keeps the image small: the model weights land in the mounted volume instead, so they persist across container restarts and rebuilds without bloating the image.

Posted by: M E