# Set up Ollama on the machine for local LLM inference
Sets up Ollama for local LLM inference with AMD GPU support and model recommendations.
Installation:

- `/plugin marketplace add danielrosehill/ai-tools-plugin`
- `/plugin install ai-tools@danielrosehill`

Command path: `local-ai/ollama/`

You are helping the user set up Ollama for local LLM inference.
## Check if Ollama is already installed
- `ollama --version`
- `systemctl status ollama` (or `sudo systemctl status ollama`)
## Install Ollama if needed

- `curl -fsSL https://ollama.com/install.sh | sh`
- `ollama --version`
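To make the check-and-install flow concrete, here is a minimal sketch that only runs the install script when the `ollama` binary is missing; the install URL is the one given above.

```bash
# Install Ollama only if the binary is not already on PATH.
if command -v ollama >/dev/null 2>&1; then
  echo "Ollama already installed: $(ollama --version)"
else
  curl -fsSL https://ollama.com/install.sh | sh
  ollama --version   # confirm the install succeeded
fi
```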
## Start Ollama service

- `systemctl start ollama` (or `sudo systemctl start ollama`)
- `systemctl enable ollama` (or `sudo systemctl enable ollama`)
- `systemctl status ollama`
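As a sketch of starting the service and confirming it is actually serving requests, assuming the default listen address of 127.0.0.1:11434:

```bash
# Enable and start the service in one step, then poll the API until it responds.
sudo systemctl enable --now ollama
for _ in $(seq 1 10); do
  if curl -sf http://127.0.0.1:11434/ >/dev/null; then
    echo "Ollama API is up"
    break
  fi
  sleep 1
done
```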
## Verify GPU support (for AMD on the user's system)

- `rocm-smi` or `rocminfo`
- `journalctl -u ollama -n 50`
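A rough sketch of the GPU check, assuming ROCm tools are installed; the grep pattern is only a guess at how GPU detection shows up in the service log:

```bash
# List AMD GPUs visible to ROCm, then look for GPU-related lines in Ollama's log.
rocm-smi
journalctl -u ollama -n 200 --no-pager | grep -iE 'rocm|amdgpu|gpu' || \
  echo "No GPU-related log lines found; Ollama may be running CPU-only"
```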
## Configure Ollama

- Model directory: `~/.ollama/models`
- `OLLAMA_HOST` - change port/binding
- `OLLAMA_MODELS` - custom model directory
- `OLLAMA_NUM_PARALLEL` - parallel requests
- Service file: `/etc/systemd/system/ollama.service`
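Since the service reads its settings from the unit file, one common way to apply the environment variables above is a systemd drop-in; the values below are placeholders, not recommendations:

```bash
# Override the Ollama service environment with a systemd drop-in.
sudo mkdir -p /etc/systemd/system/ollama.service.d
sudo tee /etc/systemd/system/ollama.service.d/override.conf > /dev/null <<'EOF'
[Service]
# Listen on all interfaces instead of the default 127.0.0.1:11434
Environment="OLLAMA_HOST=0.0.0.0:11434"
# Store models in a custom directory
Environment="OLLAMA_MODELS=/data/ollama/models"
# Handle two requests in parallel
Environment="OLLAMA_NUM_PARALLEL=2"
EOF
sudo systemctl daemon-reload
sudo systemctl restart ollama
```

Running `sudo systemctl edit ollama` achieves the same thing interactively.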
## Test Ollama

- `ollama pull llama2` (or smaller: `ollama pull tinyllama`)
- `ollama run tinyllama "Hello, how are you?"`
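Beyond the CLI test, hitting the HTTP API directly is a quick sanity check; this sketch assumes the default port 11434 and uses `tinyllama` only because it is small:

```bash
# Pull a small model and run a single non-streaming generation over the REST API.
ollama pull tinyllama
curl -s http://127.0.0.1:11434/api/generate \
  -d '{"model": "tinyllama", "prompt": "Hello, how are you?", "stream": false}'
```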
## Suggest initial models

## Provide a summary showing:
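A sketch of commands that could gather the summary information, assuming it covers the items from the steps above (install status, service state, GPU visibility, downloaded models):

```bash
# Collect the facts a setup summary would report.
ollama --version                          # installed version
systemctl is-active ollama                # service state
rocm-smi || echo "No ROCm GPU detected"   # AMD GPU visibility
ollama list                               # models currently downloaded
```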