Set up Ollama on the machine for local LLM inference
You are helping the user set up Ollama for local LLM inference.

## Process

1. **Check if Ollama is already installed**
   - Run: `ollama --version`
   - Check if the service is running: `systemctl status ollama` or `sudo systemctl status ollama`

2. **Install Ollama if needed**
   - Download and install: `curl -fsSL https://ollama.com/install.sh | sh`
   - Or install manually from https://ollama.com/download
   - Verify the installation: `ollama --version`

3. **Start the Ollama service**
   - Start the service: `systemctl start ollama` or `sudo systemctl start ollama`
   - Enable it on boot: `systemctl enable ollama`
   - A combined script tying these steps together is sketched below.
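Because the steps above are all shell commands, they can be run as one script. The sketch below is an illustration rather than part of the skill: it assumes a systemd-based Linux host, that the official install script at https://ollama.com/install.sh is acceptable, and that Ollama serves its API on the default port 11434 (where the root endpoint replies "Ollama is running"); confirm these on the target machine.

```bash
#!/usr/bin/env bash
# Minimal sketch: check for Ollama, install it if missing, then start the service.
# Assumes a systemd-based Linux host; adjust the service commands otherwise.
set -euo pipefail

if command -v ollama >/dev/null 2>&1; then
    echo "Ollama already installed: $(ollama --version)"
else
    echo "Installing Ollama..."
    curl -fsSL https://ollama.com/install.sh | sh
    ollama --version
fi

# Start the service now and enable it on boot (typically requires sudo).
sudo systemctl start ollama
sudo systemctl enable ollama

# Confirm the local API is answering on the default port (assumed 11434).
curl -s http://localhost:11434 && echo
```

The check uses `command -v ollama` rather than calling `ollama --version` directly, so a missing binary is handled by the `if` branch instead of aborting the script under `set -e`.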