Guides OS-specific installation and configuration of Ollama for local AI models, including verification, model pulls, API testing, and GPU setup.
I'll help you install and configure Ollama for free, local AI model deployment.
Let me check your system:
uname -s
# macOS: install with Homebrew (recommended)
brew install ollama
# Start Ollama service
brew services start ollama
# Linux: official installation script
curl -fsSL https://ollama.com/install.sh | sh
# Start service
sudo systemctl start ollama
sudo systemctl enable ollama
Windows: download and run the installer from https://ollama.com/download/windows
Verify the installation:
ollama --version
# General purpose (3B model, ~2GB download)
ollama pull llama3.2
# Code generation (default 7B model, ~4GB download)
ollama pull codellama
# Fast and efficient (7B model, ~4GB download)
ollama pull mistral
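To confirm the pulls succeeded, you can also query the server's /api/tags endpoint, which lists every locally available model (equivalent to `ollama list`). A minimal stdlib-only sketch; `OLLAMA_URL` and the helper names are my own:

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434"  # default Ollama endpoint

def model_names(tags_json: dict) -> list[str]:
    """Extract model names from a GET /api/tags response body."""
    return [m["name"] for m in tags_json.get("models", [])]

def list_local_models() -> list[str]:
    """Ask the local server which models have been pulled."""
    with urllib.request.urlopen(f"{OLLAMA_URL}/api/tags") as resp:
        return model_names(json.loads(resp.read()))
```

Calling `list_local_models()` requires a running Ollama server; `model_names` itself is pure and easy to test.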
# Interactive chat
ollama run llama3.2
# Or quick test
echo "Write a hello world in Python" | ollama run llama3.2
Ollama runs on http://localhost:11434 by default.
Test the API:
curl http://localhost:11434/api/generate -d '{
  "model": "llama3.2",
  "prompt": "Why is the sky blue?",
  "stream": false
}'
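The same request can be made from Python with only the standard library. Setting `"stream": false` in the body makes the server return one JSON object instead of a line-delimited stream; the function names here are illustrative, not part of Ollama itself:

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434"  # default Ollama endpoint

def build_generate_request(model: str, prompt: str, stream: bool = False) -> dict:
    """Body for POST /api/generate; stream=False yields a single JSON reply."""
    return {"model": model, "prompt": prompt, "stream": stream}

def generate(model: str, prompt: str) -> str:
    """Send a prompt to the local server and return the completion text."""
    body = json.dumps(build_generate_request(model, prompt)).encode()
    req = urllib.request.Request(
        f"{OLLAMA_URL}/api/generate",
        data=body,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

For example, `generate("llama3.2", "Why is the sky blue?")` returns the completion text (the server must be running).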
# Check GPU availability
nvidia-smi
# Ollama automatically uses CUDA if available
# Metal acceleration is automatic on M1/M2/M3
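To check whether a loaded model actually landed on the GPU, recent Ollama versions expose a /api/ps endpoint (the same data `ollama ps` shows) that reports per-model VRAM usage. A hedged sketch, assuming that endpoint is available; `gpu_loaded` is my own helper name:

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434"  # default Ollama endpoint

def gpu_loaded(ps_json: dict) -> list[str]:
    """Names of loaded models reporting nonzero VRAM use (i.e. on the GPU)."""
    return [m["name"] for m in ps_json.get("models", [])
            if m.get("size_vram", 0) > 0]

def running_models() -> dict:
    """GET /api/ps lists currently loaded models (recent Ollama versions)."""
    with urllib.request.urlopen(f"{OLLAMA_URL}/api/ps") as resp:
        return json.loads(resp.read())
```

A model that shows up with `size_vram` of 0 is running on the CPU, which usually means the GPU runtime was not detected.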
You can now use Ollama in your projects:
Python: pip install ollama
Node.js: npm install ollama
REST API: http://localhost:11434

Cost savings: you just eliminated $30-200/month in API fees! 🎉
Need help integrating Ollama into your project? Ask me!