Manage Ollama models, GPU allocation, and model serving with RX 7900 XTX optimization
Install with `/plugin marketplace add Lobbi-Docs/claude` followed by `/plugin install ahling-command-center@claude-orchestration`.

Usage: `<operation> [model-name] [--gpu-layers N]`

Manage Ollama models including pulling, listing, running, GPU optimization for the AMD RX 7900 XTX, model creation, and performance tuning.

## Your Task

You are managing Ollama for local LLM serving on an AMD GPU. Handle model operations, GPU allocation, performance optimization, and custom model creation (example CLI mappings are sketched after the argument list below).

## Arguments

- `operation` (required): Operation to perform (list, pull, run, create, delete, show, gpu-status, optimize)
- `model-name` (optional): Model name (e.g., llama2, mistral, codellama)
- `--gpu-layers` (optional): Number of GPU layers (default: 35 for RX 7900 XTX)
- `--context-length` (option...
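As a rough sketch, the basic operations above map onto the Ollama CLI as follows. The model names are placeholders, and `rocm-smi` is ROCm tooling for the gpu-status path rather than part of Ollama itself:

```bash
# List locally installed models
ollama list

# Pull a model from the Ollama registry
ollama pull llama2

# Run a model interactively
ollama run mistral

# Show details for a model (parameters, template, license)
ollama show codellama

# Delete a local model
ollama rm llama2

# Check AMD GPU status and utilization via ROCm tooling
rocm-smi
```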
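For the `create` and `optimize` operations, GPU offload and context length are typically set through a Modelfile. The sketch below assumes the 35-layer default mentioned above and an illustrative 4096-token context rather than benchmarked RX 7900 XTX settings, and the model name `llama2-xtx` is hypothetical:

```bash
# Write a Modelfile with GPU and context tuning parameters
# (num_gpu and num_ctx are Ollama Modelfile parameters; the values are illustrative)
cat > Modelfile <<'EOF'
FROM llama2
PARAMETER num_gpu 35
PARAMETER num_ctx 4096
EOF

# Build a custom model from the Modelfile (the name is arbitrary)
ollama create llama2-xtx -f Modelfile

# Verify the applied parameters
ollama show llama2-xtx
```

Here `num_gpu` plays the role of the `--gpu-layers` argument (number of layers offloaded to the GPU) and `num_ctx` corresponds to `--context-length`.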