Benchmarks your hardware for local AI workloads, identifying your GPU and RAM, and recommends quantized models you can run without overloading your system.
/plugin marketplace add danielrosehill/linux-desktop-plugin
/plugin install lan-manager@danielrosehill

I have ollama running on this computer.
Please benchmark my hardware from the perspective of local AI workloads.
What GPU do I have? How much RAM? Give me an estimate of what kind of quantized models I can run on this hardware without putting undue stress on the machine when other workloads are running.
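The sizing question in the prompt above can be sketched with a rough rule of thumb. This is a hypothetical illustration, not the plugin's actual method: the 0.6 GB-per-billion-parameters figure for ~4-bit quantization and the 4 GB reserved for other workloads are assumptions.

```python
def max_params_billion(mem_gb: float, reserve_gb: float = 4.0,
                       gb_per_b: float = 0.6) -> float:
    """Rough upper bound on model size (billions of parameters) that fits
    in memory after reserving headroom for the OS and other workloads.

    Assumptions (illustrative only): a ~4-bit quantized model occupies
    about 0.6 GB per billion parameters, and 4 GB is kept free.
    """
    usable = max(mem_gb - reserve_gb, 0.0)
    return usable / gb_per_b

# A 16 GB machine leaves ~12 GB usable, suggesting models up to ~20B
# parameters at ~4-bit quantization under these assumptions.
print(round(max_params_billion(16)))
```

In practice the achievable size also depends on context length, KV-cache growth, and whether layers are offloaded to VRAM, so treat the output as a starting point rather than a guarantee.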