## 🌟 Overview
JARVIS OS is a hyper-localized, distributed orchestration engine designed to manage an entire home infrastructure, multi-agent reasoning systems, and autonomous trading bots entirely on-premise. It is built to run on multi-GPU Linux clusters, ensuring maximum privacy, zero cloud latency, and absolute sovereignty over data and AI execution.
### Key Metrics
| Metric | Value |
|---|---|
| 🤖 Autonomous Agents | 928+ active MCP handlers |
| 🎙️ Voice Recognition | Real-time Whisper CUDA pipeline (<300ms latency) |
| ⚡ Total Commands | 2,658 natively recognized voice commands |
| 🔗 Automation | 835 self-healing "Domino" execution chains |
| 📈 Trading Consensus | 6-model AI consensus executing on MEXC Futures |
| 🧠 Compute Power | 6 NVIDIA GPUs / 46GB VRAM total |
## 🏗️ System Architecture (La Creatrice)
JARVIS OS runs on a distributed architecture that spreads cognitive load across specialized hardware nodes.
```mermaid
graph TD;
    subgraph "JARVIS Cluster (Zero-Cloud)"
        M1[M1: Primary Orchestrator<br>Ryzen 5700X3D<br>Voice & Main Logic] <-->|Sync| DB[(Distributed SQLite3)]
        M2[M2: Deep Reasoning<br>3 GPUs - 24GB VRAM<br>DeepSeek-R1] <-->|Consensus| M1
        M3[M3: Fallback Node<br>Redundancy & Backup] -.->|Failover| M1
        OL1[OL1: Edge Worker<br>Lightweight Local Tasks] <-->|Task Exec| M1
    end
    User((User)) -->|Voice <300ms| M1
    M1 -->|Trading Signals| MEXC[MEXC Exchange]
    M1 -->|Automation| N8N[n8n Workflows]
```
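The M3 failover edge in the diagram can be sketched as a simple retry-then-fallback wrapper. This is an illustrative sketch only: the node functions, names, and retry policy below are hypothetical stand-ins, not the orchestrator's actual API.

```python
"""Minimal sketch of the M3 failover pattern from the cluster diagram.

All node functions and names here are hypothetical illustrations;
the real JARVIS orchestrator's routing API may differ.
"""

def query_with_failover(prompt, primary, fallback, retries=2):
    """Try the primary node first; route to the fallback node on repeated failure."""
    for _ in range(retries):
        try:
            return primary(prompt)
        except ConnectionError:
            continue  # transient failure: retry the primary node
    # Primary exhausted its retries: fail over to the backup node (M3).
    return fallback(prompt)

# Usage with stand-in node functions:
def m1_node(prompt):
    raise ConnectionError("M1 unreachable")

def m3_node(prompt):
    return f"[M3 fallback] {prompt}"

print(query_with_failover("status?", m1_node, m3_node))  # → "[M3 fallback] status?"
```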
### The 9-Layer Cognitive Stack
1. Hardware Layer: Multi-machine GPU cluster (CUDA, ROCm).
2. OS Layer: Ubuntu/Debian optimized with custom kernel parameters.
3. Data Layer: Distributed synchronized SQLite3 (`BASE-SQL3-COMMUNE`).
4. Inference Layer: LM Studio, Ollama, vLLM load balancing.
5. Protocol Layer: Model Context Protocol (MCP) handlers.
6. Agent Layer: Specialized agents (Trading, Coding, OS Management).
7. Orchestration Layer: Claude Agent SDK / JARVIS Core.
8. Pipeline Layer: Domino Chains (n8n, Python execution).
9. Interface Layer: Real-time CUDA Whisper Voice Interface & WebSocket Dashboards.
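The agent and orchestration layers feed decisions like the 6-model trading consensus. A minimal sketch of such a consensus gate is shown below; the quorum threshold, signal labels, and function names are illustrative assumptions, not the project's actual trading logic.

```python
"""Sketch of a multi-model consensus gate (e.g. the 6-model trading consensus).

The quorum value and signal names are illustrative assumptions only.
"""
from collections import Counter

def consensus(signals, quorum=4):
    """Return the majority signal if at least `quorum` models agree, else None."""
    if not signals:
        return None
    winner, votes = Counter(signals).most_common(1)[0]
    return winner if votes >= quorum else None

# Six hypothetical model votes on a futures position:
votes = ["LONG", "LONG", "LONG", "LONG", "SHORT", "HOLD"]
print(consensus(votes))  # 4/6 agree → "LONG"
```

Requiring a quorum rather than a simple plurality means a split vote produces no trade, which is the conservative behavior you generally want before executing on a live exchange.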
## 🚀 Getting Started
### Prerequisites
- OS: Linux (Ubuntu 22.04+ recommended)
- Hardware: Minimum 1x NVIDIA GPU with 8GB VRAM (6+ GPUs across cluster recommended for full swarm mode).
- Dependencies: Python 3.11+, CUDA Toolkit 11.8+, Docker.
### Installation
1. Clone the OS repository:

   ```bash
   git clone https://github.com/Turbo31150/jarvis-linux.git
   cd jarvis-linux
   ```

2. Initialize the Cluster Environment:

   Run the main initialization script to set up the Python virtual environment and install the MCP core tools.

   ```bash
   ./scripts/init_cluster.sh
   ```

3. Configure Database Synchronization:

   Ensure `BASE-SQL3-COMMUNE` is cloned in your shared storage directory and point the `.env` configuration to it.

   ```bash
   cp .env.example .env
   # Edit .env with your absolute paths and API keys (if relying on fallback providers)
   ```

4. Start the Orchestrator:

   ```bash
   systemctl --user start jarvis-orchestrator
   ```
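As a rough guide to step 3, a `.env` might look like the fragment below. The keys and paths here are hypothetical guesses at what the init script could expect; treat `.env.example` as the authoritative list.

```ini
# Hypothetical .env layout — check .env.example for the real key names
JARVIS_DB_PATH=/mnt/shared/BASE-SQL3-COMMUNE/jarvis.db   # absolute path to the synced SQLite store
WHISPER_MODEL=large-v3                                   # model served by the CUDA voice pipeline
MEXC_API_KEY=changeme                                    # only needed if trading execution is enabled
```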
## 🎙️ Voice Subsystem (Whisper Flow)
JARVIS OS is designed to be driven primarily by voice. The pipeline relies on a highly optimized custom Whisper implementation running on CUDA, keeping the time from spoken word to command execution under 300 ms.
To test the voice pipeline standalone:

```bash
python3 src/voice/whisper_listener.py --mic-index 1 --model large-v3 --cuda-optimize
```
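After Whisper returns a transcript, it still has to be matched against the registered voice commands. The sketch below shows one way such routing could work using Python's stdlib `difflib`; the command registry, handler names, and cutoff are hypothetical, and the README does not document JARVIS's actual matcher.

```python
"""Sketch: fuzzy-match a Whisper transcript to a registered voice command.

The registry and handler names are hypothetical illustrations; the real
matcher covering 2,658 commands is not documented in this README.
"""
import difflib

COMMANDS = {
    "turn on the lights": "lights_on",
    "start trading bot": "trading_start",
    "system status report": "status_report",
}

def route(transcript, cutoff=0.6):
    """Return the handler for the closest registered command, or None if no match."""
    match = difflib.get_close_matches(transcript.lower(), COMMANDS, n=1, cutoff=cutoff)
    return COMMANDS[match[0]] if match else None

print(route("turn on the light"))   # close enough → "lights_on"
print(route("sing me a song"))      # no registered command is close → None
```

Fuzzy matching tolerates small transcription errors (dropped plurals, filler words) without requiring an exact string hit, at the cost of a similarity threshold you must tune against false positives.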
## 🛡️ Security & Privacy
JARVIS OS is built on the philosophy of the Saaspocalypse — rejecting cloud-dependent, opaque SaaS platforms.
- Air-Gapped Capable: All critical reasoning runs locally. External internet is only required for execution targets (like MEXC trading or web scraping).
- Data Sovereignty: No telemetry, no cloud syncing of voice data, local SQLite database.
## 👨‍💻 Contributing