Use this skill when the user mentions "Computer", "LCARS", "Enterprise computer", "start the computer", "computer analyze", "computer search", "computer transcribe", "captain's log", "monitor", "briefing", "compare", "summarize", "explain", "translate", "knowledge", "remember", "pipeline", "export", "report", or references the Star Trek computer interface. Provides operational knowledge for the Computer plugin system including server management, API endpoints, agent integration, data storage, cross-referencing prior results, desktop notifications, smart voice routing, and proactive suggestions.
From the `chendren/computer` plugin (`npx claudepluginhub chendren/computer`). This skill uses the workspace's default tool permissions.
References: `references/chart-patterns.md`, `references/lcars-design.md`
The Computer plugin provides a Star Trek LCARS-themed interface for voice transcription, AI analysis, chart generation, web search, monitoring, logging, comparison, summarization, translation, explanation, knowledge management, workflow pipelines, and export/reports.
Start: `/computer:computer` or `bash "${CLAUDE_PLUGIN_ROOT}/scripts/start.sh"`
Stop: `/computer:computer stop`
Status: `/computer:status`

All POST endpoints broadcast to connected WebSocket clients for real-time display.
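Before pushing data from scripts, it can help to confirm the server is actually up. A minimal sketch that polls `/api/health` (assumes the default port 3141 used throughout this document; the retry count is arbitrary):

```shell
# Sketch: poll /api/health a few times, then report whether the Computer is reachable.
status="offline"
for attempt in 1 2 3; do
  if curl -sf --max-time 2 http://localhost:3141/api/health >/dev/null 2>&1; then
    status="online"
    break
  fi
  sleep 1
done
echo "Computer $status"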
| Method | Path | Purpose |
|---|---|---|
| GET | /api/health | Server health check |
| GET/POST | /api/transcripts | Store + display transcripts |
| GET/POST | /api/analyses | Store + display analyses (+ desktop notification) |
| POST | /api/charts | Render Chart.js visualization |
| POST | /api/search-results | Display search results |
| GET/POST | /api/logs | Captain's log entries |
| GET/POST | /api/monitors | Monitor status tracking (+ alert notifications) |
| GET/POST | /api/comparisons | Side-by-side comparisons |
| GET/POST | /api/knowledge | Knowledge base entries (LanceDB vector storage) |
| POST | /api/knowledge/search | Semantic search with method selection |
| DELETE | /api/knowledge/:id | Remove knowledge entry and all chunks |
| POST | /api/knowledge/bulk | Bulk ingest multiple documents |
| GET | /api/knowledge/stats | Knowledge base statistics |
| POST | /api/tts/speak | Generate spoken response |
| POST | /api/claude/query | Stream Claude response (SSE) |
| POST | /api/transcribe/file | Upload audio for Whisper transcription |
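As a concrete example, a `POST /api/charts` body is essentially a Chart.js configuration. A minimal sketch using the LCARS palette given later in this document (the `title` wrapper field is an assumption; only `type`, `data`, and `options` are standard Chart.js):

```json
{
  "title": "Warp core output",
  "type": "bar",
  "data": {
    "labels": ["Alpha", "Beta", "Gamma"],
    "datasets": [{
      "label": "Output (TJ)",
      "data": [12, 19, 7],
      "backgroundColor": ["#FF9900", "#CC99CC", "#9999FF"]
    }]
  },
  "options": { "plugins": { "legend": { "labels": { "color": "#FF9900" } } } }
}
```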
| Command | Purpose |
|---|---|
/computer:computer | Launch/stop server |
/computer:analyze | Sentiment, topics, entities, action items |
/computer:search | Web search with synthesis |
/computer:transcribe | Whisper audio transcription |
/computer:status | System diagnostics |
/computer:compare | Side-by-side comparison of files/text |
/computer:summarize | Multi-level document summarization |
/computer:monitor | Set up watches on URLs/files/processes |
/computer:log | Captain's log entries |
/computer:brief | Activity briefing and status report |
/computer:pipeline | Chain multiple operations in sequence |
/computer:know | Store, retrieve, and search knowledge base |
/computer:export | Generate formatted reports (markdown/html/json) |
| Agent | Model | Purpose |
|---|---|---|
| analyst | Opus | Sentiment, topics, action items, summaries |
| researcher | Sonnet | Web search and information synthesis |
| visualizer | Sonnet | Chart.js config generation with LCARS colors |
| transcription-processor | Sonnet | Transcript cleanup and structuring |
| comparator | Opus | Side-by-side comparison with radar charts |
| summarizer | Opus | Multi-level summarization (executive → detailed) |
| monitor | Sonnet | Continuous monitoring and alerting |
| translator | Sonnet | Multi-language translation with cultural context |
| explainer | Opus | Layered explanations (ELI5 → deep dive) |
| pipeline | Opus | Workflow orchestration chaining operations |
| knowledge | Opus | Persistent knowledge store, retrieve, synthesize |
| Panel | Purpose |
|---|---|
| Dashboard | Bridge console — system stats, active monitors, recent logs, activity feed |
| Main | Conversation with Claude |
| Transcript | Live voice transcription + file upload |
| Analysis | Sentiment, topic, entity analysis results |
| Charts | Chart.js visualizations |
| Search | Web search results |
| Log | Captain's log entries with stardates |
| Monitor | Active monitor cards with status and history |
| Compare | Side-by-side comparative analysis |
| Knowledge | Searchable knowledge base with confidence levels and tags |
macOS desktop notifications fire automatically for completed analyses and monitor alerts.
When the user speaks via microphone, the input is analyzed for command intent:
Recognized intents route to: `/computer:analyze`, `/computer:search`, `/computer:compare`, `/computer:summarize`, `/computer:monitor`, `/computer:translate`, `/computer:explain`, `/computer:log`, `/computer:know`, `/computer:status`, `/computer:brief`. A green "Routed" badge appears in the UI when voice routing is active.
Primary: #FF9900, #CC99CC, #9999FF, #FF9966, #CC6699, #99CCFF, #FFCC00 Background: #000000, Text: #FF9900, Grid: #333333
JSON files in ${CLAUDE_PLUGIN_ROOT}/data/:
`transcripts/` — Voice and file transcripts
`analyses/` — Analysis results, summaries, explanations, translations
`sessions/` — Conversation logs
`logs/` — Captain's log entries
`monitors/` — Monitor configurations and status
`comparisons/` — Comparison results
`vectordb/` — LanceDB vector database (knowledge embeddings, chunks, metadata)

When the user references earlier work ("what did we find about X?", "in the last analysis...", "compare this to what we found before"), you should:
```shell
curl -s http://localhost:3141/api/analyses | jq '.[0:5]'
curl -s http://localhost:3141/api/logs
curl -s http://localhost:3141/api/transcripts
curl -s http://localhost:3141/api/knowledge
```
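To make "what did we find about X?" recall concrete, prior results can be filtered by keyword with `jq`. A sketch, run here against a hypothetical sample rather than the live endpoint (the response schema, a JSON array of objects with a `text` field, is an assumption, not documented above):

```shell
# Hypothetical sample of what /api/analyses might return (schema is an assumption).
cat > /tmp/computer-analyses-sample.json <<'EOF'
[
  {"title": "Warp core report", "text": "Plasma flow nominal."},
  {"title": "Sensor sweep", "text": "Anomaly detected in sector four."}
]
EOF

# Keep only prior results that mention the topic the user asked about (case-insensitive).
jq --arg topic "anomaly" '[.[] | select(.text | test($topic; "i"))]' /tmp/computer-analyses-sample.json
```

Against the live server, replace the sample file with `curl -s http://localhost:3141/api/analyses`.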
Based on context, suggest relevant follow-up actions.
From any command or agent, push data to the running UI by writing JSON to a temp file and using curl:
```shell
# Write JSON to a temp file first (avoids shell escaping issues), then POST with a file reference:
curl -X POST http://localhost:3141/api/analyses -H 'Content-Type: application/json' -d @/tmp/computer-result.json
curl -X POST http://localhost:3141/api/charts -H 'Content-Type: application/json' -d @/tmp/computer-chart.json
curl -X POST http://localhost:3141/api/search-results -H 'Content-Type: application/json' -d @/tmp/computer-search.json
curl -X POST http://localhost:3141/api/logs -H 'Content-Type: application/json' -d @/tmp/computer-log.json
curl -X POST http://localhost:3141/api/monitors -H 'Content-Type: application/json' -d @/tmp/computer-monitor.json
curl -X POST http://localhost:3141/api/comparisons -H 'Content-Type: application/json' -d @/tmp/computer-comparison.json
curl -X POST http://localhost:3141/api/knowledge -H 'Content-Type: application/json' -d @/tmp/computer-knowledge.json
```
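The "write JSON to a temp file" step can be sketched with a quoted heredoc, which sidesteps shell quoting entirely (the payload fields below are illustrative; the server's exact analysis schema is not documented here):

```shell
# A quoted heredoc ('EOF') writes the payload verbatim: no shell expansion, so quotes and $ are safe.
# Field names here are illustrative, not a documented schema.
cat > /tmp/computer-result.json <<'EOF'
{
  "title": "Sentiment analysis",
  "text": "Crew morale is high. No action items.",
  "sentiment": "positive"
}
EOF

# Sanity-check the payload exists before POSTing with: curl ... -d @/tmp/computer-result.json
grep -c '"title"' /tmp/computer-result.json
```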
The knowledge base uses LanceDB for vector storage with Ollama nomic-embed-text (768-dim) for embeddings.
Chunking strategies:
- `fixed` — N-character chunks with overlap
- `sentence` — Split on sentence boundaries (default for short facts)
- `paragraph` — Split on double newlines (default for medium text)
- `sliding` — Fixed window with configurable step
- `semantic` — Split when embedding similarity drops below threshold
- `recursive` — Split by headers, then paragraphs, then sentences (default for long docs)

Search methods:
- `vector` — Cosine similarity nearest neighbors
- `keyword` — BM25-style text search
- `hybrid` — Combined vector + keyword (default, best general-purpose)
- `mmr` — Maximal Marginal Relevance (diversity-promoting)
- `multi_query` — Multiple query variations merged via Reciprocal Rank Fusion

POST /api/knowledge body:

```json
{
  "text": "The content to store",
  "title": "Optional title",
  "source": "user|analysis|search|monitor|import",
  "confidence": "high|medium|low",
  "tags": ["tag1", "tag2"],
  "chunk_strategy": "paragraph",
  "chunk_options": {}
}
```
POST /api/knowledge/search body:

```json
{
  "query": "search text",
  "method": "hybrid",
  "limit": 10,
  "metadata_filter": { "source": "user", "confidence": "high", "tags": ["tag"] }
}
```
For short status messages, include "speak": true in status broadcasts to have the Computer speak:
```shell
curl -X POST http://localhost:3141/api/tts/speak -H 'Content-Type: application/json' -d '{"text":"Analysis complete."}'
```
Max 300 characters. Only use for brief acknowledgements, not full results.
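Since the endpoint caps text at 300 characters, a defensive sketch clips the message first (pure shell; nothing here is assumed beyond the cap stated above):

```shell
# Clip a status message to the documented 300-character TTS limit before POSTing it.
message="Analysis complete. All systems nominal."
clipped=$(printf '%s' "$message" | cut -c1-300)
printf '%s\n' "$clipped"
```

The clipped value then goes in the `"text"` field of the `/api/tts/speak` payload.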