By parslee-ai
Harness multi-agent semantic reasoning with persistent memory to deliver code suggestions, architectural guidance, optimization recommendations, code reviews with vulnerability detection, performance analysis, debugging strategies, and pattern extraction for any development task or file.
npx claudepluginhub parslee-ai/neo
Ask Neo for semantic reasoning and code suggestions
Get Neo's code review with semantic analysis
Get optimization suggestions from Neo
Get architectural guidance from Neo on design decisions
Get debugging assistance from Neo
Extract reusable patterns with Neo

A self-improving code reasoning engine that learns from experience using persistent semantic memory. Neo uses multi-agent reasoning to analyze code, generate solutions, and continuously improve through feedback loops.

If you've moved from Vibe Coding to Vibe Planning to Context Engineering and beyond, you have likely hit walls where the models are both powerful and limited, brilliant and incompetent, wise and ignorant, humble yet overconfident.
Worse, your speedy AI code assistant sometimes goes rogue: it overwrites key code in a project, writes redundant code even after just reading the documentation and source, or violates your project's patterns and design philosophy. It can be infuriating. Why doesn't the model remember? Why doesn't it learn? Why can't it keep the context of your code patterns and tech stack? This is what Neo is designed to solve.
Neo is the missing context layer for AI Code Assistants. It learns from every solution attempt, using vector embeddings to retrieve relevant patterns for new problems. It then applies the learned patterns to generate solutions, and continuously improves through feedback loops.
Fact-Based Learning: Neo builds a semantic memory of facts — constraints, architectural decisions, patterns, review learnings, known unknowns, and failures — using vector embeddings for retrieval.
Code-First Output: Instead of generating diffs that need parsing, Neo outputs executable code blocks directly, eliminating extraction failures.
Scoped Storage: Facts are scoped to global, organization, or project level, stored locally in ~/.neo/facts/ for privacy and offline access.
Model-Agnostic: Works with OpenAI, Anthropic, Google, local models, or Ollama via a simple adapter interface.
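The fact-store idea above can be sketched in a few lines. This is a minimal illustration, not Neo's actual implementation: the `FactStore` class, the toy hashing embedder, and the JSON-per-fact layout under a scoped directory are all assumptions standing in for a real embedding model and Neo's internal storage format.

```python
import hashlib
import json
import math
from pathlib import Path

def embed(text: str, dim: int = 256) -> list[float]:
    # Toy hashing embedder standing in for a real embedding model:
    # each token increments one bucket, then the vector is unit-normalized.
    vec = [0.0] * dim
    for token in text.lower().split():
        bucket = int(hashlib.md5(token.encode()).hexdigest(), 16) % dim
        vec[bucket] += 1.0
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

def cosine(a: list[float], b: list[float]) -> float:
    # Vectors are already unit-normalized, so the dot product suffices.
    return sum(x * y for x, y in zip(a, b))

class FactStore:
    """Scoped fact memory: facts persist as JSON under <root>/<scope>/."""

    def __init__(self, root: Path, scope: str = "project") -> None:
        self.path = root / scope
        self.path.mkdir(parents=True, exist_ok=True)
        self.facts: list[dict] = []

    def add(self, kind: str, text: str) -> None:
        # kind might be "constraint", "decision", "pattern", "failure", ...
        fact = {"kind": kind, "text": text, "vec": embed(text)}
        self.facts.append(fact)
        # Persist locally, keeping facts private and available offline.
        (self.path / f"{len(self.facts)}.json").write_text(json.dumps(fact))

    def retrieve(self, query: str, k: int = 3) -> list[str]:
        # Rank stored facts by semantic similarity to the query.
        qv = embed(query)
        ranked = sorted(self.facts, key=lambda f: cosine(qv, f["vec"]),
                        reverse=True)
        return [f["text"] for f in ranked[:k]]
```

In practice the embedder would be swapped for whichever model the adapter interface exposes; the scoping argument (`global`, `org`, `project`) only changes the directory facts land in.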
```
User Problem → Neo CLI → Semantic Retrieval → Reasoning → Code Generation
                                  ↓
                           [Vector Search]
                          [Pattern Matching]
                         [Confidence Scoring]
                                  ↓
                  Executable Code + Memory Update
```
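That flow can be sketched end to end as a single pass. Everything here is illustrative: the `solve` function, the keyword-overlap retrieval, and the coverage-based confidence score are stand-ins for Neo's actual vector search and reasoning stages.

```python
def solve(problem: str, memory: list[str]) -> dict:
    """One pipeline pass: retrieve → reason → generate → update memory."""
    terms = set(problem.lower().split())

    # Semantic retrieval (stand-in): keep facts sharing a term with the problem.
    relevant = [f for f in memory if terms & set(f.lower().split())]

    # Confidence scoring: fraction of problem terms covered by retrieved facts.
    covered = {w for f in relevant for w in f.lower().split() if w in terms}
    confidence = len(covered) / max(len(terms), 1)

    # Code generation: emit an executable block directly, not a diff to parse.
    code = f"# solution informed by {len(relevant)} retrieved pattern(s)\n"

    # Memory update: record the attempt so future retrievals can learn from it.
    memory.append(f"attempt: {problem}")
    return {"code": code, "confidence": confidence}
```

The point of the sketch is the shape of the loop: every call both reads from and writes to memory, which is what lets later sessions benefit from earlier attempts.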