Get new data into the KB. Web-searches the topic, attaches findings to a pending stub, then auto-calls the `learn` MCP tool so the new material is promoted to a source and relinked. Conclusions are written separately via `conclude` once you have a real takeaway. Use `/vicky:research "<topic>"` when the KB has a gap on a specific subject.
Install:

```shell
npx claudepluginhub yesitsfebreeze/vicky --plugin vicky
```

Triggered by: the `/vicky:research` slash command (user-invoked).

Summary Claude sees in its skill listing (used to decide when to auto-load this skill): Fetch new external data, hand it to `learn`. The user gets one verb, `research`, and the queue drains as a side effect.
Invoke: `/vicky:research "<topic>"`

Example: `/vicky:research "GPU-driven indirect draw stream compaction"`
Workflow:

1. `research-gap "<topic>"`. If the KB already answers it, surface that and stop (unless the user asked for a refresh).
2. `enqueue question="<topic>" priority=med sources=[<any KB sources that mention it>]`. Writes `.vicky/pending/<slug>.md` with `tags: [research, pending]` and a `sources:` frontmatter block linking to upstream notes.
3. `web-search "<topic>"`. For each high-signal result, append a `## Sources` entry to the pending note: title, URL, author, date, key passages quoted verbatim. Reject paywalled, off-topic, and link-farm pages. Dedupe by canonical URL.
4. `enqueue question="<follow-up>" requested_by=research sources=[<topic-slug>]`. Tag jargon → "what is X?" follow-ups, conflicting claims → "reconcile X vs Y".
5. `learn` MCP tool. It drains the pending stub into `.vicky/sources/<slug>.md` and relinks. No conclusion is created here; that step is for the caller, once a real synthesis exists. Mandatory: research without `learn` leaves orphan pending stubs.
6. `conclude title="<slug>" sources=[<slug>, ...]` with the takeaway. Skip only if the source data is too thin to draw a conclusion yet; the dashboard's "Sources awaiting synthesis" section will surface it later.

Use when:

- The user runs `research <topic>`.
- `research-gap` came back as a gap and the user wants fresh sources.
- `WORKFLOW.md` → `priority_tags` is under-sourced.

Do not call:

- For questions the KB already answers (use `query` or `research-gap`).
- For a bulk drain with no new fetch (use `/vicky:learn` directly).
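Step 3's "dedupe by canonical URL" can be sketched as below. This is a minimal illustration under stated assumptions, not the skill's actual implementation: the set of tracking parameters to strip and the shape of a result record (`dict` with a `url` key) are both invented here.

```python
from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode

# Query parameters assumed to be pure tracking noise (hypothetical list;
# a real deployment would tune this).
TRACKING_PARAMS = {"utm_source", "utm_medium", "utm_campaign",
                   "utm_term", "utm_content", "gclid", "fbclid"}

def canonical_url(url: str) -> str:
    """Normalize a URL so trivially different links compare equal."""
    scheme, netloc, path, query, _fragment = urlsplit(url)
    path = path.rstrip("/") or "/"          # treat /a and /a/ as the same page
    kept = [(k, v) for k, v in parse_qsl(query, keep_blank_values=True)
            if k not in TRACKING_PARAMS]    # drop tracking params, keep the rest
    return urlunsplit((scheme.lower(), netloc.lower(), path,
                       urlencode(kept), ""))  # "" discards the fragment

def dedupe_results(results: list[dict]) -> list[dict]:
    """Keep the first result per canonical URL, preserving order."""
    seen: set[str] = set()
    out: list[dict] = []
    for r in results:
        key = canonical_url(r["url"])
        if key not in seen:
            seen.add(key)
            out.append(r)
    return out
```

With this normalization, `https://Example.com/a/?utm_source=x` and `https://example.com/a` collapse to one entry, which is the behavior step 3 asks for.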
Guardrail: `min_sources_per_conclusion` (WORKFLOW.md, default 2). If fewer high-signal sources survive scoring, mark the pending stub `confidence: low` in frontmatter and ask the user whether to retry or abort.

Files touched:

- `.vicky/pending/*.md`: topic stub + follow-ups (created here, drained by step 5).
- `.vicky/sources/*.md`: added by step 5.
- `.vicky/conclusions/*.md`: added by step 6 if you call `conclude`.
- `.vicky/graphs/*.json`: rebuilt by step 5.

Even when a stub is marked `confidence: low`, still run step 5.
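For concreteness, a pending stub mid-workflow (after steps 2 and 3, before `learn` drains it) might look like the sketch below. The slug, upstream note path, source title, and URL are invented; only the frontmatter keys (`tags:`, `sources:`, `confidence:`) and the `## Sources` heading come from the steps above.

```markdown
---
tags: [research, pending]
confidence: low                 # set when fewer than min_sources_per_conclusion survive scoring
sources:
  - notes/render-pipeline.md    # hypothetical upstream KB note that mentions the topic
---

# GPU-driven indirect draw stream compaction

## Sources

### Example article title
- URL: https://example.com/article
- Author: Jane Doe
- Date: 2024-01-15
- Key passages: "quoted verbatim from the page"
```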