Creates Jupyter notebooks for FiftyOne workflows including getting-started guides, tutorials, recipes, and full ML pipelines. Use when creating notebooks, writing tutorials, building demos, or generating FiftyOne walkthroughs covering data loading, exploration, inference, evaluation, and export.
npx claudepluginhub voxel51/fiftyone-skills

This skill inherits all available tools. When active, it can use any tool Claude has access to.
Bundled references: GETTING-STARTED-TEMPLATES.md, NOTEBOOK-STRUCTURE.md, RECIPE-TEMPLATES.md, TUTORIAL-TEMPLATES.md

ALWAYS follow these rules:
Classify the user's request before anything else:
| User Request Pattern | Type | Template |
|---|---|---|
| "getting started", "beginner", "intro", "first notebook" | Getting Started | GETTING-STARTED-TEMPLATES.md |
| "tutorial", "how to use X", "deep dive", "demonstrate X" | Tutorial | TUTORIAL-TEMPLATES.md |
| "recipe", "quick", "snippet", "how do I X" | Recipe | RECIPE-TEMPLATES.md |
| "full pipeline", "end to end", "ML pipeline", "complete workflow" | Full Pipeline | GETTING-STARTED-TEMPLATES.md with all stages |
If ambiguous, ask the user.
Use the Write tool to create a valid empty .ipynb file first:
{
"nbformat": 4,
"nbformat_minor": 2,
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"name": "python",
"version": "3.10.0"
}
},
"cells": []
}
Then use NotebookEdit with edit_mode: "insert" to add cells.
All generated code must use standard FiftyOne import aliases:
import fiftyone as fo
import fiftyone.zoo as foz
import fiftyone.brain as fob
import fiftyone.types as fot
from fiftyone import ViewField as F
See the "Code Pattern Sources" table below for full conventions.
Use NotebookEdit with edit_mode: "insert" for every cell. Build top-to-bottom using cell_id chaining: insert the first cell without cell_id (inserts at beginning), then read the notebook to get its cell_id, and insert each subsequent cell with cell_id set to the previous cell's ID. This ensures correct ordering.
Critical: Without cell_id, every insert goes to the beginning of the notebook, resulting in reversed cell order.
Never write the entire .ipynb JSON at once. Incremental cell insertion allows verification and correction.
Every notebook must include at least one cell with:
session = fo.launch_app(dataset)
FiftyOne's core value is visual exploration. Notebooks without App visualization miss the point.
For getting-started and tutorial notebooks, default to foz.load_zoo_dataset() so users can run the notebook without bringing their own data. For recipes and custom pipelines, support both zoo and user data.
Never place two code cells in a row without a markdown cell in between. The markdown cell explains what the code does and why. This is critical for tutorials and getting-started guides. Recipes are exempt — consecutive code cells are permitted for brevity (imports + load data, core solution + verify).
Before creating any cells, draft a notebook outline showing each cell's index, type, and content (see the examples below). Get user approval before generating.
When generating code, fetch https://docs.voxel51.com/llms.txt for the latest FiftyOne API patterns. This ensures generated code uses current APIs and avoids deprecated patterns.
After all cells are inserted, read the notebook back with the Read tool to verify cell order and content.
Gather information before designing the notebook:
Data source options:

- Zoo datasets: foz.load_zoo_dataset("coco-2017", ...)
- User data on disk: fo.Dataset.from_dir(...)
- Hugging Face datasets: foz.load_zoo_dataset("https://huggingface.co/datasets/...", ...)
- Demo default: foz.load_zoo_dataset("quickstart")

| Stage | Description | Always Include? |
|---|---|---|
| Data Loading | Load/create dataset | Yes |
| Exploration | App, stats, filtering | Yes |
| Brain Methods | Embeddings, uniqueness, duplicates | If relevant |
| Inference | Model predictions | If relevant |
| Evaluation | Metrics, TP/FP/FN analysis | If predictions + ground truth |
| Export | Save to format | Optional |
Fetch https://docs.voxel51.com/llms.txt using the WebFetch tool for current FiftyOne API patterns. This is the authoritative source for SDK usage.

Example outline format:

0. [markdown] Title + description
1. [markdown] What You Will Learn
2. [code] pip install
3. [markdown] ## Setup
4. [code] Imports
5. [markdown] ## Load Dataset
6. [code] Load from zoo + print dataset info
...
# Use Write tool
Write(
file_path="/path/to/notebook.ipynb",
content='{"nbformat": 4, "nbformat_minor": 2, "metadata": {"kernelspec": {"display_name": "Python 3", "language": "python", "name": "python3"}, "language_info": {"name": "python", "version": "3.10.0"}}, "cells": []}'
)
Ask the user for the notebook file path, or suggest a default based on the title.
cell_id chaining: Insert the first cell without cell_id (goes to beginning), then read the notebook to get the assigned cell_id. Each subsequent cell uses the cell_id of the previous cell:
# First cell — no cell_id, inserts at beginning
NotebookEdit(
notebook_path="/path/to/notebook.ipynb",
new_source="# Title\n\nDescription paragraph.",
cell_type="markdown",
edit_mode="insert"
)
# Read notebook to get the first cell's ID (e.g., "cell-0")
Read(file_path="/path/to/notebook.ipynb")
# Second cell — chain after cell-0
NotebookEdit(
notebook_path="/path/to/notebook.ipynb",
cell_id="cell-0",
new_source="!pip install -q fiftyone",
cell_type="code",
edit_mode="insert"
)
# Third cell — chain after cell-1
NotebookEdit(
notebook_path="/path/to/notebook.ipynb",
cell_id="cell-1",
new_source="import fiftyone as fo",
cell_type="code",
edit_mode="insert"
)
Continue chaining: each new cell's cell_id = previous cell's ID (cell-0, cell-1, cell-2, ...).
Cell content guidelines:
- Use print() liberally so users see output
- Add # Expected output: comments where helpful

After generation, read back the .ipynb file. Verify:

- Standard import aliases (fo, foz, fob, fot, F)
- The dataset variable is used consistently
- Valid JSON structure and correct cell order

Then test-execute the notebook with papermill in a throwaway virtual environment to avoid modifying the user's system Python:
python -m venv .notebook-test-env
.notebook-test-env/bin/pip install -q papermill ipykernel anywidget
.notebook-test-env/bin/python -m ipykernel install --user --name python3 --display-name "Python 3"
.notebook-test-env/bin/papermill notebook.ipynb notebook_output.ipynb
Fix any runtime errors, then re-run until all cells pass. Clean up after: rm -rf .notebook-test-env. Common issues:
- session.view requires a DatasetView, not a Dataset — use dataset.view()
- App cells may fail without anywidget in headless environments
- As a final check, open the notebook in Jupyter (jupyter notebook path/to/notebook.ipynb)

See NOTEBOOK-STRUCTURE.md for detailed cell structure patterns for each pipeline stage, including cell shapes and markdown patterns.
For actual code patterns, fetch https://docs.voxel51.com/llms.txt as the authoritative source. Related skills provide additional context for specific pipeline stages:
| Pipeline Stage | Related Skill |
|---|---|
| Imports & conventions | fiftyone-code-style |
| Data loading | fiftyone-dataset-import |
| Inference | fiftyone-dataset-inference |
| Evaluation | fiftyone-model-evaluation |
| Export | fiftyone-dataset-export |
| Embeddings & visualization | fiftyone-embeddings-visualization |
| Duplicates | fiftyone-find-duplicates |
User says: "Create a getting-started notebook for object detection"
Notebook outline (21 cells):
| Cell | Type | Content |
|---|---|---|
| 0 | markdown | # Getting Started with Object Detection in FiftyOne |
| 1 | markdown | What You Will Learn (bullet list) |
| 2 | code | !pip install fiftyone ultralytics |
| 3 | markdown | ## Setup |
| 4 | code | Imports (fo, foz, F) |
| 5 | markdown | ## Load Dataset |
| 6 | code | dataset = foz.load_zoo_dataset(...) + print(dataset) + print(dataset.first()) |
| 7 | markdown | ## Explore in the FiftyOne App |
| 8 | code | session = fo.launch_app(dataset) |
| 9 | markdown | ## Understand the Data |
| 10 | code | dataset.count_values("ground_truth.detections.label") |
| 11 | markdown | ## Run Model Inference |
| 12 | code | model = foz.load_zoo_model(...) + dataset.apply_model(...) + session.view = dataset.view() |
| 13 | markdown | ## Evaluate Predictions |
| 14 | code | dataset.evaluate_detections(...) + results.print_report() |
| 15 | markdown | ## Analyze Errors |
| 16 | code | Evaluation patches: dataset.to_evaluation_patches("eval"), filter to FP |
| 17 | markdown | ## Export for Training |
| 18 | code | dataset.export(...) to YOLOv5 format |
| 19 | markdown | ## Conclusion + Next Steps |
| 20 | code | Cleanup: fo.delete_dataset(...) |
User says: "Write a tutorial on finding annotation mistakes with FiftyOne"
Notebook outline (29 cells):
| Phase | Cells | Content |
|---|---|---|
| Introduction | 0-2 | Title, problem statement (why annotation quality matters), learning goals |
| Setup | 3-4 | pip install, imports |
| Data | 5-8 | Load detection dataset, inspect schema, explore class distribution, launch App |
| Concept | 9-10 | Explain mistakenness: what it is, how it works, why embeddings help |
| Compute | 11-14 | Compute embeddings, compute mistakenness, view mistakenness distribution |
| Explore | 15-18 | Sort by mistakenness, view worst samples in App, tag suspicious annotations |
| Hardness | 19-22 | Compute hardness, compare with mistakenness, find ambiguous samples |
| Action | 23-25 | Filter to flagged samples, export for re-annotation |
| Conclusion | 26-28 | Summary, key takeaways, next steps, cleanup |
User says: "Quick recipe to export my dataset to COCO format"
Notebook outline (7 cells):
| Cell | Type | Content |
|---|---|---|
| 0 | markdown | # Export a FiftyOne Dataset to COCO Format + one sentence description |
| 1 | code | import fiftyone as fo + import fiftyone.types as fot |
| 2 | code | dataset = fo.load_dataset("my-dataset") |
| 3 | markdown | Brief explanation of COCO format |
| 4 | code | dataset.export(export_dir="/tmp/coco-export", dataset_type=fot.COCODetectionDataset, label_field="ground_truth") |
| 5 | code | Verify: !ls /tmp/coco-export/ |
| 6 | markdown | Variations: export with filters, export labels only, export to YOLOv5 |
User says: "Create a complete ML pipeline notebook"
Notebook outline (31 cells):
| Phase | Cells | Content |
|---|---|---|
| Title | 0-1 | Title + learning goals |
| Setup | 2-3 | pip install + imports |
| Load | 4-6 | Load dataset, inspect, print schema |
| Explore | 7-10 | Launch App, class distribution, sample statistics, filtering |
| Deduplicate | 11-13 | Compute embeddings, find near-duplicates, remove duplicates |
| Infer | 14-16 | Load model, apply to dataset, view predictions in App |
| Evaluate | 17-21 | Run evaluation, print report, confusion matrix, PR curves, patches |
| Visualize | 22-24 | Compute UMAP visualization, explore embedding space, find clusters |
| Export | 25-27 | Export curated dataset, export to training format |
| Conclusion | 28-30 | Summary, next steps, cleanup |
Error: "Cells appear in wrong order"
Cause: Each insert without cell_id goes to the beginning of the notebook, resulting in reversed order.
Fix: Use cell_id chaining (see Directive 4). Insert the first cell without cell_id, read the notebook to get its ID, then chain each subsequent cell after the previous one. Use edit_mode: "replace" to fix individual cells.

Error: "Empty notebook after generation"
Fix: Ensure the initial .ipynb JSON includes "cells": [] and "nbformat": 4.

Error: "Import errors in generated code"
Fix: Fetch https://docs.voxel51.com/llms.txt before generating code. Use standard aliases: fo, foz, fob, fot, F. Check that the pip install cell includes all required packages.

Error: "Generated code uses deprecated APIs"
Fix: Fetch https://docs.voxel51.com/llms.txt for the current API. Common deprecation: dataset.count("field") → dataset.count_values("field")

Error: "Notebook doesn't render in Jupyter"
Fix: Verify the .ipynb JSON structure: "nbformat": 4 is set and "cells" is an array, not null.

Final checks:

- Use print() and display so users see intermediate results
- Include at least one fo.launch_app() call per notebook