Build full-text bilingual Markdown reading files from PDF, DOI, arXiv, or HTML, preserving figures and source anchors.
Install with `npx claudepluginhub yuan1z0825/nature-skills --plugin nature-skills`. This skill uses the workspace's default tool permissions.
Use this skill to turn a research paper into a complete Markdown reading artifact.
The plugin bundles related skills alongside this one:
- An arXiv converter that turns arXiv papers into structured Markdown by fetching the LaTeX source or PDF, preserving math and sections via pandoc or pdfplumber; invoke it with an arXiv ID for implementation reference.
- A note extractor that pulls implementation-focused notes from scientific paper PDFs by converting pages to images and reading them; it handles papers up to 50 pages directly, chunks larger ones, and writes structured notes to a papers/ directory.
- A presentation builder that produces Nature-style Chinese PPTX decks from papers, preprints, PDFs, abstracts, or notes for journal clubs, meetings, and seminars; it selects key figures, writes content and speaker notes, creates the deck, and verifies it with Python tooling.
The default output should read like a paper companion, not a summary dump: a single paper.md by default.

This skill is for papers, preprints, and conference proceedings across disciplines. It is not limited to Nature-family journals.
Use this skill when the user wants any of the following:
- a full-text bilingual Markdown reading file built from a PDF, DOI, arXiv ID, or HTML page
- figures, tables, and captions preserved next to the text that discusses them
- stable source anchors so later questions can point back to exact pages and blocks
If the user only wants a summary, use a summarization skill instead. If the user only wants citation search, use a citation skill instead.
Translate for meaning, not for style. Preserve the paper's structure, evidence, hedging, terminology, equations, units, and citation markers. Keep the output in prose paragraphs unless the source itself is tabular or list-like. Do not collapse the paper into keyword bullets or slide-style notes.
The reading file should help a reader move between:
- the translated text and the original wording
- a claim and the figure, table, or caption that supports it
- any passage and its page and block ID in the source map
Determine whether the source is a PDF, a DOI, an arXiv ID, or an HTML page.
Then identify the paper type at a high level, for example whether it is a figure-heavy experimental study or a text-heavy theory or review paper.
This helps decide how tightly to couple text, figures, and captions.
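A minimal sketch of the source-type check, assuming simple pattern matching covers the common cases; the regexes and function name are illustrative, not part of the skill:

```python
import re

def classify_source(ref: str) -> str:
    """Rough source-type detection; illustrative only."""
    if re.match(r"^10\.\d{4,9}/\S+$", ref):
        return "doi"
    if re.match(r"^(arxiv:)?\d{4}\.\d{4,5}(v\d+)?$", ref, re.IGNORECASE):
        return "arxiv"
    if ref.lower().endswith(".pdf"):
        return "pdf"
    if ref.lower().startswith(("http://", "https://")):
        return "html"
    return "unknown"
```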
If the user provides a full paper, process the entire document. Do not stop at the abstract, introduction, or a few representative pages unless the user explicitly asks for a preview.
Create stable IDs for source blocks:
- S001, S002, ... for body text
- C001, C002, ... for captions
- F001, F002, ... for figures
- T001, T002, ... for tables

For each block, capture the page number, the block ID, the original text or caption, and its translation.
Keep the source map stable so later questions can point back to the same IDs. For long papers, add a page index so the reader can jump across the whole document without losing location.
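As a sketch, the source map could be serialized like this; the field names and record shape are assumptions, not a fixed schema:

```python
import json

# Hypothetical block records; field names and values are illustrative.
source_map = [
    {"id": "S001", "page": 1, "type": "text",
     "source": "We measured ...", "translation": "..."},
    {"id": "C001", "page": 2, "type": "caption",
     "source": "Fig. 1 | ...", "translation": "..."},
    {"id": "F001", "page": 2, "type": "figure", "asset": "assets/F001.png"},
]

with open("source_map.json", "w", encoding="utf-8") as f:
    json.dump(source_map, f, ensure_ascii=False, indent=2)
```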
Translate each block under the rules above: meaning over style, with structure, evidence, hedging, terminology, equations, units, and citation markers preserved, and the block ID kept attached to both the original and the translation.
If a sentence contains multiple claims, keep the translation readable but do not split away the original evidence chain.
Do not try to recreate the PDF pixel-for-pixel. Preserve semantic proximity instead.
Default placement rule: put each figure or table immediately after the paragraph that first references it, with its caption directly beneath.
If the paper has a complex multi-column layout, prefer a clean reading layout over exact visual mimicry.
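A minimal sketch of that placement rule, assuming in-text references follow the common "Fig. N" pattern; the function and data shapes are illustrative:

```python
import re

def place_figures(paragraphs: list[str], figures: dict[str, str]) -> list[str]:
    """Insert each figure's Markdown block after the paragraph that first cites it.

    `figures` maps a figure number ("1") to its Markdown block; both the
    reference pattern and the data shape are assumptions for illustration.
    """
    placed, out = set(), []
    for para in paragraphs:
        out.append(para)
        for num in re.findall(r"Fig\.?\s*(\d+)", para):
            if num in figures and num not in placed:
                out.append(figures[num])
                placed.add(num)
    return out
```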
When extracting a figure or table image, crop tightly to the figure or table itself, leave out surrounding body text and page furniture, and keep the caption as text rather than baked into the image.
Precision matters more than convenience here. A slightly smaller but correct crop is better than a wider crop that includes unrelated page content.
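A minimal cropping sketch with pdfplumber, assuming the figure's bounding box is already known; the coordinates and file names are placeholders:

```python
import pdfplumber

# Placeholder bounding box (x0, top, x1, bottom) in PDF points.
FIG1_BBOX = (72, 144, 540, 420)

with pdfplumber.open("paper.pdf") as pdf:
    page = pdf.pages[1]                    # page 2, zero-indexed
    cropped = page.crop(FIG1_BBOX)         # tight crop, no page furniture
    # Assumes an assets/ directory already exists.
    cropped.to_image(resolution=200).save("assets/F001.png")
```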
Default output is a single full-paper paper.md file.
The Markdown should usually include the title and metadata, the bilingual body in reading order, figures and tables with their captions, and the block IDs that anchor each passage to the source; a skeleton follows below.
Do not add an interactive Q&A panel or follow-up widget in the Markdown deliverable. If the user later asks a question, answer it in chat using the source map rather than embedding a conversational panel in the artifact.
If a browser preview is explicitly requested, a companion reader.html can be generated as a secondary artifact, but the Markdown file remains the primary output.
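As a sketch, paper.md might be laid out like this; the headings, separators, and ID comments are illustrative, not a fixed template:

```markdown
# Original Title / Translated Title

> Authors · Venue · DOI

## 1. Introduction

<!-- S001 · p.1 -->
Original paragraph text ...

Translated paragraph text ...

![Figure 1](assets/F001.png)
<!-- C001 · p.2 --> Fig. 1 | Original caption ... / translated caption ...
```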
When the user asks a question after the file is created, answer it in chat: look up the relevant blocks in the source map, ground the answer in them, and cite their IDs.
Every substantive answer should include a source pointer such as:
- p.4 S012-S013
- Fig. 2 caption
- Table 1

If the answer is a synthesis across several blocks, list all supporting locations.
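As a sketch of that convention, a small helper could pull pointers out of source_map.json; the helper name and record shape are assumptions carried over from the earlier sketch:

```python
import json

def cite(map_path: str, block_ids: list[str]) -> str:
    """Format 'p.N ID' pointers for the blocks that support an answer."""
    with open(map_path, encoding="utf-8") as f:
        blocks = {b["id"]: b for b in json.load(f)}
    return ", ".join(f"p.{blocks[i]['page']} {i}" for i in block_ids)

# e.g. cite("source_map.json", ["S012", "S013"]) -> "p.4 S012, p.4 S013"
```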
Prefer these outputs:
- paper.md for the full-paper Markdown artifact
- source_map.json for stable source anchors
- translation_notes.md for terminology, uncertainty, and layout notes
- assets/ for extracted figures or cropped snippets when needed
- reader.html only when the user explicitly wants a browser preview

Do not hide missing information. If the source is incomplete, label the output as draft mode.
If the input is a PDF, load the pdf skill first for extraction and OCR guidance.
If the user asks for a richer browser view, use web-artifacts-builder or frontend-design only as a preview layer on top of the Markdown workflow.
If the user wants citation-level grounding to original text, keep the source map explicit and do not lose the page or block IDs.
If the user asks for a model backend, treat the provider as configurable and keep the prompt format provider-neutral.
Use official APIs from the provider the user has available. Prefer OpenAI-compatible chat or responses interfaces when they exist, because that keeps the paper reader portable across vendors.
- DeepSeek: official OpenAI-compatible API at https://api.deepseek.com
- GLM / Zhipu: official OpenAI-compatible API at https://open.bigmodel.cn/api/paas/v4
- Qwen / DashScope: official OpenAI-compatible API at https://dashscope.aliyuncs.com/compatible-mode/v1
- Kimi / Moonshot: official OpenAI-compatible API at https://api.moonshot.cn/v1

Keep model names provider-specific, but keep the app contract the same: base_url, api_key, model, and chat-completions-style messages.
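A minimal sketch of that contract using the openai Python client, which accepts a custom base_url; the environment variable names and the fallback model string are placeholders, not fixed choices:

```python
import os
from openai import OpenAI

# Any OpenAI-compatible endpoint works; these env var names are illustrative.
client = OpenAI(
    base_url=os.environ["PAPER_READER_BASE_URL"],  # e.g. https://api.deepseek.com
    api_key=os.environ["PAPER_READER_API_KEY"],
)

resp = client.chat.completions.create(
    model=os.environ.get("PAPER_READER_MODEL", "deepseek-chat"),  # provider-specific
    messages=[
        {"role": "system",
         "content": "Translate faithfully; preserve hedging, units, and citations."},
        {"role": "user", "content": "Translate block S001: ..."},
    ],
)
print(resp.choices[0].message.content)
```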
Good output feels like a paper reader, not a machine translation dump.
It should let a reader follow the argument from start to finish, check any translated claim against the original wording, and jump between text, figures, and source locations without losing their place.