From claude-thesis-writer
Synthesises an authorship log entry from session checkpoints and conversation context. Presents draft for author approval before appending to the project's authorship_log.md.
Install via:

```
npx claudepluginhub ccam80/thesis-writer --plugin claude-thesis-writer
```

This skill is limited to using a restricted set of tools.
This skill produces an auditable record of authorship for AI-assisted thesis writing sessions. It synthesises checkpoint notes (written silently by content-creating skills during the session) and any remaining conversation context into a structured log entry, then presents it for author review and approval before appending to the project's `authorship_log.md`.
The log serves as a defensible paper trail demonstrating the author's intellectual direction of the work — not a mechanical transcript, but a record of decisions, rejections, and domain contributions.
Invoked via `/log-session`.

**Files used**:
- `authorship_log_draft.md` in the thesis project root, written incrementally by document-planner and writer during the session
- `authorship_log.md` in the thesis project root (to read the cumulative summary)

**Steps**:
1. Read `authorship_log_draft.md` if it exists — these are the mid-session checkpoints captured while context was fresh.
2. Read the `authorship_log.md` cumulative summary (if it exists) to update running totals.
3. Checkpoints from document-planner contain structured provenance tables. Extract and aggregate these.
For each checkpoint with a Provenance Summary table, extract the tabulated counts, then aggregate across all checkpoints to produce session totals.
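The extraction-and-aggregation step can be sketched as below. This is a minimal illustration, assuming each checkpoint stores its Provenance Summary as a plain markdown table with integer count cells; the function names and table format are illustrative, not part of the skill's defined interface.

```python
import re
from collections import Counter

def parse_provenance_table(markdown: str) -> Counter:
    """Extract metric/count pairs from a checkpoint's Provenance Summary table."""
    counts = Counter()
    # Match rows like "| Surviving verbatim | 12 |". Header and separator rows
    # have no purely numeric second cell, so the regex skips them naturally.
    for metric, value in re.findall(r"\|\s*([^|]+?)\s*\|\s*(\d+)\s*\|", markdown):
        counts[metric] += int(value)
    return counts

def aggregate_checkpoints(checkpoints: list[str]) -> Counter:
    """Sum per-checkpoint provenance tables into session totals."""
    totals = Counter()
    for text in checkpoints:
        totals += parse_provenance_table(text)
    return totals
```

Using a `Counter` means checkpoints that happen to omit a metric simply contribute zero to that total rather than raising a key error.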
From the aggregated data, compute:
| Metric | Formula |
|---|---|
| AI survival rate | (surviving verbatim) / (initial AI points) |
| User content ratio | (user-dictated + user-directed) / (final points) |
| Agent acceptance rate | (agent-suggested accepted) / (agent-suggested total) |
| Figure attribution | user-suggested / total figures |
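The four formulas above can be computed directly from the aggregated totals. A minimal sketch, assuming the totals are held in a dict; the key names here are illustrative placeholders for whatever the aggregation step produces:

```python
def derived_metrics(t: dict[str, int]) -> dict[str, float]:
    """Compute the session's derived provenance metrics from aggregated totals."""
    def ratio(num: int, den: int) -> float:
        # Guard against empty sessions (no points, no figures).
        return num / den if den else 0.0
    return {
        "ai_survival_rate": ratio(t["surviving_verbatim"], t["initial_ai_points"]),
        "user_content_ratio": ratio(t["user_dictated"] + t["user_directed"], t["final_points"]),
        "agent_acceptance_rate": ratio(t["agent_accepted"], t["agent_suggested_total"]),
        "figure_attribution": ratio(t["figures_user"], t["figures_user"] + t["figures_agent"]),
    }
```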
From checkpoint qualitative notes and conversation context, identify:

- **Author direction** — instances where the author made concrete decisions, rejected agent proposals, or contributed domain knowledge
- **Agent contributions** — instances where the agent supplied structural organisation, reference suggestions, or prose drafting that was accepted
- **Iteration indicators** — sections that required significant back-and-forth before approval
Produce a structured entry in this format:
## Session [DATE] — [Scope Description]
**Exchanges**: ~[N] | **Skills used**: [list]
**Checkpoints captured**: [N]
### Scope
[1-2 sentences: what was worked on this session]
### Content Provenance
| Metric | Value |
|--------|-------|
| Initial AI generation | [N] points in [M] paragraphs |
| Final approved | [N] points in [M] paragraphs |
| Surviving verbatim from AI | [N] ([X]%) |
| User-dictated content | [N] points ([X]%) |
| User-directed content | [N] points ([X]%) |
| Agent-suggested, accepted | [N] points ([X]%) |
| Agent-suggested, rejected | [N] points |
| Figures — user | [N] |
| Figures — agent | [N] |
**Summary**: [1-2 sentence plain-language interpretation, e.g., "The author extensively restructured and expanded the initial AI proposal. Of 120 final points, 108 were user-contributed; all 12 figures were user-suggested."]
### Author Direction
- [Concrete decisions, rejections, and domain contributions — 3-8 bullet points]
- [Each bullet should be specific enough to demonstrate intellectual control]
- [Include section/paragraph references where possible]
### Agent Contributions
- [What the agent provided — structural organisation, reference suggestions, prose drafting]
- [Be honest about agent-originated content that was accepted]
### Iteration & Negotiation
- [Sections that required significant back-and-forth]
- [Key points of disagreement and how they were resolved]
### Files Modified
- [List of files written or edited during the session]
Present the draft entry as a complete block. The author will either approve it as written or request corrections. Handle corrections conversationally — update the draft and re-present until approved.
Do NOT append to the log without the author's explicit approval.
Once approved:

- Append the entry to `authorship_log.md` in the thesis project root
- Delete `authorship_log_draft.md` (the scratch file is consumed)

The top of `authorship_log.md` contains a running summary updated each session:
# Authorship Log
## Cumulative Summary
- **Sessions logged**: [N]
- **Chapters/sections covered**: [list]
- **Total exchanges**: ~[N]
- **Tool**: Claude Opus [version], thesis-writer plugin v[version]
- **Process**: All content planned collaboratively via document-planner,
prose drafted via writer skill from approved plans. All citations from
author's Zotero library. Author reviewed and approved all output.
### Cumulative Provenance (planning sessions only)
| Metric | Total |
|--------|-------|
| Points planned | [N] |
| User-contributed (dictated + directed) | [N] ([X]%) |
| Agent-contributed (accepted proposals) | [N] ([X]%) |
| Figures — user-suggested | [N] |
| Figures — agent-suggested | [N] |
---
[Session entries in reverse chronological order]
The log must be accurate, not flattering. The quantitative provenance data provides an objective foundation — report the numbers as computed, not as the agent wishes they were.
Specific honesty requirements:
The value of this log is its credibility — an honest record protects the author far better than a sanitised one. A log showing "Author extensively restructured initial AI proposal, contributed 90% of final content" is far more defensible than vague claims of "collaborative development."