Use when user wants to archive, dump, or back up an entire Telegram channel or chat history to NDJSON with all media files downloaded. Full history extraction with resume support.
From the `tlg` plugin in the `terrylica/cc-skills` marketplace (install: `npx claudepluginhub terrylica/cc-skills --plugin tlg`). This skill is limited to a fixed set of tools.
Archive a complete Telegram channel/group/chat to NDJSON + downloaded media files.
Self-Evolving Skill: this skill improves through use. If instructions are wrong, parameters have drifted, or a workaround was needed, fix this file immediately rather than deferring. Only update for real, reproducible issues.
Sessions are stored at `~/.local/share/telethon/<profile>.session`. If no session exists, run `/tlg:setup` first.

```bash
/usr/bin/env bash << 'EOF'
SCRIPT="${CLAUDE_PLUGIN_ROOT:-$HOME/.claude/plugins/marketplaces/cc-skills/plugins/tlg}/scripts/tg-cli.py"

# Full dump: NDJSON + all media (photos, videos, documents)
uv run --python 3.13 "$SCRIPT" dump @ChannelName ./output/ChannelName

# NDJSON only (skip media downloads — much faster)
uv run --python 3.13 "$SCRIPT" dump @ChannelName ./output/ChannelName --no-media

# Dump by numeric chat ID
uv run --python 3.13 "$SCRIPT" dump -1001234567890 ./output/MyChannel

# Use a different profile
uv run --python 3.13 "$SCRIPT" -p missterryli dump @ChannelName ./output/ChannelName
EOF
```
| Parameter | Type | Description |
|---|---|---|
| chat | string/int | Channel username (@name) or numeric chat ID |
| output | path | Output directory (messages.ndjson + media/ created inside) |
| --no-media | flag | Skip media downloads, produce NDJSON only |
```
output/ChannelName/
├── messages.ndjson   ← one JSON object per line, chronological (oldest first)
└── media/
    ├── 6.jpg         ← named by message ID for cross-referencing
    ├── 12.png
    ├── 45.mp4
    └── ...
```
Each line is a JSON object with these fields:
| Field | Type | Description |
|---|---|---|
| id | int | Telegram message ID |
| date | string | ISO 8601 timestamp with timezone |
| text | string/null | Full message text (no truncation) |
| has_media | bool | Whether message contains media |
| media_type | string/null | Telethon class name (MessageMediaPhoto, etc.) |
| media_file | string/null | Filename in media/ dir (e.g., "6.jpg") |
| views | int/null | View count (channels only) |
| forwards | int/null | Forward count |
| reply_to_msg_id | int/null | Parent message ID if reply |
| grouped_id | int/null | Album group ID (shared across album messages) |
| edit_date | string/null | ISO 8601 timestamp of last edit |
| sender.id | int | Sender's Telegram user/channel ID |
| sender.name | string | Display name (channel title or user first name) |
| sender.username | string/null | @username if set |
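To illustrate the schema, a minimal reader that parses NDJSON lines and picks out replies (records below are made-up samples following the field table, not real dump output):

```python
import json

sample = "\n".join([
    '{"id": 6, "date": "2024-01-01T10:00:00+00:00", "text": "hello", '
    '"has_media": true, "media_file": "6.jpg", "reply_to_msg_id": null, '
    '"sender": {"id": 1, "name": "Chan", "username": null}}',
    '{"id": 7, "date": "2024-01-01T10:05:00+00:00", "text": "re: hello", '
    '"has_media": false, "media_file": null, "reply_to_msg_id": 6, '
    '"sender": {"id": 2, "name": "User", "username": "user"}}',
])

# One JSON object per line, in chronological order
messages = [json.loads(line) for line in sample.splitlines() if line.strip()]
replies = [m for m in messages if m["reply_to_msg_id"] is not None]
print(len(messages), len(replies))  # 2 1
```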
Re-running the same command skips media files that already exist (the script checks `dest.exists()`), while the NDJSON is fully rewritten each run. This makes it safe to resume interrupted downloads.
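The resume logic amounts to an existence check before each download; a sketch of the idea (not the actual script):

```python
import tempfile
from pathlib import Path

def should_download(media_dir: Path, filename: str) -> bool:
    """Skip files that already exist from a previous, interrupted run."""
    dest = media_dir / filename
    return not dest.exists()

with tempfile.TemporaryDirectory() as d:
    media = Path(d)
    (media / "6.jpg").write_bytes(b"")       # pretend a prior run downloaded this
    print(should_download(media, "6.jpg"))   # False -> skip
    print(should_download(media, "12.png"))  # True  -> download
```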
```bash
# jq: find all GOLD BUY signals with chart screenshots
jq 'select(.text != null and (.text | test("GOLD.*BUY")) and .media_file != null)' messages.ndjson

# DuckDB: aggregate by date
duckdb -c "SELECT date::DATE AS day, count(*) FROM read_ndjson('messages.ndjson') GROUP BY day ORDER BY day"
```

```python
# Python/Polars
import polars as pl
df = pl.read_ndjson("messages.ndjson")
```
Transient disconnects ("Server closed the connection") are handled by Telethon's automatic reconnect.

For git-tracked projects, gitignore the media folder:
```gitignore
# data/telegram/.gitignore
*/media/
```
This keeps the NDJSON metadata in version control while large media files stay local-only.
After this skill completes, check before closing:
Only update if the issue is real and reproducible — not speculative.