From awesome-stock-skills
Automates equity research: downloads concalls and presentations from screener.in, uploads to NotebookLM, generates tailored analysis queries by company and sector, outputs professional PDF deep-dive reports.
npx claudepluginhub samyakjain0606/awesome-stock-skills

This skill uses the workspace's default tool permissions.
Automate end-to-end equity research: screener.in data collection, NotebookLM-powered analysis with dynamically generated queries, and a professional PDF report with variant perception scorecard.
Prerequisites:
- nlm CLI installed and authenticated (nlm login --check should succeed)
- venv/ with reportlab installed

If nlm login --check fails, tell the user to run nlm login first and stop.
The user provides a stock SYMBOL (e.g., GRAVITA, TCS, RELIANCE). Optionally, they can specify the number of quarters to analyze (default 4).
nlm login --check
If this fails, stop and ask the user to authenticate first.
Use WebFetch on the screener.in consolidated page (fall back to standalone if needed):
URL: https://www.screener.in/company/{SYMBOL}/consolidated/
Prompt: Extract ALL conference call transcript links AND investor presentation links.
For each, return the date (month and year), type (Concall or Investor Presentation),
and the full URL. Only include bseindia.com PDF links.
Return as a markdown table: Date, Type, Link
If consolidated returns no results, retry with:
https://www.screener.in/company/{SYMBOL}/
Sort results by date (newest first). Select the most recent N quarters (default 4).
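The sort-and-select step can be sketched in Python. The dict fields mirror the Date/Type/Link table above; the dates and URLs here are placeholders, not real scraped data:

```python
# Sketch: order scraped table rows newest-first and keep the most recent N.
# Entries are illustrative stand-ins for the WebFetch results.
entries = [
    {"date": "2024-05", "type": "Concall", "url": "https://example.com/a.pdf"},
    {"date": "2024-11", "type": "Concall", "url": "https://example.com/b.pdf"},
    {"date": "2024-08", "type": "Investor Presentation", "url": "https://example.com/c.pdf"},
]

N = 4  # default number of quarters
# YYYY-MM strings sort correctly as plain text, so no date parsing is needed
selected = sorted(entries, key=lambda e: e["date"], reverse=True)[:N]
```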
Create directories and download without asking:
mkdir -p data/companies/{SYMBOL}/concalls data/companies/{SYMBOL}/presentations
Download concalls:
curl -sL -o "data/companies/{SYMBOL}/concalls/{SYMBOL}_concall_{YYYY-MM}.pdf" "{URL}"
Download investor presentations:
curl -sL -o "data/companies/{SYMBOL}/presentations/{SYMBOL}_investor_pres_{YYYY-MM}.pdf" "{URL}"
Verify downloads with ls -lh. If any file is 0 bytes or suspiciously small (<10KB), warn and retry.
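A minimal sketch of that size check in Python; the 10KB threshold comes from the step above, and the demo directory and stub file are purely illustrative:

```python
# Sketch: flag downloaded PDFs that are suspiciously small (<10KB) so they
# can be re-fetched. The demo directory and stub file are illustrative only.
import pathlib
import tempfile

def flag_small_pdfs(root, min_bytes=10 * 1024):
    """Return paths under root whose size suggests a failed download."""
    return [p for p in pathlib.Path(root).rglob("*.pdf")
            if p.stat().st_size < min_bytes]

# Demo: one deliberately tiny file stands in for a failed curl download
root = pathlib.Path(tempfile.mkdtemp()) / "companies" / "DEMO" / "concalls"
root.mkdir(parents=True)
(root / "DEMO_concall_2024-05.pdf").write_bytes(b"stub")

suspect = flag_small_pdfs(root)
for p in suspect:
    print(f"WARN: {p.name} is only {p.stat().st_size} bytes - retry the download")
```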
nlm notebook create "{COMPANY_NAME} - Equity Analysis"
Save the notebook ID. Then upload each PDF:
nlm source add {NOTEBOOK_ID} --file "path/to/file.pdf"
Upload ALL downloaded files (both concalls and presentations). Wait for each to complete before starting the next.
This step extracts comprehensive financial data from screener.in, converts it to a PDF, and uploads it as an additional NLM source so that all queries can reference actual financial numbers.
Step 5a: Extract screener financial data
Use WebFetch on the screener page to extract ALL financial data:
URL: https://www.screener.in/company/{SYMBOL}/consolidated/
Prompt: Extract ALL financial data from this page in a structured format. Include:
1. Company overview and description
2. All key ratios (PE, PB, ROE, ROCE, Dividend Yield, etc.)
3. Quarterly results table (all quarters visible) - Revenue, Expenses, Operating Profit, OPM%, Net Profit, EPS
4. Profit & Loss annual data (all years visible)
5. Balance Sheet data (all years visible)
6. Cash Flow data (all years visible)
7. Key financial ratios table (all years visible)
8. Shareholding pattern (promoter, FII, DII, public)
9. Peer comparison table if available
10. Pros and Cons listed on the page
Return everything as clean markdown with tables preserved.
If consolidated returns 404, fall back to standalone URL.
Step 5b: Save as markdown
Save the extracted data to data/companies/{SYMBOL}/screener_data.md for reference.
Step 5c: Convert to PDF and upload to NLM
NLM only accepts PDF sources, so convert the screener data to PDF using reportlab:
Output: data/companies/{SYMBOL}/{SYMBOL}_screener_data.pdf

Then upload it:

nlm source add {NOTEBOOK_ID} --file "data/companies/{SYMBOL}/{SYMBOL}_screener_data.pdf"
This ensures NLM queries can cross-reference concall commentary with actual reported numbers, enabling much richer analysis (e.g., validating management guidance against actual delivery, tracking margin trends, balance sheet changes, shareholding shifts).
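One way to keep the screener tables intact in the generated PDF is to convert each markdown table into the list-of-rows structure that reportlab's Table flowable accepts. A sketch, with an illustrative sample table:

```python
# Sketch: turn a markdown table from screener_data.md into the list-of-rows
# structure accepted by reportlab.platypus.Table. Sample data is illustrative.
def md_table_to_rows(text):
    rows = []
    for line in text.splitlines():
        line = line.strip()
        if not line.startswith("|"):
            continue
        cells = [c.strip() for c in line.strip("|").split("|")]
        if all(set(c) <= set("-: ") for c in cells):  # skip |---|---| separator
            continue
        rows.append(cells)
    return rows

md = """\
| Quarter | Revenue | OPM% |
|---------|---------|------|
| Q1FY25  | 905     | 10.2 |
| Q2FY25  | 951     | 9.8  |
"""
rows = md_table_to_rows(md)
# rows[0] is the header row; pass rows to reportlab.platypus.Table(rows)
```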
This step is what makes queries dynamic and tailored. Gather context from TWO sources:
Source A: Screener page context (already extracted in Step 5a — reuse it, don't re-fetch)
Source B: NLM notebook describe
nlm notebook describe {NOTEBOOK_ID}
This returns an AI-generated summary of the uploaded sources, including suggested topics.
Using the company context from Step 6, generate 6 analysis queries. These are NOT hardcoded templates - they must be tailored to the specific company's business model, sector, and what's actually discussed in the sources.
The 6 query categories are:
Business Model & Value Chain - How does THIS specific company make money? What are the unit economics? Adapt to the actual business (e.g., processing spread for a recycler, subscription metrics for SaaS, same-store growth for retail, NIM for a bank).
Industry Structure & Competitive Positioning - What's the industry structure for THIS sector? Who are the real competitors? What moats exist? Adapt terminology to the sector (e.g., "organized vs unorganized" for Indian manufacturing, "TAM penetration" for tech, "branch network" for banking).
Management Quality & Execution - Track guidance vs delivery across the quarters available. How has tone changed? What strategic bets are being made? Are they delivering on promises?
Financial Deep Dive - Quarter-by-quarter P&L, margins, balance sheet, return ratios. Focus on the metrics that matter most for THIS type of business (e.g., EBITDA/ton for commodity processors, ARPU for telecom, book value for banks, SSG for retail).
Growth Triggers & Variant Perception - SOIC-style analysis. Scan for all standard growth drivers (new products, geographic expansion, market share gains, capex, regulatory tailwinds, etc.). Identify where market consensus may be wrong. Build a VP scorecard with kill-switches.
Risks & Bull/Base/Bear Scenarios - What could go wrong? Build three FY+2/3 scenarios using management's own guidance as the base case, then stress-test up and down with specific numbers.
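To make "tailored" concrete, a hypothetical Business Model & Value Chain query for a metals recycler such as GRAVITA might read as follows (the segment names and metrics are illustrative, not taken from actual sources):

```
Explain how the company makes money across its recycling segments. What is the
processing spread per ton, how has capacity utilization trended across the
quarters covered in these sources, and what share of revenue comes from
value-added products versus basic refining?
```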
Query generation guidelines:
Run each query sequentially using nlm notebook query:
nlm notebook query "{NOTEBOOK_ID}" "{QUERY}" -t 180
For the first query, capture the conversation_id from the JSON response. Use it for subsequent
queries with -c {CONVERSATION_ID} to maintain context continuity.
Important: Parse the JSON output from each query and extract the answer field. Store all 6
answers for the report generation step.
If a query fails or times out, retry once. If it fails again, note the gap and continue with remaining queries.
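Parsing the query output can be sketched as below. The answer and conversation_id field names follow this step, but the exact JSON shape emitted by the nlm CLI is an assumption and should be verified against a real response:

```python
import json

# Sketch: extract the two fields this step needs from a query's JSON output.
# Field names follow the step above; the nlm CLI's exact JSON shape is an
# assumption and should be checked against a real response.
def parse_query_response(raw):
    data = json.loads(raw)
    return data.get("answer"), data.get("conversation_id")

# Illustrative response, not real nlm output
raw = '{"answer": "OPM expanded 120bps QoQ.", "conversation_id": "conv-123"}'
answer, conv_id = parse_query_response(raw)
# Reuse conv_id with -c on queries 2-6 to keep conversation context
```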
Use the reportlab-based PDF generator. The report generation script should be written dynamically based on the actual NLM query responses, but follow this structure:
Report Sections:
PDF Design Guidelines:
Reference the PDF generation script at scripts/generators/equity_report_pdf.py if it exists.
Otherwise, write a new one following the reportlab patterns in the reference below and save it to
data/companies/{SYMBOL}/analysis/generate_report.py.
Output location:
data/companies/{SYMBOL}/analysis/{SYMBOL}_Equity_Analysis_{YYYY-MM-DD}.pdf
open "data/companies/{SYMBOL}/analysis/{SYMBOL}_Equity_Analysis_{YYYY-MM-DD}.pdf"
Tell the user the report is ready and summarize what's included (number of sections, key findings).
Use reportlab.platypus with SimpleDocTemplate. Key imports and patterns:
from reportlab.lib.pagesizes import A4
from reportlab.lib.units import inch
from reportlab.lib.colors import HexColor
from reportlab.lib.styles import ParagraphStyle, getSampleStyleSheet
from reportlab.platypus import (
SimpleDocTemplate, Paragraph, Spacer, Table, TableStyle,
PageBreak, HRFlowable
)
from reportlab.platypus.flowables import Flowable
# Custom flowables for section headers, info boxes, etc.
# Use onFirstPage for cover, onLaterPages for header/footer
# Build story list and call doc.build(story, onFirstPage=..., onLaterPages=...)
See data/companies/GRAVITA/analysis/generate_report.py for a complete working example of the
PDF generation pattern including custom flowables, color palette, KPI boxes, styled tables,
quote formatting, and scenario boxes.
nlm login --check # Verify auth
nlm notebook create "Name" # Create notebook, returns ID
nlm notebook describe {ID} # AI summary of sources
nlm source add {ID} --file "path.pdf" # Upload PDF source
nlm notebook query {ID} "question" -t 180 # Query (180s timeout)
nlm notebook query {ID} "q" -c {CONV_ID} -t 180 # Follow-up query
If authentication fails, run nlm login and retry. If a query times out, retry with -t 240. If it still fails, skip and note the gap in the report.

User: "Run the stock research pipeline on GRAVITA"
User: "Deep dive on TCS"
User: "Equity report for RELIANCE"
User: "/stock-research-pipeline HDFCBANK"