Plugins listed here are tagged for this topic and auto-indexed from public GitHub repositories.
Plugins for ETL pipelines, data transformations, warehouse operations, and analytics engineering.
Coverage spans dbt, Airflow, Spark, pandas, and warehouse connectors (BigQuery, Snowflake, Redshift). Some include MCP servers for direct query execution.
Several generate SQL queries, dbt models, or transformation scripts. Some analyze existing queries and suggest optimization patterns.
Some generate migration scripts or track schema drift. Check the data category for additional database-focused tooling.
Automate creation, editing, formatting, extraction, and manipulation of Excel spreadsheets, Word documents, PowerPoint presentations, and PDFs. Build professional spreadsheets with financial standards and zero formula errors, analyze document content via XML inspection and format conversions, generate slide decks and thumbnails, and process PDFs with OCR, merging, encryption, and form handling.
Build production-ready data engineering stacks: Airflow DAGs for orchestration, dbt models for transformations, scalable pipelines with Spark on cloud warehouses like BigQuery and Snowflake, Kafka streaming, optimized embeddings for RAG, and vector databases like Pinecone, Weaviate, and pgvector.
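A minimal sketch of the orchestration pattern these stacks center on: an Airflow DAG wiring extract, transform, and load steps in sequence. The task names and callables are hypothetical placeholders, not any plugin's actual output.

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pull raw rows from the source system")

def transform():
    print("apply dbt-style transformations")

def load():
    print("write curated tables to the warehouse")

# Three tasks chained into the classic ETL shape.
with DAG(
    dag_id="example_etl",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    t1 = PythonOperator(task_id="extract", python_callable=extract)
    t2 = PythonOperator(task_id="transform", python_callable=transform)
    t3 = PythonOperator(task_id="load", python_callable=load)
    t1 >> t2 >> t3
```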
Delegate complex data engineering, ML, and AI workflows to specialized sub-agents that design scalable pipelines, build and optimize models, architect LLM systems, tune databases for performance, and deploy production infrastructure across clouds.
Deploy specialized research subagents to analyze markets, benchmark competitors, forecast trends, validate project ideas, collect and clean data from web/files/APIs, review scientific literature, and generate actionable insights and strategies.
Empower Claude Code to handle business analyst workflows: design KPI frameworks and dashboards for sales/marketing/product, build 3-5 year startup financial models with cohort revenue and scenario analysis, calculate TAM/SAM/SOM market sizes, and optimize seed-to-Series A metrics using Python, SQL, Snowflake, and BigQuery.
Query Neon Postgres databases and manage Linear issues directly in Claude, while leveraging agents to analyze business KPIs like MRR, CAC, and churn via SQL, and craft product strategies with market research and roadmaps.
Automate office document workflows by creating, editing, analyzing DOCX/PPTX/PDF/XLSX files, processing Google Sheets/Slides via OAuth-enabled Python CLI, extracting text/tables to Markdown/CSV/JSON/Pandas, converting formats, and enforcing Excel standards for reports.
Build production LLM applications using expert strategies for context window management via summarization, trimming, routing, and caching; RAG pipelines with chunking, embeddings, vector stores, and agents; observability via Langfuse tracing and evaluations; and retrieval optimization workflows.
Design scalable database architectures, build interactive D3.js visualizations in React/Vue/Svelte, set up A/B tests with metrics validation, audit analytics tracking for data quality, apply Postgres best practices, and optimize complex SQL queries across cloud databases like Snowflake and BigQuery.
Automate end-to-end HR workflows: plan headcount and org structures, track recruiting pipelines, generate onboarding plans and offer letters, run performance reviews with templates, benchmark compensation, analyze reports from CSV data, lookup policies, and integrate with Jira, Slack, Notion, Google Calendar, Microsoft 365, and Gmail.
Generate optimized SQL queries from natural language for databases like PostgreSQL, BigQuery, Snowflake; profile tables and datasets; perform statistical analysis and outlier detection; create publication-quality Python charts with matplotlib/seaborn/plotly and interactive HTML dashboards; QA analyses for accuracy; connect to Amplitude, Hex, Atlassian, and company knowledge bases to query events, notebooks, and docs.
Automate end-to-end Hugging Face ML workflows: train and fine-tune language/vision models on Jobs GPUs with TRL/Unsloth/PyTorch, build Gradio demos, run JS/TS inference, manage repos/datasets via CLI, query leaderboards, perform local evals, explore datasets, launch GGUF servers, and publish papers.
Automate finance and accounting workflows by generating journal entries, reconciliations, financial statements, variance analyses, SOX compliance docs, and month-end checklists. Query BigQuery datasets, access company knowledge bases, and integrate with Gmail, Slack, Microsoft 365, and Google Calendar for seamless data retrieval and collaboration.
Automate sales workflows: analyze CRM pipelines for risks, stale deals, and action plans; research prospects, companies, competitors for intel and battlecards; generate personalized cold emails, LinkedIn messages, call prep briefs, forecasts, decks, one-pagers; process transcripts for follow-ups. Integrates with HubSpot, Outreach, Apollo, ZoomInfo, Close, Slack, Notion, Gmail, calendars.
Generate SEO-optimized marketing content for blogs, emails, social media, and landing pages; plan drip campaigns with calendars, A/B tests, and objectives; audit websites, track competitors, ensure brand voice; create performance reports; connect remotely to tools like Ahrefs, HubSpot, Amplitude, Klaviyo, Canva for SEO data, analytics, CRM, and design.
Build and debug Redpanda Connect streaming pipelines by generating YAML configurations and Bloblang transformation scripts from natural language descriptions. Validate configs interactively, test scripts with sample inputs, and discover components like inputs, outputs, and processors for Kafka-to-S3 or database workflows.
Generate SQL queries from natural language descriptions using your database schema for PostgreSQL, MySQL, or BigQuery. Analyze CSV or Excel user data to produce cohort retention heatmaps, engagement trends, churn insights, and research recommendations. Evaluate A/B tests for statistical significance, confidence intervals, lift, and ship/extend/stop decisions with Python-powered reports.
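A hedged sketch of the A/B evaluation step such a plugin performs: a two-proportion z-test plus the lift calculation. The conversion counts are made-up example numbers, and the p < 0.05 rule is a common convention rather than the plugin's exact decision logic.

```python
from math import sqrt
from scipy.stats import norm

conv_a, n_a = 480, 10_000   # control conversions / visitors
conv_b, n_b = 540, 10_000   # variant conversions / visitors

p_a, p_b = conv_a / n_a, conv_b / n_b
p_pool = (conv_a + conv_b) / (n_a + n_b)
se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
z = (p_b - p_a) / se
p_value = 2 * (1 - norm.cdf(abs(z)))       # two-sided test

lift = (p_b - p_a) / p_a
print(f"lift={lift:.1%}  z={z:.2f}  p={p_value:.4f}")
# A plugin would map p < 0.05 to a ship/extend/stop recommendation.
```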
Perform product market research workflows: generate user personas, behavioral segments, and customer journey maps from surveys, CSVs, or feedback; conduct competitive landscape analysis with competitor profiles and differentiation maps; run sentiment analysis on reviews for insights and recommendations; estimate TAM/SAM/SOM with growth projections; output markdown reports.
Connect Claude to biomedical databases like PubMed, bioRxiv, ChEMBL, and Open Targets for literature searches and target discovery. Run single-cell RNA-seq QC with scanpy, scvi-tools workflows for integration and multiomics, nf-core pipelines for sequencing analysis, and convert lab files to structured JSON/CSV for preclinical R&D acceleration.
Query SQL databases like PostgreSQL and SQLite, plus tabular files (CSV, TSV, JSON, Excel), using SLQ or native SQL via the sq CLI. Manage sources and handles, inspect schemas, diff data tables, and format outputs for CLI pipelines and terminal workflows.
Develop, debug, deploy, and actorize JavaScript/TypeScript/Python projects as Apify Actors for web scraping, data extraction, and automation. Generate output schemas from source code and run CLI scrapers for 20+ platforms including Instagram, Facebook, TikTok, YouTube, LinkedIn.
Generate plots, charts, and graphs from data via natural language requests—AI analyzes datasets, selects optimal visualization types, produces validated Python code, delivers performance metrics and insights, saves artifacts, and creates documentation.
Automate training and optimization of ML models for classification and regression on datasets: analyze data, select/configure algorithms, cross-validate, evaluate metrics, generate Python code using scikit-learn/PyTorch/TensorFlow/XGBoost, and save artifacts.
Automate long-form webnovel creation: initialize projects interactively with genre/characters/worldbuilding/outlines, generate beat sheets and chapters (2000+ words), extract entities/relationships to SQLite indexes, visualize status and entity graphs in a read-only dashboard, recover interrupted workflows, and validate chapters via agents for inconsistencies, pacing, out-of-character (OOC) moments, reader pull, and quality reports.
Automate archiving historical PostgreSQL/MySQL records to archive tables or cloud storage (S3, Azure Blob, GCS) using age/status-based rules, retention policies, compression, and compliance tracking to shrink primary database size and manage cold data.
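A rough sketch of the age-based archival rule this kind of plugin automates: move rows past the retention window into an archive table in a single transaction. The table, column, and retention values are hypothetical.

```python
import psycopg2

RETENTION = "90 days"  # illustrative retention policy

with psycopg2.connect("dbname=app") as conn:
    with conn.cursor() as cur:
        # Delete and insert atomically via a writable CTE so rows are
        # never lost between the two steps.
        cur.execute(
            """
            WITH moved AS (
                DELETE FROM orders
                WHERE created_at < now() - %s::interval
                RETURNING *
            )
            INSERT INTO orders_archive SELECT * FROM moved
            """,
            (RETENTION,),
        )
        print(f"archived {cur.rowcount} rows")
```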
Build, debug, optimize, secure, and deploy FireCrawl web scraping pipelines for LLM/RAG data ingestion: scrape/crawl sites to markdown/JSON, extract structured data, handle rate limits/errors, add monitoring/observability, scale with backoff/caching, and integrate into Node/Python apps from dev to production.
Automate machine learning feature engineering by generating and executing validated Python code to create interactions, scale data, encode categoricals, select features via importance analysis, compute metrics, save artifacts, and generate documentation.
Automate full Databricks lakehouse lifecycle: build Delta Lake ETL pipelines with medallion architecture and Auto Loader, engineer ML workflows via MLflow and Feature Store, deploy jobs/pipelines with Asset Bundles and GitHub Actions CI/CD, secure via Unity Catalog RBAC, optimize costs/performance, troubleshoot errors, and monitor with system tables.
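A hedged sketch of the bronze-layer ingestion step in a medallion pipeline using Auto Loader. It assumes the Databricks-provided `spark` session; the paths and table name are placeholders.

```python
# Incrementally ingest raw JSON files into a bronze Delta table.
(spark.readStream
    .format("cloudFiles")                                  # Auto Loader
    .option("cloudFiles.format", "json")
    .option("cloudFiles.schemaLocation", "/Volumes/raw/_schemas/events")
    .load("/Volumes/raw/events/")
    .writeStream
    .option("checkpointLocation", "/Volumes/raw/_checkpoints/events")
    .trigger(availableNow=True)                            # batch-style run
    .toTable("bronze.events"))
```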
Generate and execute automated Python pipelines for data cleaning, transformation, validation, and ETL in ML workflows. Analyze context to produce AI/ML code with built-in validation, error handling, performance metrics, saved artifacts, and documentation.
Forecast future values from historical time series data using ARIMA and Prophet models, including trend, seasonality, and autocorrelation analysis with confidence intervals. Generate validated AI/ML code for forecasting tasks complete with error handling, performance metrics, insights, artifacts, and documentation.
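A minimal forecasting sketch in the spirit of this plugin, using statsmodels' ARIMA on a toy monthly series. The (1, 1, 1) order is an arbitrary example, not a tuned choice.

```python
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

# Toy monthly series standing in for real historical data.
sales = pd.Series(
    [112, 118, 132, 129, 121, 135, 148, 148, 136, 119, 104, 118],
    index=pd.date_range("2024-01-01", periods=12, freq="MS"),
)

fit = ARIMA(sales, order=(1, 1, 1)).fit()
forecast = fit.get_forecast(steps=3)
print(forecast.predicted_mean)
print(forecast.conf_int(alpha=0.05))  # 95% confidence intervals
```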
Analyze cryptocurrency market sentiment by pulling data from social media, news, on-chain metrics, derivatives, whale activity, and Fear & Greed Index to generate 0-100 mood scores, weighted insights, and predictions for overall market or specific coins like BTC.
Scan cryptocurrencies, stocks, and forex markets for top gainers, losers, volume spikes, and unusual activity. Customize by market, timeframe, category, limits, filters, and sorting. For crypto, rank 1000+ assets by a composite score of price change, volume ratio, and market cap to track pumps and trends.
Query EVM blockchain data on Ethereum, Polygon, and Arbitrum from the command line using Etherscan APIs to fetch transactions, address balances, token histories, blocks, and smart contract details, then generate structured markdown reports with holdings, histories, and insights.
Analyze and monitor on-chain blockchain metrics across chains and DeFi protocols, tracking whale movements, holder distributions, network health, TVL, fees, DEX volumes, yields, and trends. Generate analytics reports via DeFiLlama API using Python CLI tools.
Generate BUY/SELL trading signals for cryptocurrencies and stocks using technical indicators like RSI, MACD, and Bollinger Bands. Scan and rank watchlist opportunities with confidence scores, stop-loss/take-profit levels, multi-timeframe analysis, and markdown reports including risk guidance.
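A sketch of one indicator these signal generators rely on: a 14-period RSI with Wilder-style smoothing over a pandas close series. The 30/70 threshold mapping to BUY/SELL is a common convention, not the plugin's exact rule, and the prices are example data.

```python
import pandas as pd

def rsi(close: pd.Series, period: int = 14) -> pd.Series:
    delta = close.diff()
    # Wilder smoothing via an EWM with alpha = 1/period.
    gain = delta.clip(lower=0).ewm(alpha=1 / period, adjust=False).mean()
    loss = (-delta.clip(upper=0)).ewm(alpha=1 / period, adjust=False).mean()
    rs = gain / loss
    return 100 - 100 / (1 + rs)

closes = pd.Series([44.3, 44.1, 44.2, 43.6, 44.3, 44.8, 45.1, 45.4,
                    45.8, 46.1, 45.9, 46.0, 45.6, 46.2, 46.2, 46.0])
latest = rsi(closes).iloc[-1]
signal = "BUY" if latest < 30 else "SELL" if latest > 70 else "HOLD"
print(f"RSI={latest:.1f} -> {signal}")
```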
Track institutional options flow and detect smart money movements or unusual activity in BTC/ETH markets on Deribit, OKX, and Bybit via API queries, analyzing positioning and sentiment for any symbol or market-wide with customizable params like timeframe and min-premium.
Design and implement partitioning strategies for PostgreSQL and MySQL tables using range, list, hash, and composite methods to handle massive datasets. Automate schema design, maintenance routines, query optimization, and data retention policies for improved performance.
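A hedged sketch of the range-partitioning DDL such a plugin designs for PostgreSQL; the table and quarterly bounds are illustrative only, and a real design would also generate maintenance jobs for future partitions.

```python
# DDL a partitioning assistant might emit for a time-series table.
ddl = """
CREATE TABLE events (
    id         bigint      NOT NULL,
    created_at timestamptz NOT NULL,
    payload    jsonb
) PARTITION BY RANGE (created_at);

CREATE TABLE events_2024_q1 PARTITION OF events
    FOR VALUES FROM ('2024-01-01') TO ('2024-04-01');
CREATE TABLE events_2024_q2 PARTITION OF events
    FOR VALUES FROM ('2024-04-01') TO ('2024-07-01');
"""
print(ddl)
```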
Run institutional-grade stock analysis on A-share, HK, and US equities with 22 financial dimensions, simulated investor panels, DCF/LBO valuation models, and pig-butchering scam detection, all output as Bloomberg-style HTML reports.
Build production Clay SaaS integrations for lead enrichment: configure tables and webhooks, deploy receivers to Vercel or Docker, scale pipelines with Redis queues, secure with RBAC and PII guards, optimize costs and performance, troubleshoot failures, and monitor metrics using 30 dedicated Claude Code skills.
Calculate cryptocurrency capital gains taxes from exchange CSV transaction data using FIFO, LIFO, or HIFO methods. Identify taxable events across trading, DeFi, NFTs, and mining. Generate compliant tax reports and forms like Form 8949 for US, UK, and EU jurisdictions.
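An illustrative version of the FIFO lot matching this plugin applies to exchange CSVs: sells consume the oldest purchase lots first, and the realized gain is proceeds minus each lot's cost basis. Quantities and prices are invented example data.

```python
from collections import deque

lots = deque()   # open purchase lots: [quantity, unit_cost]
gains = 0.0

def buy(qty, price):
    lots.append([qty, price])

def sell(qty, price):
    global gains
    while qty > 0:
        lot = lots[0]                       # oldest lot first (FIFO)
        used = min(qty, lot[0])
        gains += used * (price - lot[1])    # proceeds minus cost basis
        lot[0] -= used
        qty -= used
        if lot[0] == 0:
            lots.popleft()

buy(1.0, 20_000)   # buy 1 BTC @ $20k
buy(0.5, 30_000)   # buy 0.5 BTC @ $30k
sell(1.2, 40_000)  # sell 1.2 BTC @ $40k
print(f"realized gain: ${gains:,.0f}")  # 1.0*20k + 0.2*10k = $22,000
```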
Design and optimize NoSQL data models for MongoDB, DynamoDB, Redis, and Cassandra by analyzing access patterns, embedding vs referencing, denormalization trade-offs, sharding keys, and indexes.
Generate professional Excel financial models including DCF valuations with FCF projections, WACC, sensitivities; LBO analyses with debt schedules, IRR/MOIC; budget vs actual variance reports with waterfalls; and dynamic pivot tables via natural language prompts and auto-invoked skills.
Build and validate linear and polynomial regression models on datasets to predict outcomes, uncover relationships, and report metrics like R-squared and RMSE. Generates validated code with error handling, delivers insights, saves artifacts, and creates documentation.
Build end-to-end AutoML pipelines in Python, automating data checks, feature engineering, model selection, hyperparameter tuning, evaluation, and deployment artifacts for repeatable ML workflows. Generate validated ML code with error handling, performance metrics, and documentation from context analysis.
Analyze DeFi liquidity pools on Uniswap V2/V3, Curve, Balancer, and other DEXes across multiple chains to calculate impermanent loss, APY, TVL, volume, fees, risks, LP profitability, and optimization opportunities using Python scripts.
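The constant-product impermanent-loss formula these analyzers apply: for a price ratio r = P_now / P_entry, IL = 2·sqrt(r)/(1+r) − 1. A minimal sketch for V2-style pools; real analysis adds fees, concentrated-liquidity ranges, and multi-asset weights.

```python
from math import sqrt

def impermanent_loss(price_ratio: float) -> float:
    # Loss of an LP position vs. simply holding, as a fraction.
    return 2 * sqrt(price_ratio) / (1 + price_ratio) - 1

for r in (1.25, 2.0, 4.0):
    print(f"price x{r}: IL = {impermanent_loss(r):.2%}")
# price x2.0 gives IL of about -5.72%, the classic Uniswap V2 figure
```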
Build secure Rust applications integrating Azure services: authenticate with Entra ID, manage Key Vault secrets/keys/certificates, perform CRUD on Cosmos DB documents and Blob Storage, and stream data via Event Hubs using official SDK patterns and code examples.
Design optimal ClickHouse schemas with MergeTree engines, ingest data at scale via Node.js/Python clients, run analytical queries, optimize performance and costs, secure deployments with RBAC and quotas, integrate CI/CD testing and monitoring, troubleshoot errors/incidents, and manage migrations/upgrades for production analytics workloads.
Wrangle tabular data in CSV, TSV, Excel, JSONL, and Parquet using qsv's 51 fast CLI commands via skills, agents, and an MCP server. Profile stats, run SQL queries on the Polars engine, join datasets, clean/validate/normalize, convert formats, generate charts/ontologies/reports, access BLS economic data, and log reproducible sessions.
Split CSV datasets into stratified training, validation, and test sets using custom ratios for ML workflows, generating production-ready Python code with validation, error handling, performance metrics, artifact saving, and automatic documentation.
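A sketch of the stratified 70/15/15 split such a plugin generates: two chained scikit-learn calls that preserve class ratios in all three sets. The input file and label column are hypothetical.

```python
import pandas as pd
from sklearn.model_selection import train_test_split

df = pd.read_csv("dataset.csv")            # hypothetical input file
X, y = df.drop(columns=["label"]), df["label"]

# First split off 30%, then halve it into validation and test.
X_train, X_tmp, y_train, y_tmp = train_test_split(
    X, y, test_size=0.30, stratify=y, random_state=42)
X_val, X_test, y_val, y_test = train_test_split(
    X_tmp, y_tmp, test_size=0.50, stratify=y_tmp, random_state=42)

print(len(X_train), len(X_val), len(X_test))
```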
Detect anomalies and outliers in datasets using ML algorithms like Isolation Forest, One-Class SVM, LOF, and autoencoders to identify unusual patterns. Generate Python code for custom anomaly detection tasks, including validation, error handling, performance metrics, insights, saved artifacts, and documentation.
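A minimal Isolation Forest sketch of the detection step this plugin automates, on synthetic data with a few injected outliers; the contamination rate is a tunable assumption.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (500, 2)),    # normal points
               rng.uniform(-6, 6, (10, 2))])  # injected outliers

model = IsolationForest(contamination=0.02, random_state=0).fit(X)
labels = model.predict(X)                     # -1 = anomaly, 1 = normal
print(f"flagged {int((labels == -1).sum())} anomalies")
```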
Optimize DeFi yield farming strategies across Ethereum, BSC, and Polygon by aggregating DeFiLlama APY data, assessing risks via TVL and audits, and generating portfolio allocations tailored to your capital, risk tolerance, duration, and preferences.
Build production Navan API integrations for travel bookings, expense management, and data syncing to ERPs/warehouses: automate OAuth auth setup, REST workflows, error debugging, CI/CD deployment, monitoring, security hardening, and webhook handling.
Build, debug, integrate, observe, and scale Python data pipelines using Apache Hamilton. Turn natural language into DAGs via DOT graphs and TDD, add LLM/RAG workflows, connect to Airflow/FastAPI/Streamlit, visualize executions, and optimize with async/Spark for production.
Conduct investment research using OpenBB Platform: analyze equities, crypto prices, macro indicators, options chains, and portfolios with commands for metrics, technicals, and optimization; generate AI reports; query expert agents for theses, valuations, and strategies.
Run a local ActionBook MCP server to give AI agents like Claude real-time browser automation: launch sessions, navigate sites, fill forms, click elements, handle multi-tabs, extract structured data to JSON/CSV with Playwright scripts, generate HTML research reports, and retrieve verified CSS/XPath selectors for any website.
Build, deploy, debug, and scale Apify Actors for web scraping: scaffold Crawlee crawlers with input schemas and routers, manage datasets/queues/APIs programmatically, set up local dev and CI/CD pipelines with GitHub Actions, diagnose errors/timeouts/proxies, tune performance/costs, secure tokens, and integrate runs into Node.js apps.
Build and manage Snowflake data platforms: connect via Node.js/Python SDKs, ingest data from S3/GCS/Azure stages/Snowpipe, construct ELT pipelines with streams/tasks/dynamic tables, tune query performance/costs/clustering, enforce RBAC/security policies/governance, integrate CI/CD with GitHub Actions/Terraform, set up multi-env/observability, troubleshoot errors/incidents.
Build, deploy, optimize, secure, and troubleshoot Python pipelines exporting Clari revenue forecasts, quotas, CRM data, and adjustments to Snowflake, BigQuery, or PostgreSQL. Includes CI/CD integration, API debugging, cost/performance tuning, local mocks, schema migrations, rate limit handling, and production checklists.
Accelerate Palantir Foundry integrations by generating Python SDK code for building ETL data pipelines, managing Ontology objects, handling API errors and rate limits, configuring RBAC and security, setting up CI/CD with GitHub Actions, deploying to GCP/AWS/Docker, and implementing monitoring, observability, and cost optimization.
Orchestrate Hex data projects via API in external pipelines: trigger parameterized runs, poll status, integrate GitHub Actions CI/CD for deploys/refreshes, optimize costs/performance/rate limits, debug errors, secure auth, deploy to Vercel/Fly.io/Cloud Run, and migrate SDKs.
Delegate AI agents to analyze business metrics and KPIs, build revenue models and dashboards, draft GDPR-compliant legal documents, integrate Stripe payments with webhooks, and develop quantitative financial models for trading strategies and portfolio optimization.
Delegate complex AI and data tasks to specialized agents that proactively build LLM applications with RAG and orchestration, design scalable ETL pipelines and warehouses, deploy MLOps workflows, optimize prompts, analyze datasets, manage context, and decompose goals into actionable hierarchies.
Build production web scraping pipelines with Bright Data: authenticate proxies/APIs, scrape JS/SPA sites via Scraping Browser/Playwright/Puppeteer/SERP, debug errors/rate limits, tune costs/performance/caching, deploy to Vercel/GCP/Fly.io, set up CI/CD/tests/webhooks, and monitor usage in Node.js/Python projects.
Act as an AI partner for Product Managers in Claude Code: prepare meeting agendas and role-plays, draft PRDs and briefs, generate stakeholder updates and weekly plans, audit codebases, conduct competitive analyses, prototype tools and dashboards, query databases for insights, critique ideas, design experiments, review metrics, with security hooks.
Execute 58 bioinformatics AI agent skills on genetic files (VCF, FASTQ, h5ad, proteomics) for pharmacogenomics, ancestry inference, scRNA-seq analysis, metagenomics profiling, variant annotation, GWAS, and clinical reporting. Generate reproducible markdown reports, plots, CSV/JSON outputs, and bundled environments via deterministic local Python runs with privacy safeguards.
Build sophisticated AI agents using LangChain, LangGraph, and Deep Agents: configure memory and filesystem backends, orchestrate subagents for task delegation, add human-in-the-loop approvals via middleware, construct RAG pipelines, and manage persistent state graphs with checkpointers.
Automate processing of PDF, DOCX, PPTX, and XLSX files in Anthropic Claude workflows: extract text, tables, images, and metadata; edit content, structure, and tracked changes; generate documents and presentations from templates; clean, format spreadsheets with formulas, charts, and financial standards.
Develop full-stack Databricks solutions: create Spark/Lakeflow pipelines, MLflow models and agents, Vector Search indexes, AI/BI dashboards, Genie Spaces, Unity Catalog metrics, Lakebase PostgreSQL, jobs/workflows, and apps with Streamlit/FastAPI, and deploy via bundles and CI/CD using Python SDK skills and direct MCP workspace access.
Automate Chinese research paper workflows: brainstorm topics and structures interactively, synthesize literature from PubMed/arXiv with BibTeX, generate Python data visualizations/stats code using matplotlib/seaborn/scipy, draft evidence-driven chapters, peer-review with checklists, polish bilingual text, and compile publication-ready LaTeX with journal templates.
Agentically audit, optimize, and manage Power BI semantic models in Microsoft Fabric: trace dependencies across workspaces for impact analysis, review quality and performance against best practices, standardize TMDL naming conventions, author and validate Power Query M expressions, and orchestrate full/incremental refreshes via REST APIs and CLI.
Review ClickHouse schemas, queries, configurations, inserts, and agent connections against 31 prioritized best practices across schema design, query optimization, and data ingestion. Get architecture advice for observability, analytics, and IoT workloads with pattern selection, decision rules, and cited provenance.
Manage Microsoft Fabric and Power BI services via Fabric CLI: navigate workspaces, handle lakehouses and OneLake files, query data with DuckDB and notebooks, automate jobs and deployments, migrate trial workspaces to production capacity, audit project configs, and access Microsoft Learn documentation.
Load, query, and analyze data from files (CSV, Parquet, JSON, Excel, Avro, spatial), S3-compatible storage, or attached DuckDB databases using SQL in Claude Code sessions. Preview schemas/samples without full downloads, convert formats, perform spatial analysis (distances, joins), search docs/session logs, install extensions.
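A sketch of the file-querying workflow: DuckDB scans CSV/Parquet in place, so a schema can be previewed without loading the whole file. The file path and column names are placeholders.

```python
import duckdb

con = duckdb.connect()

# Preview the schema without materializing the data.
print(con.sql("DESCRIBE SELECT * FROM 'events.parquet'"))

# Aggregate directly over the file.
print(con.sql("""
    SELECT user_id, count(*) AS n
    FROM 'events.parquet'
    GROUP BY user_id
    ORDER BY n DESC
    LIMIT 5
"""))
```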
Empower AI agents to automate dbt workflows: build and test models, execute CLI commands, migrate projects to Fusion or new platforms, diagnose Cloud job failures, configure Semantic Layer for business queries, generate Mermaid lineage diagrams, and manage MCP servers.
Generate vector embeddings from text data using OpenAI, Cohere, or local models, store them in a vector database with indexing, and perform semantic similarity searches to retrieve top-K matches with scores, metadata, re-ranking, and deduplication.
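A bare-bones version of the retrieval step: cosine similarity over normalized vectors, returning the top-K matches with scores. The random vectors are stand-ins for real model embeddings, and a production index would add metadata, re-ranking, and deduplication.

```python
import numpy as np

rng = np.random.default_rng(1)
corpus = rng.normal(size=(1000, 384))                 # stored embeddings
corpus /= np.linalg.norm(corpus, axis=1, keepdims=True)

query = rng.normal(size=384)
query /= np.linalg.norm(query)

scores = corpus @ query                               # cosine similarity
top_k = np.argsort(scores)[::-1][:5]                  # best 5 matches
for i in top_k:
    print(f"doc {i}: score {scores[i]:.3f}")
```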
Detect PII in codebases and data stores, generate severity-ranked risk inventories with remediation recommendations, and anonymize datasets via pseudonymization techniques, outputting formatted reports of processed fields and applied methods.
Automate creation, editing, analysis, and visual review of Office documents: build Excel spreadsheets with formulas/charts, edit Word docs and PowerPoint slides preserving layout, generate/extract from PDFs, using rendered previews for validation.
Develop, backtest, and analyze quantitative trading strategies for global stock markets (TW, US, KR, JP, HK) using the FinLab Python package, handling data access, FinLabDataFrame operations, factor research, and US market specifics.
Build and test dbt models using SQL transformations, ref/source, and YAML unit tests; configure semantic layers for metrics, dimensions, and KPI queries; troubleshoot Cloud jobs with logs, API, and git; implement Mesh governance for contracts and cross-project refs; access docs; format CLI commands; generate MCP configs for VS Code integration.
Migrate dbt projects from Core to Fusion engine or across data platforms like Snowflake to Databricks by triaging errors as auto-fixable or guided, adapting SQL dialects, and validating fixes with dbt debug, compile, and unit tests.
Automate end-to-end SageMaker AI/ML workflows: fine-tune LLMs serverlessly with SFT/DPO/RLVR via Jupyter notebooks, validate/transform datasets, select Hub models, evaluate with LLM-as-a-Judge, deploy LoRA models to endpoints/Bedrock, troubleshoot HyperPod clusters with diagnostics/SSM, and orchestrate via structured plans.
Generate Mermaid diagrams visualizing dbt model lineage and dependencies as color-coded DAGs in markdown with legends. Input manifest.json, use MCP tools, or parse code directly to quickly diagram data pipelines and model relationships for documentation and analysis.
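A hedged sketch of the core transformation: read dbt's manifest.json and emit Mermaid edges from its parent_map (a real manifest key). The actual plugin's color coding and legends go beyond this.

```python
import json

with open("target/manifest.json") as f:
    manifest = json.load(f)

lines = ["graph LR"]
for node, parents in manifest["parent_map"].items():
    if not node.startswith("model."):
        continue  # keep the diagram to model-level lineage
    for parent in parents:
        # Node ids look like "model.project.name"; keep the short name.
        lines.append(f'    {parent.split(".")[-1]} --> {node.split(".")[-1]}')

print("\n".join(lines))  # paste into a Mermaid-fenced markdown block
```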
Delegate growth hacking tasks to an AI subagent that automates user acquisition, designs viral loops, runs A/B experiments, optimizes marketing channels, analyzes conversion funnels, builds metrics dashboards, and engineers scalable data-driven growth strategies.
Delegate complex data analysis tasks to an expert agent that crafts optimized SQL queries, executes BigQuery CLI operations via Bash, handles aggregations and joins, and delivers summaries with actionable insights.
Automatically validate data contracts in edited files by running datacontract_lint.sh after Edit, Write, or MultiEdit tool uses, ensuring data pipeline quality and catching issues during development workflows without manual intervention.
Implement GDPR-compliant data privacy engineering in B2B applications, automating data subject rights fulfillment, granular consent management, multi-region data residency enforcement, privacy-preserving analytics, and privacy-by-design integration.
Analyze and optimize Azure cloud costs with FinOps Toolkit: generate optimization reports from Advisor and KQL queries, perform month-over-month analysis with anomaly detection and forecasting, deploy and manage FinOps Hubs, query Data Explorer databases, and consult specialized agents for strategy and best practices.
Streamline Airflow data engineering workflows using Astro CLI: initialize and manage local/production environments, author/debug/deploy DAGs, profile warehouse schemas with lineage tracing, integrate dbt Cosmos, query tables, and migrate to Airflow 3.x.
Onboard to Bright Data and scrape webpages as markdown/JSON with CAPTCHA bypass, search Google/Bing SERPs for structured results, extract data from 40+ sites like Amazon/LinkedIn/Instagram/TikTok via CLI/Python SDK/APIs, build custom scrapers, debug browser sessions, and gather competitive intelligence on pricing/reviews/hiring.
Build and operate AWS data lakes: create managed Iceberg tables on S3 with Glue integration, import data from S3/JDBC/Redshift/Snowflake/BigQuery/DynamoDB, execute and manage Athena SQL queries, store/query vectors, resolve assets across catalogs, audit Glue inventories, and troubleshoot connections.
Follow expert Dagster conventions and dg CLI guidance to create projects, define assets, jobs, schedules, and sensors, debug issues, and query pipeline components in Python data engineering workflows.
Build data processing pipelines, operator graphs, ABAP/S4HANA integrations, replication flows, and ML scenarios in SAP HANA Cloud Data Intelligence. Assemble workflows using Gen1/Gen2 operators and subengines in Python, Node.js, or C++, with JupyterLab for ML and Data Transformation Language functions.
Profile PySpark DataFrames or Unity Catalog tables with AI to generate data quality rule candidates, define rules via Python classes or YAML, validate against DQEngine, run end-to-end checks splitting valid/quarantined rows, and persist rules to Delta tables, volumes, or Lakebase.
Build privacy-first event tracking pipelines in walkerOS: create sources/destinations/transformers with guided templates, validate/simulate/test events, bundle flows via CLI, debug issues, generate docs, and manage projects via API server.
Delegate complex data analysis tasks to an expert agent that generates optimized SQL queries, executes BigQuery CLI (bq) operations via Bash, handles aggregations and joins, and delivers actionable insights on datasets.
Delegate growth hacking tasks to design referral programs, analyze acquisition funnels, run A/B experiments, optimize marketing channels, build metrics dashboards, and engineer scalable user growth strategies.