This skill should be used when the user wants to validate, review, or score their Hedera hackathon submission before submitting. Triggered by phrases like "validate hackathon submission", "review my hackathon project", "hackathon scorecard", "check submission readiness", "score my hackathon", "submission review", "judge my project", "hackathon submission quality", or "improve hackathon score".
From **hackathon-helper**: `npx claudepluginhub hedera-dev/hedera-skills --plugin hackathon-helper`

This skill uses the workspace's default tool permissions.
references/judging-criteria.md
Review a Hedera hackathon submission's codebase and score it against the official judging criteria. Focus on what is observable from the code, README, and documentation in the repository.
Scan the codebase to understand what was built:

- Dependency manifests: `package.json`, `Cargo.toml`, `go.mod`, `requirements.txt`, etc.
- Hedera SDK dependencies (`@hashgraph/sdk`, `hedera-sdk-*`, etc.)
- Hedera network configuration (chain ID `0x167`)

Evaluate the submission against each of the 7 judging criteria. For each, provide a score (1-5), evidence from the codebase, and specific improvement suggestions. Refer to references/judging-criteria.md for the complete rubric with guiding questions and 1/3/5 score descriptors.
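As a rough sketch of the scan step (the manifest names and dependency patterns mirror the list above; `find_hedera_signals` is a hypothetical helper, not part of this skill):

```python
from pathlib import Path

# Dependency manifests to look for, as listed above.
MANIFESTS = ["package.json", "Cargo.toml", "go.mod", "requirements.txt"]
# Substrings that indicate a Hedera SDK dependency.
HEDERA_PATTERNS = ["@hashgraph/sdk", "hedera-sdk"]

def find_hedera_signals(repo_root: str) -> dict:
    """Report which manifests exist and which Hedera SDK deps they mention."""
    root = Path(repo_root)
    found_manifests = [m for m in MANIFESTS if (root / m).exists()]
    hedera_deps = []
    for manifest in found_manifests:
        text = (root / manifest).read_text(encoding="utf-8", errors="ignore")
        hedera_deps += [p for p in HEDERA_PATTERNS if p in text]
    return {"manifests": found_manifests, "hedera_deps": sorted(set(hedera_deps))}
```

A real scan would also grep source files for network configuration and contract addresses; this only covers the manifest pass.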
Assess each criterion in turn (scorecard order; rubric details in references/judging-criteria.md):

1. **Innovation** — review for:
2. **Feasibility** — review for:
3. **Execution** — review for:
4. **Integration** — review for:
   - Rate integration depth:
5. **Validation** — review for:
6. **Success** — review for:
7. **Pitch** — review for:
Output a comprehensive report:
# Hackathon Submission Review
## Project Summary
- **Name**: [from README]
- **Description**: [1-2 sentences]
- **Tech Stack**: [languages, frameworks, Hedera SDK version]
- **Hedera Services Used**: [list]
## Scorecard
| Section | Score | Weight | Weighted | Key Finding |
| ----------- | -------- | ------ | -------- | ------------------ |
| Innovation | X/5 | 10% | X.X | [one-line summary] |
| Feasibility | X/5 | 10% | X.X | [one-line summary] |
| Execution | X/5 | 20% | X.X | [one-line summary] |
| Integration | X/5 | 15% | X.X | [one-line summary] |
| Validation | X/5 | 15% | X.X | [one-line summary] |
| Success | X/5 | 20% | X.X | [one-line summary] |
| Pitch | X/5 | 10% | X.X | [one-line summary] |
| **Total** | **X/35** | | **X.X** | |
**Estimated Final Grade: X%**
## Detailed Findings
### [Section Name] - X/5
**Evidence**: [What was found in the code]
**Gaps**: [What's missing]
**Quick Wins**: [Specific, actionable improvements]
[Repeat for each section]
## Top 5 Improvements (Ranked by Score Impact)
1. [Highest impact improvement] - affects [criteria] ([weight]%)
2. ...
3. ...
4. ...
5. ...
## Hedera Integration Depth Analysis
- Services detected: [list with evidence]
- Ecosystem integrations: [list or "none detected"]
- Creative usage: [describe or "none detected"]
- Recommendation: [specific services or integrations to add]
After the scorecard, provide a prioritized list of improvements ordered by impact on the final weighted score. Focus on:
For each improvement, estimate the effort (quick fix, moderate, significant) and the potential score increase.
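One way to rank candidate improvements by score impact (a sketch using the weights from the scorecard table above; `rank_improvements` is a hypothetical helper): since a criterion contributes (Score / 5) × Weight percentage points to the final grade, a one-point raise is worth Weight / 5 points, so low-scoring, high-weight criteria come first.

```python
# Scorecard weights (percent) from the table above.
WEIGHTS = {
    "Innovation": 10, "Feasibility": 10, "Execution": 20,
    "Integration": 15, "Validation": 15, "Success": 20, "Pitch": 10,
}

def rank_improvements(scores: dict) -> list:
    """Rank criteria by final-grade gain from a +1 score (capped at 5).

    A criterion contributes (score / 5) * weight percentage points,
    so each extra point is worth weight / 5 points on the final grade.
    """
    gains = []
    for name, weight in WEIGHTS.items():
        delta = min(scores[name] + 1, 5) - scores[name]  # 0 if already 5/5
        gains.append((name, delta * weight / 5))
    return sorted(gains, key=lambda g: g[1], reverse=True)
```

For example, with a 3/5 in every criterion, raising Success or Execution (weight 20%) is worth 4 grade points, versus 2 for Innovation (weight 10%).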
Weighted Score = (Score / 5) * (Section Weight / 100 * 35)
Final Grade = (Sum of Weighted Scores / 35) * 100
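A worked example of the two formulas above (the per-criterion scores are illustrative, not from a real submission):

```python
def weighted_score(score: int, weight: int) -> float:
    """Weighted Score = (Score / 5) * (Section Weight / 100 * 35)."""
    return (score / 5) * (weight / 100 * 35)

def final_grade(weighted_scores) -> float:
    """Final Grade = (Sum of Weighted Scores / 35) * 100."""
    return sum(weighted_scores) / 35 * 100

# Illustrative (score, weight%) pairs in scorecard order:
# Innovation, Feasibility, Execution, Integration, Validation, Success, Pitch.
pairs = [(4, 10), (3, 10), (5, 20), (4, 15), (3, 15), (4, 20), (3, 10)]
grade = final_grade([weighted_score(s, w) for s, w in pairs])  # 26.95 / 35 * 100 = 77%
```

Note that a perfect 5/5 on every criterion sums to 35 weighted points, i.e. a 100% final grade, which is why both formulas normalize by 35.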
See references/judging-criteria.md for the complete rubric with detailed 1/3/5 descriptors per criterion.