Project Selection & Go/No-Go Gates
TL;DR: Evaluates project proposals against feasibility criteria to make informed go/no-go decisions. Assesses strategic fit, financial viability, technical feasibility, resource availability, and risk tolerance to determine whether a project should proceed, be modified, or be rejected.
Principio Rector (Guiding Principle)
Saying "no" to a project that should not exist is as valuable as saying "yes" to the right one. The cost of a failed project is not just its budget; it is the opportunity cost of the resources that could have created value elsewhere. Selection gates protect the organization from committing resources to unviable initiatives.
Assumptions & Limits
- Assumes a project proposal or business case exists with enough detail for evaluation [PLAN]
- Assumes organizational strategy and risk appetite are documented for alignment scoring [SUPUESTO]
- Breaks when selection criteria are undefined — establish criteria before attempting selection
- Does not replace detailed feasibility studies; provides screening-level assessment
- Assumes scoring weights are agreed upon by governance stakeholders [STAKEHOLDER]
- Limited to individual project evaluation; for portfolio-level prioritization use strategic-alignment
Usage
```shell
# Full project selection assessment
/pm:project-selection $ARGUMENTS="--proposal project-proposal.md --criteria weighted"

# Quick go/no-go screening
/pm:project-selection --type screening --proposal brief.md

# Comparative selection across multiple proposals
/pm:project-selection --type comparative --proposals "prop-A.md,prop-B.md,prop-C.md"
```
Parameters:
| Parameter | Required | Description |
|---|---|---|
| $ARGUMENTS | Yes | Path to project proposal or business case |
| --type | No | full (default), screening (quick), comparative (multi-proposal) |
| --criteria | No | weighted (default), threshold, pairwise |
| --proposals | No | Comma-separated list for comparative evaluation |
Service Type Routing
{TIPO_PROYECTO}: All types pass through selection gates. Criteria weights vary: strategic projects weight alignment higher; innovation projects weight learning potential; compliance projects require mandatory assessment.
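The per-type weight variation above can be sketched as a set of weight profiles. The dimension names and percentages below are hypothetical placeholders, not prescribed values; actual weights must be agreed with governance stakeholders [STAKEHOLDER].

```python
# Illustrative weight profiles per project type (hypothetical values --
# real weights must be agreed with governance stakeholders).
WEIGHT_PROFILES = {
    "strategic": {"strategic_fit": 0.35, "financial": 0.20, "feasibility": 0.15,
                  "resources": 0.10, "risk": 0.10, "dependencies": 0.10},
    "innovation": {"strategic_fit": 0.20, "financial": 0.10, "feasibility": 0.15,
                   "resources": 0.10, "risk": 0.15, "learning": 0.30},
    # Compliance projects: assessment is mandatory, so the gate cannot be skipped.
    "compliance": {"strategic_fit": 0.10, "financial": 0.10, "feasibility": 0.30,
                   "resources": 0.25, "risk": 0.25},
}

def weights_for(project_type: str) -> dict:
    """Return the weight profile for a project type, validating it sums to 100%."""
    profile = WEIGHT_PROFILES[project_type]
    assert abs(sum(profile.values()) - 1.0) < 1e-9, "weights must sum to 100%"
    return profile
```

Note that innovation projects carry a `learning` dimension that the other profiles lack, reflecting the weighting of learning potential described above.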
Before Selecting
- Read the project proposal and business case to understand the investment request [PLAN]
- Read organizational strategy or OKRs to validate strategic alignment claims [PLAN]
- Glob **/resource_plan* to verify resource availability assertions [SCHEDULE]
- Grep for similar completed projects in **/closure* or **/lessons* to inform feasibility judgment [INFERENCIA]
Entrada (Input Requirements)
- Project proposal or business case
- Portfolio prioritization results
- Resource availability
- Strategic plan and investment themes
- Risk appetite framework
Proceso (Protocol)
- Screening criteria — Apply minimum threshold criteria (strategic fit, sponsor, budget)
- Strategic alignment — Assess fit with organizational strategy and OKRs
- Financial viability — Review business case (NPV, ROI, payback)
- Technical feasibility — Assess technical capability and readiness
- Resource availability — Verify resource capacity for the project
- Risk assessment — Evaluate risk within organizational risk appetite
- Dependency check — Assess cross-project dependencies and timing
- Scoring — Apply weighted scoring model per criterion
- Decision recommendation — Recommend GO, CONDITIONAL GO, NO-GO, or DEFER
- Conditions documentation — If conditional, specify what must be resolved
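The scoring and decision-recommendation steps above can be sketched as a minimal weighted-score calculation. The decision thresholds (3.5 for GO, 2.5 for CONDITIONAL GO) and the sample criteria and scores are illustrative assumptions only; calibrate them with your governance board [STAKEHOLDER].

```python
# Minimal sketch of the scoring and decision-recommendation steps.
# Thresholds and sample values are assumptions, not prescribed figures.

def weighted_score(scores: dict, weights: dict) -> float:
    """scores: criterion -> 1-5 rating; weights: criterion -> fraction summing to 1.0."""
    assert set(scores) == set(weights), "every criterion needs a score and a weight"
    return sum(scores[c] * weights[c] for c in scores)

def recommend(score: float, has_open_conditions: bool = False) -> str:
    """Map a weighted score to GO / CONDITIONAL GO / NO-GO (hypothetical cutoffs)."""
    if score >= 3.5:
        return "CONDITIONAL GO" if has_open_conditions else "GO"
    if score >= 2.5:
        return "CONDITIONAL GO"
    return "NO-GO"

weights = {"strategic_fit": 0.30, "financial": 0.25, "feasibility": 0.20,
           "resources": 0.15, "risk": 0.10}
scores = {"strategic_fit": 4, "financial": 4, "feasibility": 3,
          "resources": 4, "risk": 3}
# 4*0.30 + 4*0.25 + 3*0.20 + 4*0.15 + 3*0.10 = 3.7
result = weighted_score(scores, weights)
```

A DEFER outcome is deliberately left out of the sketch: deferral hinges on timing and dependency context, not on the score alone.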
Edge Cases
- Strategic mandate overrides feasibility — Document the override with [STAKEHOLDER] tag. Proceed but flag all feasibility gaps as risks in the risk register. The selection record must show the mandate, not hide it [STAKEHOLDER].
- All candidates score similarly — Apply sensitivity analysis by varying weights ±10%. If ranking remains stable, decision requires qualitative tiebreaker from governance. If unstable, the scoring model needs refinement [METRIC].
- Proposal lacks data for scoring — Score what is available; mark data-gap dimensions as [SUPUESTO] with confidence=Low. Require data completion before final GO decision [SUPUESTO].
- Project already started without selection gate — Conduct retroactive assessment. If NO-GO, escalate to sponsor with sunk-cost analysis and recommendation to continue, pivot, or terminate [STAKEHOLDER].
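The ±10% sensitivity check from the similar-scores edge case can be sketched as follows. The perturbation scheme (one weight at a time, renormalized so weights still sum to 100%) and the proposal scores in the test are hypothetical choices for illustration, not a mandated procedure.

```python
# Sketch of the +/-10% sensitivity analysis: perturb each weight by 10% in
# both directions, renormalize, and check whether the top-ranked proposal
# changes. All data shapes and values here are illustrative assumptions.
from itertools import product

def weighted_score(scores: dict, weights: dict) -> float:
    return sum(scores[c] * weights[c] for c in scores)

def ranking_stable(proposals: dict, weights: dict, delta: float = 0.10) -> bool:
    """proposals: name -> {criterion: score}. True if the winner is unchanged
    under every single-weight +/-delta perturbation."""
    baseline = max(proposals, key=lambda p: weighted_score(proposals[p], weights))
    for criterion, sign in product(weights, (+1, -1)):
        w = dict(weights)
        w[criterion] *= 1 + sign * delta
        total = sum(w.values())                 # renormalize to 100%
        w = {c: v / total for c, v in w.items()}
        winner = max(proposals, key=lambda p: weighted_score(proposals[p], w))
        if winner != baseline:
            return False    # unstable: the scoring model needs refinement
    return True             # stable: escalate to a qualitative tiebreaker
```

If `ranking_stable` returns True, the decision goes to governance for a qualitative tiebreaker; if False, refine the scoring model before deciding, as the edge case above prescribes.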
Example: Good vs Bad
Good example — Weighted selection assessment:
| Attribute | Value |
|---|---|
| Criteria | 6 dimensions with agreed weights summing to 100% |
| Scoring | Each dimension scored 1-5 with evidence justification |
| Result | Weighted score 3.8/5.0 — CONDITIONAL GO |
| Conditions | 2 specific conditions with deadlines and owners |
| Sensitivity | Ranking stable across ±10% weight variation |
| Evidence | 80% [PLAN]/[METRIC], 20% [INFERENCIA] |
Bad example — Rubber-stamp approval:
"Approved" with no scoring criteria, no strategic alignment check, no resource verification, and no risk assessment. Selection without criteria is not selection — it is hope. Every project that proceeds without evaluation competes for resources against projects that were properly vetted.
Salida (Deliverables)
- Project selection assessment report
- Scoring matrix with weighted results
- GO / CONDITIONAL GO / NO-GO / DEFER recommendation
- Conditions and prerequisites (if applicable)
- Portfolio impact analysis
Validation Gate
Escalation Triggers
- Strategic mandate overrides feasibility concerns
- Multiple projects competing for same resources
- Conditional go with unresolvable conditions
- Selection criteria inadequate for project type
Additional Resources
| Resource | When to read | Location |
|---|---|---|
| Body of Knowledge | Before starting to understand standards and frameworks | references/body-of-knowledge.md |
| State of the Art | When benchmarking against industry trends | references/state-of-the-art.md |
| Knowledge Graph | To understand skill dependencies and data flow | references/knowledge-graph.mmd |
| Use Case Prompts | For specific scenarios and prompt templates | prompts/use-case-prompts.md |
| Metaprompts | To enhance output quality and reduce bias | prompts/metaprompts.md |
| Sample Output | Reference for deliverable format and structure | examples/sample-output.md |
Output Configuration
- Language: Spanish (Latin American, business register)
- Evidence: [PLAN], [SCHEDULE], [METRIC], [INFERENCIA], [SUPUESTO], [STAKEHOLDER]
- Branding: #2563EB royal blue, #F59E0B amber (NEVER green), #0F172A dark
Sub-Agents
Comparative Ranker
Core Responsibility
Ranks project proposals using consistent scoring, producing transparent comparative analysis. This agent operates autonomously within the project selection domain, applying systematic analysis and producing structured outputs that integrate with the broader project management framework.
Process
- Gather Inputs. Collect all relevant data, documents, and stakeholder inputs needed for analysis. Validate data quality and completeness before proceeding.
- Analyze Context. Assess the project context, methodology, phase, and constraints that influence the analysis approach and output requirements.
- Apply Framework. Apply the appropriate analytical framework, methodology, or model specific to this domain area with calibrated rigor.
- Generate Findings. Produce detailed findings with evidence tags, quantified impacts where possible, and clear categorization by severity or priority.
- Validate Results. Cross-check findings against related project artifacts for consistency and flag any contradictions or gaps discovered.
- Formulate Recommendations. Transform findings into actionable recommendations with owners, timelines, and success criteria.
- Deliver Output. Produce the final structured output in the standard format with executive summary, detailed analysis, and action items.
Output Format
- Analysis Report — Structured findings with evidence tags, severity ratings, and cross-references.
- Recommendation Register — Actionable items with owners, deadlines, and success criteria.
- Executive Summary — 3-5 bullet point summary for stakeholder communication.
Go Nogo Recommender
Core Responsibility
Produces go/no-go recommendations with supporting evidence and conditions for approval. This agent operates autonomously within the project selection domain, applying systematic analysis and producing structured outputs that integrate with the broader project management framework.
Process
- Gather Inputs. Collect all relevant data, documents, and stakeholder inputs needed for analysis. Validate data quality and completeness before proceeding.
- Analyze Context. Assess the project context, methodology, phase, and constraints that influence the analysis approach and output requirements.
- Apply Framework. Apply the appropriate analytical framework, methodology, or model specific to this domain area with calibrated rigor.
- Generate Findings. Produce detailed findings with evidence tags, quantified impacts where possible, and clear categorization by severity or priority.
- Validate Results. Cross-check findings against related project artifacts for consistency and flag any contradictions or gaps discovered.
- Formulate Recommendations. Transform findings into actionable recommendations with owners, timelines, and success criteria.
- Deliver Output. Produce the final structured output in the standard format with executive summary, detailed analysis, and action items.
Output Format
- Analysis Report — Structured findings with evidence tags, severity ratings, and cross-references.
- Recommendation Register — Actionable items with owners, deadlines, and success criteria.
- Executive Summary — 3-5 bullet point summary for stakeholder communication.
Scoring Model Builder
Core Responsibility
Builds weighted scoring models with stakeholder-agreed weights and calibrated scales. This agent operates autonomously within the project selection domain, applying systematic analysis and producing structured outputs that integrate with the broader project management framework.
Process
- Gather Inputs. Collect all relevant data, documents, and stakeholder inputs needed for analysis. Validate data quality and completeness before proceeding.
- Analyze Context. Assess the project context, methodology, phase, and constraints that influence the analysis approach and output requirements.
- Apply Framework. Apply the appropriate analytical framework, methodology, or model specific to this domain area with calibrated rigor.
- Generate Findings. Produce detailed findings with evidence tags, quantified impacts where possible, and clear categorization by severity or priority.
- Validate Results. Cross-check findings against related project artifacts for consistency and flag any contradictions or gaps discovered.
- Formulate Recommendations. Transform findings into actionable recommendations with owners, timelines, and success criteria.
- Deliver Output. Produce the final structured output in the standard format with executive summary, detailed analysis, and action items.
Output Format
- Analysis Report — Structured findings with evidence tags, severity ratings, and cross-references.
- Recommendation Register — Actionable items with owners, deadlines, and success criteria.
- Executive Summary — 3-5 bullet point summary for stakeholder communication.
Selection Criteria Designer
Core Responsibility
Designs project selection criteria: strategic fit, financial value, risk, feasibility, and resource availability. This agent operates autonomously within the project selection domain, applying systematic analysis and producing structured outputs that integrate with the broader project management framework.
Process
- Gather Inputs. Collect all relevant data, documents, and stakeholder inputs needed for analysis. Validate data quality and completeness before proceeding.
- Analyze Context. Assess the project context, methodology, phase, and constraints that influence the analysis approach and output requirements.
- Apply Framework. Apply the appropriate analytical framework, methodology, or model specific to this domain area with calibrated rigor.
- Generate Findings. Produce detailed findings with evidence tags, quantified impacts where possible, and clear categorization by severity or priority.
- Validate Results. Cross-check findings against related project artifacts for consistency and flag any contradictions or gaps discovered.
- Formulate Recommendations. Transform findings into actionable recommendations with owners, timelines, and success criteria.
- Deliver Output. Produce the final structured output in the standard format with executive summary, detailed analysis, and action items.
Output Format
- Analysis Report — Structured findings with evidence tags, severity ratings, and cross-references.
- Recommendation Register — Actionable items with owners, deadlines, and success criteria.
- Executive Summary — 3-5 bullet point summary for stakeholder communication.