Execute Groq secondary workflow: Core Workflow B. Use when implementing the secondary use case or when complementing the primary workflow. Trigger with phrases like "groq secondary workflow" or "secondary task with groq".
From groq-pack. Install: `npx claudepluginhub nickloveinvesting/nick-love-plugins --plugin groq-pack`
Secondary workflow for Groq. Complements the primary inference workflow by focusing on batch processing, model benchmarking, and advanced prompt optimization. Use this skill when you need to evaluate multiple Groq models against the same prompt set, run A/B tests on different prompting strategies, or process large volumes of inference requests within API quota constraints.
Prerequisites: `groq-install-auth-setup`, `groq-core-workflow-a`.

Define your prompt dataset and the set of model variants or parameter configurations you want to test. Structure the evaluation criteria by deciding what constitutes a good response for your use case: accuracy, conciseness, format compliance, or latency. Then implement a scoring function or evaluation rubric you can apply consistently across all model outputs.
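The setup step above can be sketched as follows. This is a minimal illustration, not a prescribed design: the model IDs, scoring weights, and field names are assumptions you should replace with your own dataset and rubric.

```python
# Sketch of the evaluation setup: prompt cases, candidate configurations,
# and a consistent scoring function. All names and weights are illustrative.
from dataclasses import dataclass, field

@dataclass
class EvalCase:
    prompt: str
    expected_keywords: list = field(default_factory=list)  # terms a good answer mentions
    max_words: int = 100  # conciseness bound

# Model/parameter configurations to compare (placeholder model IDs).
CONFIGS = [
    {"model": "llama-3.1-8b-instant", "temperature": 0.2},
    {"model": "llama-3.3-70b-versatile", "temperature": 0.2},
]

def score(case: EvalCase, response: str) -> float:
    """Score a response in [0, 1]: keyword coverage plus a conciseness bonus."""
    text = response.lower()
    hits = sum(1 for kw in case.expected_keywords if kw.lower() in text)
    coverage = hits / len(case.expected_keywords) if case.expected_keywords else 1.0
    concise = 1.0 if len(response.split()) <= case.max_words else 0.5
    return 0.8 * coverage + 0.2 * concise
```

The key property is that `score` is deterministic and applied identically to every model's output, so later comparisons measure the models rather than the rubric.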
Submit the prompt batch to each model configuration in turn, respecting rate limits between requests. Collect all responses along with latency and token count metadata. Apply your scoring function to each response and record the results. For high-volume batches, parallelize requests up to your rate limit and implement exponential backoff for any throttled calls.
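A sketch of the batch-submission step, with the Groq request abstracted behind a caller-supplied `call_model` function. The pacing interval, retry count, and backoff base are illustrative defaults, not documented Groq limits; wire `call_model` to your actual client and catch your SDK's throttling exception instead of the `RuntimeError` stand-in used here.

```python
# Submit a prompt batch to one configuration, pacing requests and backing
# off exponentially when throttled. `call_model` and `score_fn` are
# caller-supplied stand-ins.
import time

def run_batch(prompts, config, call_model, score_fn,
              min_interval=0.0, max_retries=5, base_delay=1.0):
    """Send each prompt once; collect response, latency, and score."""
    results = []
    for prompt in prompts:
        delay = base_delay
        for attempt in range(max_retries):
            try:
                start = time.monotonic()
                text = call_model(prompt, **config)
                results.append({
                    "prompt": prompt,
                    "response": text,
                    "latency_s": time.monotonic() - start,  # latency metadata
                    "score": score_fn(prompt, text),
                })
                break
            except RuntimeError:  # stand-in for your SDK's rate-limit error
                time.sleep(delay)
                delay *= 2  # exponential backoff
        time.sleep(min_interval)  # simple pacing between requests
    return results
```

Recording latency alongside the score lets the final report weigh quality against speed per configuration.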
Aggregate the evaluation scores and produce a comparison report showing which model or configuration performed best across your prompt set. Commit the winning configuration to your application settings. Archive the full evaluation dataset including prompts, responses, and scores for future regression testing.
| Aspect | Workflow A | Workflow B |
|---|---|---|
| Use Case | Primary | Secondary |
| Complexity | Medium | Low |
| Performance | Standard | Optimized |
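An end-to-end sketch tying the three steps together. The inference call is stubbed so the example runs offline; the commented-out Groq SDK usage is an assumption based on the SDK's OpenAI-style interface and should be verified against your installed version.

```python
# Complete workflow sketch: evaluate several configurations against a
# prompt set and report the winner. `call_model` is a stub.
from statistics import mean

def call_model(prompt, model, temperature=0.0):
    # A real implementation might look like (untested assumption):
    #   from groq import Groq
    #   client = Groq()  # reads GROQ_API_KEY from the environment
    #   resp = client.chat.completions.create(
    #       model=model, temperature=temperature,
    #       messages=[{"role": "user", "content": prompt}])
    #   return resp.choices[0].message.content
    return f"[{model}] answer to: {prompt}"

def evaluate(prompts, configs, score_fn):
    """Run every prompt through every config; return the winner and score table."""
    best, best_score, table = None, -1.0, {}
    for cfg in configs:
        scores = [score_fn(p, call_model(p, **cfg)) for p in prompts]
        table[cfg["model"]] = mean(scores)
        if table[cfg["model"]] > best_score:
            best, best_score = cfg["model"], table[cfg["model"]]
    return best, table
```

In practice you would plug in the batched, rate-limited submission from the earlier step rather than the bare loop shown here.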
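For error handling, a generic retry wrapper is one reasonable pattern. The exception class to retry on depends on your client; the `retryable` tuple below defaults to `RuntimeError` purely for illustration, and you would pass your SDK's rate-limit or transient-error types instead.

```python
# Generic retry wrapper with exponential backoff. Non-retryable errors
# propagate immediately; retryable ones are re-raised once retries run out.
import time

def with_retries(fn, retryable=(RuntimeError,), max_retries=4, base_delay=1.0):
    def wrapped(*args, **kwargs):
        delay = base_delay
        for attempt in range(max_retries):
            try:
                return fn(*args, **kwargs)
            except retryable:
                if attempt == max_retries - 1:
                    raise  # out of retries: surface the error to the caller
                time.sleep(delay)
                delay *= 2
    return wrapped
```

Wrapping the inference call (e.g. `with_retries(call_model)`) keeps the batch loop free of retry logic.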
For common errors, see the `groq-common-errors` skill.