From research-skills
Implement scientific analysis code with quality and correctness following research workflow standards. Use when writing research code, implementing algorithms, creating analysis scripts, or developing scientific computations. Triggers on: "implement", "write code", "code this up", "analysis script", "algorithm", "compute", "calculate", or any request to write or implement scientific/numerical code.
`npx claudepluginhub cailmdaley/skills --plugin research-skills`

This skill uses the workspace's default tool permissions.
**Core philosophy**: Write it right the first time — clean, concise, conceptually dense code that doesn't need editing afterward. Zero linting violations (ruff or project-specified tools). Check CLAUDE.md for project-specific standards.
Fit as many related operations into one line as possible, where each line has an understandable big-picture purpose. Each line = one complete concept. 88 char max (ruff default).
```python
# Inline calculations in dict construction — not scattered assignments
results = {
    "chi2_E": hartlap_factor * (En @ np.linalg.solve(cov_E, En)),
    "pte_B": 1 - stats.chi2.cdf(chi2_B, nmodes),
    "cov": np.cov(np.array(samples).T),
}
```
```python
# Eliminate repetition with comprehensions
chi_E, chi_B, chi_EB = [
    np.sum(signal**2 / noise**2)
    for signal, noise in zip((ee, bb, eb), (noise_ee, noise_bb, noise_eb))
]
```
```python
# Consolidate repeated plotting blocks into loops
for ax, (cl, err, fmt, label) in zip(axes, plot_data):
    ax.errorbar(ell_bins, cl, yerr=err, fmt=fmt)
    ax.set(xlabel=r"$\ell$", ylabel=label, title=f"{label} Power Spectrum")
```
```python
# One-line conditionals over verbose if/else blocks
n_eff = n_samples if cov_path_int is not None else npatch
version_results = results_list[idx] or self.calculate_pure_eb()
output_path = user_path or generate_default_path()
```
```python
# Short-circuit execution
(var_method == "semianalytic") and self.calculate_semianalytic_cov()
```
When code exceeds 88 chars, break at logical boundaries:

- After an opening bracket `[` or `(`, and before `for` in comprehensions

Avoid these patterns:

- `temp = solve(A, b); result = factor * temp` -- combine into one conceptual line
- `results["a"] = a; results["b"] = b` -- use dict construction with inline calculations
- `.get()` with defaults: `config.get("key", default)` hides missing config -- use `config["key"]` to fail fast

Comments explain why and context, never what. Good: `# Hartlap correction for finite sample bias`. Bad: `# Create numpy array`. Remove artifacts like "change to scipy implementation" from library code.
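Putting the line-breaking rule together, a long chi-squared expression can be split after the opening bracket and before `for` while keeping one concept per logical line. This is a minimal, self-contained sketch; the variable names (`data_E`, `cov_E`, `hartlap_factor`) are hypothetical stand-ins, not from a specific analysis:

```python
import numpy as np

# Hypothetical inputs: two 3-mode data vectors and their covariances
rng = np.random.default_rng(0)
data_E, data_B = rng.normal(size=3), rng.normal(size=3)
cov_E, cov_B = np.eye(3), 2 * np.eye(3)
hartlap_factor = 0.9

# Broken at logical boundaries: after "[", before "for" -- each line
# is still one complete concept, and the whole expression stays <= 88 chars
chi2_E, chi2_B = [
    hartlap_factor * (vec @ np.linalg.solve(cov, vec))
    for vec, cov in zip((data_E, data_B), (cov_E, cov_B))
]
```

Each physical line carries a whole idea (the quadratic form, the iteration), so the break points read naturally rather than splitting mid-expression.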
Trust scientific libraries to validate their domains. Skip defensive programming. Let errors propagate -- failing fast is better than hiding bugs. Use defensive patterns only for known edge cases with clear scientific justification.
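As an illustration of the fail-fast principle, here is a minimal sketch (function names `hartlap` and `chi2` are hypothetical) that adds no input checks of its own: a singular covariance raises `LinAlgError` from NumPy itself instead of being masked by a pseudo-inverse or a default value:

```python
import numpy as np

def hartlap(n_samples: int, n_bins: int) -> float:
    # No guard against n_samples <= n_bins + 2: a non-positive factor
    # makes downstream chi^2 visibly wrong, which beats silent clamping
    return (n_samples - n_bins - 2) / (n_samples - 1)

def chi2(vec: np.ndarray, cov: np.ndarray, n_samples: int) -> float:
    # np.linalg.solve validates its own domain: a singular cov raises
    # LinAlgError here rather than propagating a hidden approximation
    return hartlap(n_samples, vec.size) * (vec @ np.linalg.solve(cov, vec))
```

A defensive version would wrap the solve in try/except and fall back to `np.linalg.pinv`; under this philosophy that fallback is only justified for a documented edge case, not as a blanket safety net.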