Universal coding standards and best practices. Activates when writing new code, reviewing code quality, refactoring, or enforcing naming, formatting, or structural consistency. Covers immutability, file organization, error handling, input validation, and naming conventions.
Universal coding standards applicable across all projects and languages. These are non-negotiable quality standards.
ALWAYS create new objects. NEVER mutate existing ones.
Immutable data prevents hidden side effects, makes debugging easier, enables safe concurrency, and makes state changes explicit and traceable.
```javascript
// WRONG: mutates the original object
user.name = "New Name";
items.push(newItem);
delete config.oldKey;

// CORRECT: creates new objects
const updatedUser = { ...user, name: "New Name" };
const updatedItems = [...items, newItem];
const { oldKey, ...updatedConfig } = config;
```
```python
# WRONG: mutates the original
user["name"] = "New Name"
items.append(new_item)

# CORRECT: creates new objects
updated_user = {**user, "name": "New Name"}
updated_items = [*items, new_item]
```
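In Python, frozen dataclasses can enforce immutability at the type level rather than by convention. A minimal sketch (the `User` shape is illustrative):

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)  # frozen=True makes attribute assignment raise an error
class User:
    name: str
    email: str

user = User(name="Old Name", email="old@example.com")

# replace() returns a new instance; the original is untouched
updated_user = replace(user, name="New Name")

print(user.name)          # Old Name
print(updated_user.name)  # New Name
```

Attempting `user.name = "New Name"` on a frozen dataclass raises `FrozenInstanceError`, turning accidental mutation into an immediate, visible failure.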
When mutating deliberately, add a comment explaining why:
# Deliberately mutating for performance: this array has 100K+ elements
# and copying would cause GC pressure in the hot loop
items.append(new_item)
| Metric | Target | Maximum |
|---|---|---|
| Lines per file | 200-400 | 800 |
| Functions per file | 5-15 | 25 |
| Exports per file | 1-5 | 10 |
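These targets can be checked mechanically in CI. A minimal sketch of a line-count check (the function names and this script itself are illustrative, not part of any standard tooling):

```python
from pathlib import Path

# Maximum from the table above
MAX_LINES_PER_FILE = 800

def exceeds_limit(source: str, limit: int = MAX_LINES_PER_FILE) -> bool:
    """True if the source text has more lines than the limit allows."""
    return len(source.splitlines()) > limit

def files_over_limit(root: str, pattern: str = "*.py") -> list[str]:
    """Paths under `root` whose line count exceeds the maximum."""
    return [
        str(path)
        for path in Path(root).rglob(pattern)
        if exceeds_limit(path.read_text(encoding="utf-8"))
    ]
```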
Prefer organizing by feature rather than by layer (e.g. `users/service.py` over `services/user_service.py`).

Signs a file needs splitting: it exceeds the maximums above, mixes unrelated responsibilities, or is hard to navigate without search.

How to split: group functions by responsibility, move each group into its own focused module, and update imports at the call sites.
A function that exceeds 50 lines is doing too much. Split it.
```python
# BAD: 80-line function doing everything
def process_order(order):
    # validate (15 lines)
    # calculate totals (20 lines)
    # apply discounts (15 lines)
    # save to database (10 lines)
    # send notification (10 lines)
    # update inventory (10 lines)
    ...

# GOOD: orchestrator with focused helpers
def process_order(order: Order) -> ProcessedOrder:
    validated = validate_order(order)
    totals = calculate_totals(validated)
    discounted = apply_discounts(totals)
    saved = save_order(discounted)
    notify_order_placed(saved)
    update_inventory(saved)
    return saved
```
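Each helper is now small enough to test in isolation. A minimal sketch of one of them (the `Order` shape and the error type are assumptions, since the original leaves them unspecified):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Order:
    items: list
    customer_id: str

class InvalidOrderError(ValueError):
    pass

def validate_order(order: Order) -> Order:
    """Single responsibility: reject malformed orders, pass valid ones through."""
    if not order.items:
        raise InvalidOrderError("order has no items")
    if not order.customer_id:
        raise InvalidOrderError("order has no customer")
    return order
```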
Another sign a function needs splitting: comments like `# Step 1:`, `# Step 2:` within the function body.

Never silently swallow errors. Every error should be handled explicitly, logged with context, or propagated, wrapped with added context where useful.
```python
# WRONG: silently swallows all errors
def get_data(id):
    try:
        return service.fetch(id)
    except Exception:
        return None
```
```python
# CORRECT: handles specific errors, propagates unexpected ones
def get_data(id: str) -> Data:
    try:
        return service.fetch(id)
    except NotFoundError:
        logger.info("data_not_found", id=id)
        raise
    except ConnectionError as e:
        logger.error("service_unavailable", id=id, error=str(e))
        raise ServiceUnavailableError(f"Cannot reach data service: {e}") from e
```
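The `from e` in the example above preserves the original cause on the wrapping exception, so tracebacks show both layers. A self-contained sketch of why that matters (the stub `fetch` is illustrative):

```python
class ServiceUnavailableError(Exception):
    pass

def fetch():
    # Stub standing in for a real network call
    raise ConnectionError("connection refused")

def get_data():
    try:
        return fetch()
    except ConnectionError as e:
        # `from e` links the new exception to its cause; without it,
        # the underlying ConnectionError is invisible in the traceback
        raise ServiceUnavailableError("Cannot reach data service") from e

try:
    get_data()
except ServiceUnavailableError as err:
    print(type(err.__cause__).__name__)  # ConnectionError
```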
```javascript
// WRONG: empty catch block
try {
  await fetchData(id);
} catch (error) {
  // do nothing
}
```
```javascript
// CORRECT: handle and provide context
try {
  await fetchData(id);
} catch (error) {
  if (error instanceof NotFoundError) {
    logger.info("data_not_found", { id });
    return null;
  }
  logger.error("fetch_failed", { id, error: String(error) });
  throw new ServiceError(`Failed to fetch data: ${error}`);
}
```
Validate all input at system boundaries so interior code can trust its inputs. System boundaries include API endpoints, CLI arguments, file and network reads, message-queue consumers, and webhook handlers.
```python
# Boundary: validate at the API endpoint
from pydantic import BaseModel, Field

class CreateUserRequest(BaseModel):
    email: str = Field(..., pattern=r"^[\w.+-]+@[\w-]+\.[\w.]+$")
    name: str = Field(..., min_length=1, max_length=200)
    age: int = Field(..., ge=0, le=150)

# Interior: function receives validated, typed data
def create_user(request: CreateUserRequest) -> User:
    # No need to re-validate here -- the boundary did it
    return User(email=request.email, name=request.name, age=request.age)
```
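The same boundary pattern can be hand-rolled when a validation library is unavailable. A stdlib-only sketch (the `parse_request` function and its error messages are illustrative):

```python
import re
from dataclasses import dataclass

EMAIL_PATTERN = re.compile(r"^[\w.+-]+@[\w-]+\.[\w.]+$")

@dataclass(frozen=True)
class CreateUserRequest:
    email: str
    name: str
    age: int

def parse_request(raw: dict) -> CreateUserRequest:
    """Boundary: turn untrusted input into a validated, typed object."""
    email = raw.get("email", "")
    if not EMAIL_PATTERN.match(email):
        raise ValueError(f"invalid email: {email!r}")
    name = raw.get("name", "")
    if not (1 <= len(name) <= 200):
        raise ValueError("name must be 1-200 characters")
    age = raw.get("age")
    if not isinstance(age, int) or not (0 <= age <= 150):
        raise ValueError("age must be an integer between 0 and 150")
    return CreateUserRequest(email=email, name=name, age=age)
```

Interior code then accepts `CreateUserRequest`, never a raw `dict`, so the type signature documents that validation already happened.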
// Boundary: validate at the API route
import { z } from "zod";
const CreateUserSchema = z.object({
email: z.string().email(),
name: z.string().min(1).max(200),
age: z.number().int().min(0).max(150),
});
export async function POST(request: Request) {
const body = await request.json();
const validated = CreateUserSchema.parse(body);
// validated is fully typed from here on
return createUser(validated);
}
Variables — use descriptive names:
BAD: `d`, `tmp`, `val`, `x`, `res`, `cb`, `fn`, `cfg`
GOOD: `duration`, `temporaryFile`, `userCount`, `response`, `callback`, `formatter`, `config`

Functions — start with a verb:
BAD: `data()`, `user()`, `process()`
GOOD: `fetchData()`, `createUser()`, `processOrder()`

Booleans — use is/has/can/should prefixes:
BAD: `active`, `valid`, `loading`
GOOD: `isActive`, `isValid`, `isLoading`, `hasPermission`, `canEdit`, `shouldRetry`

Constants — use SCREAMING_SNAKE_CASE, with units in the name where relevant:
BAD: `maxRetries = 3`, `defaultTimeout = 5000`
GOOD: `MAX_RETRIES = 3`, `DEFAULT_TIMEOUT_MS = 5000`

Classes — use nouns naming the concept:
BAD: `userData`, `createUser`, `userHelper`
GOOD: `User`, `UserService`, `UserRepository`
Deep nesting makes code hard to read and reason about.
```python
# BAD: 5+ levels of nesting
def process(users, filters, options):
    results = []
    if users:
        for user in users:
            if user.active:
                for filter in filters:
                    if filter.matches(user):
                        if options.get("verbose"):
                            log(user)
                        results.append(user)
    return results
```
```python
# GOOD: early returns and extraction
def process(users, filters, options):
    if not users:
        return []
    active_users = [u for u in users if u.active]
    matched = [u for u in active_users if any(f.matches(u) for f in filters)]
    if options.get("verbose"):
        for user in matched:
            log(user)
    return matched
```
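The flattened version is also easy to unit-test in isolation. A self-contained sketch with stub types (the `User` and `PrefixFilter` shapes are illustrative):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class User:
    name: str
    active: bool

@dataclass(frozen=True)
class PrefixFilter:
    prefix: str

    def matches(self, user: User) -> bool:
        return user.name.startswith(self.prefix)

def process(users, filters, options):
    # Guard clause replaces the outermost `if users:` nesting level
    if not users:
        return []
    active_users = [u for u in users if u.active]
    matched = [u for u in active_users if any(f.matches(u) for f in filters)]
    if options.get("verbose"):
        for user in matched:
            print(f"matched: {user.name}")
    return matched
```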
Prefer guard clauses: `if not valid: return error` at the top of the function, instead of wrapping the remaining 100 lines in `if valid: ...`.

Before marking work complete, verify the code against these standards:
- No unexplained mutation of existing objects
- Files and functions within the size targets above
- No silently swallowed errors
- Input validated at every system boundary
- Names follow the conventions above
- Nesting kept shallow with guard clauses and extraction