# slop-coder

Write SLOP code with AI assistance. Generates correct, safe, and idiomatic SLOP code with proper bounds, schemas, and error handling.
## Installation

```
/plugin marketplace add standardbeagle/standardbeagle-tools
/plugin install slop-coder@standardbeagle-tools
```

## /slop-write

Generate SLOP code based on requirements. This command helps you write correct, safe, and idiomatic SLOP code.
### Usage

```
/slop-write <description of what you want>
```

### Examples

```
/slop-write an agent that summarizes articles
/slop-write a data pipeline that filters and transforms user records
/slop-write a batch processor that calls an API with rate limiting
```
## Guidelines

When asked to write SLOP code, follow these guidelines.

Always:

- Bound every loop with `limit()`, `rate()`, or a bounded collection
- Use `emit` for streaming output
- Wrap risky calls in `try/catch`

Prefer:

- `match` expressions for branching

Structure generated code in three parts:

```
# 1. Define helper functions first
def process_item(item):
    ...

def validate(data):
    ...

# 2. Main logic
items = get_items()
for item in items with limit(1000):
    result = process_item(item)
    emit result

# 3. Final output
emit(status: "complete")
```
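The `match` expressions preferred above could look like this — a sketch only, assuming SLOP match arms use the same `->` arrow as its lambdas and `_` as a wildcard (neither is confirmed by this document):

```
# Assumed syntax: match arms with ->, wildcard _
result = match item.status:
    "active" -> process_item(item)
    "pending" -> validate(item)
    _ -> emit(error: "unknown status: {item.status}")
```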
For AI agents:

```
def agent(input):
    response = llm.call(
        prompt: input,
        schema: {output: string}
    )
    return response.output

result = agent(user_input)
emit result
```
For data pipelines:

```
result = data
    | filter(x -> condition(x))
    | map(x -> transform(x))
    | take(limit)

emit result
```
For batch processing:

```
for item in items with limit(1000), rate(10/s):
    try:
        result = process(item)
        emit(item: item.id, status: "success")
    catch:
        emit(item: item.id, status: "failed")
```
For MCP services:

```
result = service.method(arg1: val1, arg2: val2)
emit result
```
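MCP calls can fail like any remote call, so the MCP pattern combines naturally with the `try/catch` rule above. A sketch with a hypothetical service and method name:

```
# "search" and "query" are hypothetical service/method names
try:
    result = search.query(term: user_input, max_results: 10)
    emit result
catch:
    emit(error: "search service unavailable")
    stop
```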
## Output format

When generating SLOP code, produce output in this format:

```
# Description: <what the code does>
# Input: <expected input>
# Output: <what is emitted>

# Helper functions
def helper_function():
    ...

# Main logic
# ... processing code ...

# Output
emit(result: result, status: "complete")
```
## Verification

Before presenting code, verify:

- String interpolation uses `{variable}` syntax
- Pipelines use the `|` operator correctly
- Lambdas use `x -> expression` or `(a, b) -> expression`

## Common mistakes

Bound every loop:

```
# BAD
for item in stream:
    process(item)

# GOOD
for item in stream with limit(1000):
    process(item)
```
Always give LLM calls a schema:

```
# BAD
response = llm.call(prompt: "Question")

# GOOD
response = llm.call(
    prompt: "Question",
    schema: {answer: string}
)
```
Wrap risky calls in `try/catch`:

```
# BAD
result = risky_call()

# GOOD
try:
    result = risky_call()
catch:
    emit(error: "Call failed")
    stop
```
Use SLOP interpolation, not concatenation or Python f-strings:

```
# BAD
msg = "Hello " + name + "!"
msg = f"Hello {name}!"

# GOOD
msg = "Hello, {name}!"
```
Use SLOP lambda and pipeline syntax, not JavaScript or Python forms:

```
# BAD
items.map(x => x * 2)
items.map(lambda x: x * 2)

# GOOD
items | map(x -> x * 2)
```
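Putting the checks together, here is a sketch of a complete `/slop-write` result that follows the output format and every rule above. The data-source and field names (`fetch_users`, `status`, `name`) are hypothetical:

```
# Description: Emits a greeting for each active user
# Input: a collection of user records with name and status fields (assumed)
# Output: one greeting per active user, then a completion status

# Helper functions
def greet(user):
    return "Hello, {user.name}!"

# Main logic
users = fetch_users()
active = users
    | filter(u -> u.status == "active")
    | take(100)

for user in active with limit(100):
    try:
        emit(message: greet(user))
    catch:
        emit(user: user.id, status: "failed")

# Output
emit(status: "complete")
```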