Strips filler, verbosity, and slop from LLM responses using a copy-paste system prompt. Reduces output ~73% without info loss; paste into ChatGPT, Claude, Cursor, or APIs.
npx claudepluginhub joshuarweaver/cascade-ai-ml-agents-misc-1 --plugin aradotso-trending-skills-37

This skill uses the workspace's default tool permissions.
> Skill by [ara.so](https://ara.so) — Daily 2026 Skills collection.
talk-normal is a system prompt (plus a shell-script helper) that strips AI slop — bullet-point padding, hollow affirmations, corporate filler — from any LLM while preserving all useful information. Tested at ~73% character reduction on GPT-4o-mini and GPT-5.4 with no information loss.
The project is a single prompt.md file (the system prompt) plus optional shell helpers. You copy the prompt text into the "System" field of any LLM interface or API call.
Repo layout:

talk-normal/
├── prompt.md        ← the system prompt (main artifact)
├── CHANGELOG.md     ← rule history
├── CONTRIBUTING.md  ← how to add rules
└── TEST_RESULTS.md  ← before/after comparisons
git clone https://github.com/hexiecs/talk-normal.git
cd talk-normal
cat prompt.md
Paste the contents of prompt.md into:

- .cursorrules or your global AI rules (Cursor)
- the system parameter of an API call (see examples below)

OpenAI (Python SDK):

import os
from pathlib import Path
from openai import OpenAI
client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])
system_prompt = Path("prompt.md").read_text()
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": "What is Python?"},
    ],
)
print(response.choices[0].message.content)
OpenAI (curl):

SYSTEM=$(jq -Rs . prompt.md)
curl https://api.openai.com/v1/chat/completions \
-H "Authorization: Bearer $OPENAI_API_KEY" \
-H "Content-Type: application/json" \
-d "{
\"model\": \"gpt-4o-mini\",
\"messages\": [
{\"role\": \"system\", \"content\": $SYSTEM},
{\"role\": \"user\", \"content\": \"What is Python?\"}
]
}"
Anthropic (Python SDK):

import os
from pathlib import Path
import anthropic
client = anthropic.Anthropic(api_key=os.environ["ANTHROPIC_API_KEY"])
system_prompt = Path("prompt.md").read_text()
message = client.messages.create(
    model="claude-opus-4-5",
    max_tokens=1024,
    system=system_prompt,
    messages=[{"role": "user", "content": "Explain Docker in one paragraph."}],
)
print(message.content[0].text)
Gemini (Python SDK):

import os
from pathlib import Path
import google.generativeai as genai
genai.configure(api_key=os.environ["GEMINI_API_KEY"])
system_prompt = Path("prompt.md").read_text()
model = genai.GenerativeModel(
    model_name="gemini-1.5-flash",
    system_instruction=system_prompt,
)
response = model.generate_content("What is a neural network?")
print(response.text)
Ollama (local REST API):

# "ollama run" has no --system flag; send the system prompt via the local API instead
SYSTEM=$(cat prompt.md)
curl -s http://localhost:11434/api/chat -d "$(jq -n \
  --arg sys "$SYSTEM" \
  '{model: "llama3", stream: false,
    messages: [{role: "system", content: $sys},
               {role: "user", content: "What is a REST API?"}]}')" \
  | jq -r '.message.content'
Or via the Ollama Python SDK:
from pathlib import Path

import ollama  # pip install ollama

system_prompt = Path("prompt.md").read_text()
response = ollama.chat(
    model="llama3",
    messages=[
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": "What is a REST API?"},
    ],
)
print(response["message"]["content"])
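To make the prompt persistent for local use, you can also bake it into the model with a Modelfile, a standard Ollama feature. The model name llama3-normal is just an example, and this sketch assumes prompt.md contains no triple quotes:

# Generate a Modelfile that embeds prompt.md as the SYSTEM prompt
{ echo "FROM llama3"; printf 'SYSTEM """%s"""\n' "$(cat prompt.md)"; } > Modelfile
ollama create llama3-normal -f Modelfile
ollama run llama3-normal "What is a REST API?"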
A reusable shell function that injects the prompt automatically:
# Add to ~/.bashrc or ~/.zshrc
export TALK_NORMAL_PROMPT="$HOME/talk-normal/prompt.md"
asknormal() {
  local question="$*"
  local system
  system=$(cat "$TALK_NORMAL_PROMPT")
  curl -s https://api.openai.com/v1/chat/completions \
    -H "Authorization: Bearer $OPENAI_API_KEY" \
    -H "Content-Type: application/json" \
    -d "$(jq -n \
      --arg sys "$system" \
      --arg q "$question" \
      '{model: "gpt-4o-mini", messages: [{role: "system", content: $sys}, {role: "user", content: $q}]}'
    )" | jq -r '.choices[0].message.content'
}
Usage:
source ~/.bashrc
asknormal "What is the CAP theorem?"
Cursor (.cursorrules):

# Prepend talk-normal to your existing rules
cat talk-normal/prompt.md > .cursorrules
echo "" >> .cursorrules
echo "# Project-specific rules below" >> .cursorrules
cat your-existing-rules.md >> .cursorrules
OpenAI Assistants API:

import os
from pathlib import Path
from openai import OpenAI
client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])
system_prompt = Path("talk-normal/prompt.md").read_text()
assistant = client.beta.assistants.create(
    name="Normal Assistant",
    instructions=system_prompt,
    model="gpt-4o-mini",
)
print(f"Assistant ID: {assistant.id}")
talk-normal rules are additive — prepend them before your domain instructions:
from pathlib import Path
talk_normal = Path("talk-normal/prompt.md").read_text()
your_rules = """
You are a senior backend engineer. Answer questions about Python, Go, and distributed systems.
"""
combined_system = f"{talk_normal}\n\n---\n\n{your_rules}"
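The combined string then goes wherever a single system prompt would; for example, reusing the OpenAI client from the examples above:

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system", "content": combined_system},
        {"role": "user", "content": "How do goroutines differ from OS threads?"},
    ],
)
print(response.choices[0].message.content)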
Measuring verbosity:

def verbosity_ratio(before: str, after: str) -> float:
    """Return the fraction of the original length kept (lower = more concise)."""
    return len(after) / len(before)

before = "Python is a high-level, interpreted programming language known for its readability..."  # full answer: 1583 chars
after = "Python is a high-level, interpreted language known for readability..."  # full answer: 513 chars
print(f"{verbosity_ratio(before, after):.0%} of original length")  # → 32% on the full answers
A/B test with and without the prompt:

import os
from pathlib import Path
from openai import OpenAI
client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])
system_prompt = Path("talk-normal/prompt.md").read_text()
question = "What is Kubernetes?"
def ask(system: str | None, user: str) -> str:
    messages = []
    if system:
        messages.append({"role": "system", "content": system})
    messages.append({"role": "user", "content": user})
    resp = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
    return resp.choices[0].message.content
without = ask(None, question)
with_prompt = ask(system_prompt, question)
print(f"Without: {len(without)} chars")
print(f"With: {len(with_prompt)} chars")
print(f"Reduction: {(1 - len(with_prompt)/len(without)):.0%}")
# Pull latest rules from upstream
cd talk-normal
git pull origin main
# Check what changed
git log --oneline -10
cat CHANGELOG.md | head -50
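To see exactly which rules a pull changed, plain git is enough; HEAD@{1} is where the branch pointed before the pull:

git diff 'HEAD@{1}' -- prompt.md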
To contribute a rule:

1. git checkout -b rule/no-em-dashes
2. Edit prompt.md and add your rule in plain imperative English
3. Update CHANGELOG.md

# Quick before/after test for your new rule
export TALK_NORMAL_PROMPT="$PWD/prompt.md"  # point the helper at your branch's copy
asknormal "Test question"                   # uses your modified prompt (asknormal takes arguments, not stdin)
| Symptom | Fix |
|---|---|
| Model still uses bullet points | Make sure the prompt is in the system role, not prepended to the user message |
| Prompt too long for context window | Use a smaller model or trim older messages; prompt.md is intentionally compact |
| Ollama ignores the system prompt | Some quantized models follow instructions weakly; try mistral or llama3 |
| Rules conflict with your own system prompt | Put talk-normal rules first and add an "# Override:" comment before conflicting rules |
| Response is too terse / lost information | The prompt targets filler, not facts; file an issue with a reproduction case |
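For the conflict row, the # Override: marker is only a convention inside the combined prompt; a sketch reusing the variables from the combining example above:

combined_system = f"""{talk_normal}

# Override: unlike the bullet-point rule above, answers here MAY use bullet points,
# because this assistant produces checklists.

{your_rules}"""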
The main artifact is prompt.md: copy its text verbatim as the system message. No pip install, no build step.