By noizai
English | 简体中文
Central repository for managing Skills that make generated speech sound human and expressive ("vibe-talking").
```shell
# List skills from the GitHub repository
npx skills add NoizAI/skills --list --full-depth

# Install a specific skill from the GitHub repository
npx skills add NoizAI/skills --full-depth --skill tts -y

# Install from the GitHub repository
npx skills add NoizAI/skills

# Local development (run in this repo directory)
npx skills add . --list --full-depth
```
| Name | Description | Documentation | Run command |
|---|---|---|---|
| tts | Convert text into speech with Kokoro or Noiz: simple mode, timeline-aligned rendering, precise duration control, and reference-audio voice cloning. | SKILL.md | `npx skills add NoizAI/skills --full-depth --skill tts -y` |
| chat-with-anyone | Chat with any real person or fictional character in their own voice by automatically finding their speech online, extracting a clean reference sample, and generating audio replies. | SKILL.md | `npx skills add NoizAI/skills --full-depth --skill chat-with-anyone -y` |
| characteristic-voice | Make generated speech feel companion-like with fillers, emotional tuning, and preset speaking styles. | SKILL.md | `npx skills add NoizAI/skills --full-depth --skill characteristic-voice -y` |
| video-translation | Translate and dub videos from one language to another, replacing the original audio with TTS while keeping the video intact. | SKILL.md | `npx skills add NoizAI/skills --full-depth --skill video-translation -y` |
| daily-news-caster | Fetch the latest real-time news and automatically generate a dual-host conversational podcast with audio. | SKILL.md | `npx skills add NoizAI/skills --full-depth --skill daily-news-caster -y` |
| sound-fx | Generate any sound effect from a text description: animals, ambience, cartoon sounds, sci-fi, and more. One command, 1–30 seconds, WAV/MP3/FLAC output. | SKILL.md | `npx skills add NoizAI/skills --full-depth --skill sound-fx -y` |
For example, to try the characteristic-voice skill:

```shell
bash skills/characteristic-voice/scripts/speak.sh \
  --preset comfort -t "Hmm... I'm right here." -o comfort.wav
```
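The characteristic-voice skill documents five speaking-style presets (goodnight, morning, comfort, celebration, chatting). A minimal sketch for rendering a sample of each, assuming `speak.sh` accepts the same `--preset`/`-t`/`-o` flags as the example above:

```shell
# Render one sample per characteristic-voice preset.
# Preset names come from the skill's documentation; the flags mirror the
# example above. The script is only invoked if it exists, so this sketch
# is safe to run outside the repo checkout.
speak="skills/characteristic-voice/scripts/speak.sh"
rendered=0
for preset in goodnight morning comfort celebration chatting; do
  if [ -f "$speak" ]; then
    bash "$speak" --preset "$preset" -t "Hmm... I'm right here." -o "${preset}.wav"
  else
    echo "skipping ${preset}.wav (run inside the skills repo)"
  fi
  rendered=$((rendered + 1))
done
```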
Sample outputs for quick listening (MP4 for inline playback):
https://github.com/user-attachments/assets/e1e75371-49e2-4858-9993-428d999c3723
https://github.com/user-attachments/assets/d2e6472d-9edf-449d-a5ee-51ad7e19a861
https://github.com/user-attachments/assets/e8f78ffa-7f12-4475-b1af-09161b3ee01b
https://github.com/user-attachments/assets/0d3b8af9-2288-4a63-9246-2748ed232b0e
For the best experience (faster generation, emotion control, voice cloning), get your API key from developers.noiz.ai/api-keys:

```shell
bash skills/tts/scripts/tts.sh config --set-api-key YOUR_KEY
```

The key is persisted to `~/.noiz_api_key` and loaded automatically. Alternatively, pass `--backend kokoro` to use the local Kokoro backend.
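As a sketch of the key-resolution behavior described above (the `~/.noiz_api_key` path is from this README; the exact lookup order inside the skill scripts is an assumption), a wrapper could resolve the key like this:

```shell
# Resolve the Noiz API key: prefer an explicit environment variable,
# then fall back to the persisted key file. (Assumption: this mirrors
# how the skill scripts load the key; only the file path is documented.)
resolve_noiz_key() {
  key_file="${NOIZ_KEY_FILE:-$HOME/.noiz_api_key}"
  if [ -n "${NOIZ_API_KEY:-}" ]; then
    printf '%s\n' "$NOIZ_API_KEY"
  elif [ -f "$key_file" ]; then
    cat "$key_file"
  else
    # No key anywhere: callers can fall back to the local backend.
    return 1
  fi
}
```

When no key is configured the function fails, which is exactly the case where `--backend kokoro` keeps synthesis working locally.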
For skill authoring rules, directory conventions, and PR guidance, see CONTRIBUTING.md.