Guides setup and usage of AlayaRenderer: inverse rendering of RGB videos into G-buffers (albedo, normal, depth, roughness, metallic) and game editing that turns G-buffers plus a text prompt into stylized videos, using fine-tuned diffusion models.
npx claudepluginhub joshuarweaver/cascade-ai-ml-agents-misc-1 --plugin aradotso-trending-skills-37

This skill uses the workspace's default tool permissions.
> Skill by [ara.so](https://ara.so) — Daily 2026 Skills collection.
AlayaRenderer is a two-stage framework for high-quality video rendering: an Inverse Renderer that decomposes RGB video into G-buffers, and a Game Editing model that re-renders those G-buffers into a stylized video guided by a text prompt. Clone the repository with its submodules:
git clone --recurse-submodules https://github.com/ShandaAI/AlayaRenderer.git
cd AlayaRenderer
Important: use --recurse-submodules. DiffSynth-Studio is a git submodule required for Game Editing.
The two models have conflicting dependencies. Use separate environments:
# Environment 1: Inverse Renderer
conda create -n inverse_renderer python=3.10 -y
conda activate inverse_renderer
cd inverse_renderer
# Follow inverse_renderer/ instructions for Cosmos-Transfer1 setup
# Environment 2: Game Editing
conda create -n game_editing python=3.10 -y
conda activate game_editing
cd game_editing
# Follow DiffSynth-Studio setup instructions
| Model | Base Model | Size | HuggingFace Link |
|---|---|---|---|
| Inverse Renderer | Cosmos-Transfer1-DiffusionRenderer 7B | ~7B params | Brian9999/world_inverse_renderer |
| Game Editing | Wan2.1 1.3B | ~1.3B params | Brian9999/stylerenderer |
# Inverse Renderer — replace the base checkpoint
huggingface-cli download Brian9999/world_inverse_renderer \
--local-dir inverse_renderer/checkpoints/Diffusion_Renderer_Inverse_Cosmos_7B
# Game Editing — place in game_editing models directory
mkdir -p game_editing/models/train/Wan2.1-T2V-1.3B_gbuffer
huggingface-cli download Brian9999/stylerenderer \
--local-dir game_editing/models/train/Wan2.1-T2V-1.3B_gbuffer
The inverse renderer decomposes an RGB video into 5 G-buffer channels: albedo, normal, depth, roughness, metallic.
cd inverse_renderer
# Follow Cosmos-Transfer1-DiffusionRenderer environment setup
# Ensure checkpoint is at:
# inverse_renderer/checkpoints/Diffusion_Renderer_Inverse_Cosmos_7B/
Refer to the inverse_renderer/ subdirectory for the full inference script. The general pattern follows Cosmos-Transfer1-DiffusionRenderer conventions:
# inverse_renderer/run_inverse.py (typical pattern; see the repo script for the real entry point)
# Input: path to an RGB video
input_video = "path/to/rgb_video.mp4"
output_dir = "outputs/gbuffers/"
# The model writes 5 synchronized channels:
# - albedo    (diffuse color)
# - normal    (surface orientation)
# - depth     (scene geometry)
# - roughness (surface roughness)
# - metallic  (metallic property)
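Whatever script produces the channels, the outputs need to land in one subdirectory per channel so the Game Editing stage can consume them. A minimal sketch, assuming PNG frames per channel (the helper name is illustrative, not part of the repo):

```python
from pathlib import Path

# The five G-buffer channels produced by the inverse renderer.
CHANNELS = ["albedo", "normal", "depth", "roughness", "metallic"]

def make_gbuffer_dirs(output_dir: str) -> list[Path]:
    """Create one subdirectory per G-buffer channel under output_dir."""
    root = Path(output_dir)
    dirs = [root / channel for channel in CHANNELS]
    for d in dirs:
        d.mkdir(parents=True, exist_ok=True)
    return dirs
```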
cd game_editing
CUDA_VISIBLE_DEVICES=0 python \
examples/wanvideo/model_inference/inference_gbuffer_caption.py \
--checkpoint models/train/Wan2.1-T2V-1.3B_gbuffer/model.safetensors \
--gpu 0 \
--style snowy_winter \
--prompt "the scene is set in a frozen, snow-covered environment under cold, pale winter light with falling snowflakes, creating a silent and ethereal winter wonderland atmosphere." \
--gbuffer_dir test_dataset \
--save_dir outputs/ \
--num_frames 81 \
--height 480 \
--width 832
| Parameter | Description | Example |
|---|---|---|
| `--checkpoint` | Path to fine-tuned `.safetensors` weights | `models/train/Wan2.1-T2V-1.3B_gbuffer/model.safetensors` |
| `--gpu` | GPU device index | `0` |
| `--style` | Named style preset | `snowy_winter`, `rainy`, `night`, `sunset`, `fantasy`, `foggy` |
| `--prompt` | Text description of target lighting/atmosphere | See examples below |
| `--gbuffer_dir` | Directory containing G-buffer input frames/video | `test_dataset` |
| `--save_dir` | Output directory for rendered video | `outputs/` |
| `--num_frames` | Number of frames to generate (must be 8n+1) | `81` |
| `--height` | Output height in pixels | `480` |
| `--width` | Output width in pixels | `832` |
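When sweeping styles or prompts, it is convenient to assemble the command line programmatically from these parameters. A hedged sketch (the builder function is illustrative; only the flags and defaults come from the table above):

```python
def game_editing_cmd(checkpoint: str, style: str, prompt: str,
                     gbuffer_dir: str, save_dir: str,
                     num_frames: int = 81, height: int = 480,
                     width: int = 832, gpu: int = 0) -> list[str]:
    """Assemble the Game Editing inference invocation as an argv list."""
    return [
        "python", "examples/wanvideo/model_inference/inference_gbuffer_caption.py",
        "--checkpoint", checkpoint,
        "--gpu", str(gpu),
        "--style", style,
        "--prompt", prompt,
        "--gbuffer_dir", gbuffer_dir,
        "--save_dir", save_dir,
        "--num_frames", str(num_frames),
        "--height", str(height),
        "--width", str(width),
    ]
```

The list can be passed to `subprocess.run(cmd, check=True)` with `CUDA_VISIBLE_DEVICES` set in the environment.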
test_dataset/
├── albedo/
│ ├── frame_0000.png
│ ├── frame_0001.png
│ └── ...
├── normal/
│ ├── frame_0000.png
│ └── ...
├── depth/
│ ├── frame_0000.png
│ └── ...
├── roughness/
│ ├── frame_0000.png
│ └── ...
└── metallic/
├── frame_0000.png
└── ...
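Before launching inference, it is worth verifying that every channel directory contains the same set of frames; mismatched frame counts across channels are a likely source of confusing failures. A small sketch against the layout above (the function name is illustrative):

```python
from pathlib import Path

# The five channel subdirectories expected under the G-buffer root.
CHANNELS = ["albedo", "normal", "depth", "roughness", "metallic"]

def check_gbuffer_dataset(root: str) -> int:
    """Return the frame count if all five channels hold the same frames."""
    base = Path(root)
    reference = None
    for channel in CHANNELS:
        d = base / channel
        if not d.is_dir():
            raise ValueError(f"missing channel directory: {d}")
        names = sorted(p.name for p in d.glob("*.png"))
        if reference is None:
            reference = names
        elif names != reference:
            raise ValueError(f"frame mismatch in {channel}")
    return len(reference)
```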
# Cyberpunk night scene
--style night \
--prompt "neon-lit urban environment at night with rain-slicked streets reflecting colorful neon signs, creating a cyberpunk noir atmosphere"
# Golden hour / sunset
--style sunset \
--prompt "warm golden hour lighting with long shadows and a glowing amber sky, soft cinematic atmosphere"
# Rainy urban
--style rainy \
--prompt "overcast rainy day with wet surfaces, soft diffuse lighting, and atmospheric fog creating a moody cinematic look"
# Fantasy / stylized
--style fantasy \
--prompt "magical forest environment with bioluminescent plants, ethereal blue-green lighting, and mystical particle effects"
# Foggy morning
--style foggy \
--prompt "early morning dense fog with soft diffused light creating a mysterious and quiet atmosphere"
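For batch runs it can help to keep the preset/prompt pairs from the examples above in one mapping; a sketch (the dict is just a convenience, not a repo API):

```python
# Style presets paired with the example prompts from this guide.
STYLE_PROMPTS = {
    "night": "neon-lit urban environment at night with rain-slicked streets "
             "reflecting colorful neon signs, creating a cyberpunk noir atmosphere",
    "sunset": "warm golden hour lighting with long shadows and a glowing amber sky, "
              "soft cinematic atmosphere",
    "rainy": "overcast rainy day with wet surfaces, soft diffuse lighting, and "
             "atmospheric fog creating a moody cinematic look",
    "fantasy": "magical forest environment with bioluminescent plants, ethereal "
               "blue-green lighting, and mystical particle effects",
    "foggy": "early morning dense fog with soft diffused light creating a "
             "mysterious and quiet atmosphere",
}
```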
# Run on specific GPU
CUDA_VISIBLE_DEVICES=1 python \
examples/wanvideo/model_inference/inference_gbuffer_caption.py \
--checkpoint models/train/Wan2.1-T2V-1.3B_gbuffer/model.safetensors \
--gpu 1 \
--style rainy \
--prompt "heavy rainfall with dark storm clouds and dramatic lightning in the distance" \
--gbuffer_dir my_gbuffers \
--save_dir outputs/rainy_scene \
--num_frames 81 --height 480 --width 832
# Step 1: Extract G-buffers from RGB video (Inverse Renderer env)
conda activate inverse_renderer
cd inverse_renderer
python run_inverse.py \
--input path/to/gameplay_video.mp4 \
--output_dir ../game_editing/test_dataset/
# Step 2: Apply game editing style (Game Editing env)
conda activate game_editing
cd ../game_editing
CUDA_VISIBLE_DEVICES=0 python \
examples/wanvideo/model_inference/inference_gbuffer_caption.py \
--checkpoint models/train/Wan2.1-T2V-1.3B_gbuffer/model.safetensors \
--gpu 0 \
--style snowy_winter \
--prompt "frozen tundra with blizzard conditions, pale blue-white lighting and drifting snow" \
--gbuffer_dir test_dataset \
--save_dir outputs/final_render \
--num_frames 81 --height 480 --width 832
| Demo | URL |
|---|---|
| Game Editing Demo | https://huggingface.co/spaces/Brian9999/game-editing |
| Project Page | https://alaya-studio.github.io/renderer/ |
The AlayaRenderer dataset is not yet available; its release is pending.
RGB Video Input
│
▼
┌─────────────────────────────────────┐
│ Inverse Renderer │
│ (Cosmos-Transfer1 7B fine-tuned) │
│ RGB → [albedo, normal, depth, │
│ roughness, metallic] │
└─────────────────┬───────────────────┘
│ G-buffers
▼
┌─────────────────────────────────────┐
│ Game Editing │
│ (Wan2.1 1.3B fine-tuned) │
│ G-buffers + Text Prompt │
│ → Stylized RGB Video │
└─────────────────────────────────────┘
# If cloned without --recurse-submodules:
git submodule update --init --recursive
If you run out of GPU memory, reduce `--num_frames` (try 41 instead of 81), lower the resolution (`--height 320 --width 576`), or pin a single GPU with `CUDA_VISIBLE_DEVICES=0`.

`num_frames` must follow the 8n+1 pattern. Valid values: 9, 17, 25, 33, 41, 49, 57, 65, 73, 81.
# Valid
--num_frames 81 # 8*10 + 1 ✓
--num_frames 41 # 8*5 + 1 ✓
# Invalid
--num_frames 80 # ✗
--num_frames 60 # ✗
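The 8n+1 constraint is easy to check programmatically before launching a long job; a small sketch, using the valid range listed above:

```python
def valid_num_frames(n: int) -> bool:
    """True when n fits the 8n+1 pattern in the accepted range 9..81."""
    return 9 <= n <= 81 and (n - 1) % 8 == 0

# All accepted values: 9, 17, 25, 33, 41, 49, 57, 65, 73, 81
VALID_NUM_FRAMES = [8 * k + 1 for k in range(1, 11)]
```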
# Verify checkpoint placement
ls game_editing/models/train/Wan2.1-T2V-1.3B_gbuffer/model.safetensors
ls inverse_renderer/checkpoints/Diffusion_Renderer_Inverse_Cosmos_7B/
Always use the two separate conda environments (inverse_renderer and game_editing). Do not install both models' dependencies in one environment.
@article{huang2026generativeworldrenderer,
title={Generative World Renderer},
author={Zheng-Hui Huang and Zhixiang Wang and Jiaming Tan and Ruihan Yu and Yidan Zhang and Bo Zheng and Yu-Lun Liu and Yung-Yu Chuang and Kaipeng Zhang},
journal={arXiv preprint arXiv:2604.02329},
year={2026}
}