Wrapper around gemini CLI for non-interactive runs (prompt via args/file/stdin) with resume support. Outputs session_id for conversation continuation.
From agentic-coding-tools:

```shell
npx claudepluginhub thesylvester/agentic-coding-tools --plugin read-transcript
```

This skill is limited to using the following tools:

- `scripts/gemini-agent`
Thin wrapper around the gemini CLI for non-interactive use:

- Prompt via args, `PROMPT_FILE`, or stdin
- Model via `--model` or the `GEMINI_MODEL` env var
- Resume via `--resume` or the `GEMINI_SESSION` env var
- Uses cached OAuth creds (`~/.gemini/oauth_creds.json`) for authentication

Gemini CLI v0.20.0+ required (for `--resume` support):
```shell
npm install -g @google/gemini-cli@latest
gemini --version  # should be 0.20.0 or higher
```
Authentication (one of these):
- Run `gemini` interactively once to complete OAuth login (creates `~/.gemini/oauth_creds.json`)
- Set the `GEMINI_API_KEY` environment variable
- Set `GOOGLE_GENAI_USE_GCA=true` with existing OAuth creds

Choose ONE method based on your situation:
| Situation | Method | Example |
|---|---|---|
| Short inline prompt | Args | `gemini-agent "Explain X"` |
| Prompt already exists in a file | `PROMPT_FILE` | `PROMPT_FILE=task.md gemini-agent` |
| Building prompt dynamically | Heredoc | `gemini-agent <<'EOF'`<br>`your prompt`<br>`EOF` |
| Piping from another command | Stdin | `cat file.md \| gemini-agent` |
Key concept: `PROMPT_FILE` means "read this file's contents AS the prompt" - the entire file becomes your prompt.
```shell
gemini-agent "Explain how async/await works in JavaScript"

gemini-agent "Review this code for bugs: $(cat snippet.js)"
```
```shell
# The contents of task.md become the prompt
PROMPT_FILE=task.md gemini-agent

# With model selection
PROMPT_FILE=analysis-request.md GEMINI_MODEL=gemini-2.0-flash gemini-agent
```
```shell
gemini-agent <<'EOF'
Context: We're building a VS Code extension.
Question: What's the best way to handle webview state persistence?

Please provide:
1. Recommended approach
2. Code example
3. Common pitfalls
EOF
```
```shell
# Pipe file contents
cat requirements.md | gemini-agent

# Pipe command output
git diff HEAD~3 | gemini-agent "Review these changes:"
```
Don't combine input methods - use exactly ONE:
```shell
# WRONG: PROMPT_FILE with heredoc
PROMPT_FILE=/dev/stdin gemini-agent <<'EOF'
prompt here
EOF

# CORRECT: Just use heredoc directly
gemini-agent <<'EOF'
prompt here
EOF

# WRONG: PROMPT_FILE pointing to stdin
PROMPT_FILE=/dev/stdin gemini-agent

# CORRECT: Use pipe or heredoc
cat myfile.txt | gemini-agent

# WRONG: Mixing PROMPT_FILE with piped input
echo "extra" | PROMPT_FILE=task.md gemini-agent
# (stdin is ignored, only PROMPT_FILE is used - you'll get a warning)

# CORRECT: Choose one source
PROMPT_FILE=task.md gemini-agent
# OR
cat task.md extra.txt | gemini-agent
```
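The selection rules above can be sketched as a small routine. This is an illustrative sketch, not the wrapper's actual source; the function name `resolve_prompt` and the exact warning text are assumptions.

```shell
# Illustrative sketch (not the wrapper's actual source): pick exactly ONE
# prompt source, warning when PROMPT_FILE and piped stdin are both supplied.
resolve_prompt() {
  if [ -n "${PROMPT_FILE:-}" ]; then
    # PROMPT_FILE wins; any piped stdin is ignored with a warning
    [ ! -t 0 ] && echo "warning: stdin ignored, using PROMPT_FILE" >&2
    cat "$PROMPT_FILE"
  elif [ $# -gt 0 ]; then
    # Inline args become the prompt
    printf '%s\n' "$*"
  else
    # Fall back to stdin (pipe or heredoc)
    cat
  fi
}
```

For example, `echo "extra" | PROMPT_FILE=task.md resolve_prompt` prints the contents of task.md and emits the warning on stderr, matching the "stdin is ignored" behavior described above.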
Capture the `[session_id: ...]` from the output to continue conversations:
```shell
# Resume most recent session
gemini-agent --resume latest "Follow-up question"

# Resume by UUID (from previous output)
gemini-agent --resume c9283e30-910e-442c-b567-48ac9e1fab03 "Continue"

# Resume by index (use `gemini --list-sessions` to see available)
gemini-agent --resume 3 "Continue from session 3"

# Via environment variable
GEMINI_SESSION=c9283e30-910e-442c-b567-48ac9e1fab03 gemini-agent "Follow-up"
```
Model selection:

```shell
# Via flag
gemini-agent --model gemini-2.0-flash "Your prompt"

# Via environment variable
GEMINI_MODEL=gemini-2.0-flash gemini-agent "Your prompt"
```
| Variable | Description |
|---|---|
| `PROMPT_FILE` | Path to file whose contents become the prompt |
| `GEMINI_MODEL` | Model to use (passed as `--model`) |
| `GEMINI_SESSION` | Session to resume (`latest`, UUID, or index number) |
| `GEMINI_API_KEY` | API key authentication |
| `GOOGLE_GENAI_USE_GCA` | Set to `true` for Gemini Code Assist auth |
Returns plain text response followed by the session ID:
```
<response text>

[session_id: <uuid>]
```
Capture the session ID to continue the conversation with `--resume <uuid>`.
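For scripted use, the trailing `[session_id: ...]` line can be parsed out of the output. A minimal sketch; the helper name `extract_session_id` is hypothetical, not part of the wrapper:

```shell
# Hypothetical helper: pull the session id out of gemini-agent's output
# so a follow-up call can resume the same conversation.
extract_session_id() {
  sed -n 's/^\[session_id: \(.*\)\]$/\1/p'
}

# Usage sketch (assumes gemini-agent is on PATH):
#   out=$(gemini-agent "Summarize README.md")
#   sid=$(printf '%s\n' "$out" | extract_session_id)
#   gemini-agent --resume "$sid" "Now list the open questions"
```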
Diagnostics are shown automatically on any failure.
You can also run diagnostics manually:
```shell
gemini-agent --version
```
- Run `gemini` interactively once to set up OAuth, or set `GEMINI_API_KEY`
- Install with `npm install -g @google/gemini-cli` (run `gemini --version` to check)
- The wrapper runs `gemini -y -o json -p <prompt>` in non-interactive YOLO mode with JSON output to capture the session_id
- Use full UUIDs (e.g. `c9283e30-910e-442c-b567-48ac9e1fab03`), not partial UUIDs
- Run `gemini --list-sessions` to see available sessions for the current project

For reference, session files are stored at:
- `~/.gemini/tmp/{project_hash}/chats/session-*.json`
- `{project_hash}` is the SHA256 hash of the working directory path
- Prefer `gemini --list-sessions` to list available sessions without needing to navigate the file structure
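If you do need to locate the files directly, the directory can be derived by hashing the project path. A sketch under the assumption that `{project_hash}` is the plain SHA256 hex digest of the absolute path with no trailing newline; verify against your own `~/.gemini/tmp` layout before relying on it:

```shell
# Assumption: project_hash = SHA256 hex digest of the absolute working-
# directory path, no trailing newline. Requires coreutils sha256sum
# (on macOS, substitute `shasum -a 256`).
project_hash=$(printf '%s' "$PWD" | sha256sum | cut -d' ' -f1)
echo "$HOME/.gemini/tmp/$project_hash/chats"
```

The printed directory is where this project's `session-*.json` files would live, though `gemini --list-sessions` remains the supported way to enumerate them.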