Integrates Google Gemini API into Node.js, Python, and browser projects with multimodal inputs, streaming, function calling, model selection, and production practices.
This skill guides AI agents through integrating Google Gemini API into applications — from basic text generation to advanced multimodal, function calling, and streaming use cases. It covers the full Gemini SDK lifecycle with production-grade patterns.
Node.js / TypeScript:

```bash
npm install @google/generative-ai
```

Python:

```bash
pip install google-generativeai
```

Set your API key securely:

```bash
export GEMINI_API_KEY="your-api-key-here"
```
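Before constructing a client, it's worth failing fast if the key is absent instead of letting the first request die with API_KEY_INVALID. A minimal sketch, assuming the key lives in the environment; `requireApiKey` is a hypothetical helper name, not an SDK function:

```javascript
// Hypothetical helper: fail fast when GEMINI_API_KEY is missing or blank,
// rather than letting the client error later with API_KEY_INVALID.
function requireApiKey(env = process.env) {
  const key = env.GEMINI_API_KEY;
  if (!key || key.trim() === "") {
    throw new Error("GEMINI_API_KEY is not set");
  }
  return key;
}
```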
Node.js:

```js
import { GoogleGenerativeAI } from "@google/generative-ai";

const genAI = new GoogleGenerativeAI(process.env.GEMINI_API_KEY);
const model = genAI.getGenerativeModel({ model: "gemini-1.5-flash" });

const result = await model.generateContent("Explain async/await in JavaScript");
console.log(result.response.text());
```
Python:

```python
import os

import google.generativeai as genai

genai.configure(api_key=os.environ["GEMINI_API_KEY"])
model = genai.GenerativeModel("gemini-1.5-flash")
response = model.generate_content("Explain async/await in JavaScript")
print(response.text)
```
Streaming — print tokens as they arrive instead of waiting for the full response:

```js
const result = await model.generateContentStream("Write a detailed blog post about AI");
for await (const chunk of result.stream) {
  process.stdout.write(chunk.text());
}
```
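When you need the full text as well (for logging or caching), the chunks compose back into the complete reply. A small sketch, assuming each chunk exposes a `text()` accessor as in the loop above; `fakeStream` is a stand-in for `result.stream` so the pattern is runnable offline:

```javascript
// Accumulate streamed chunks into the full reply text.
// Works with any async iterable whose items expose text(), like the SDK's chunks.
async function collectStream(stream) {
  let full = "";
  for await (const chunk of stream) {
    full += chunk.text();
  }
  return full;
}

// Stand-in for result.stream, used here so the sketch runs without an API call.
async function* fakeStream() {
  yield { text: () => "Hello, " };
  yield { text: () => "world" };
}
```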
Multimodal input — pass an image alongside the text prompt:

```js
import fs from "fs";

const imageData = fs.readFileSync("screenshot.png");
const imagePart = {
  inlineData: {
    data: imageData.toString("base64"),
    mimeType: "image/png",
  },
};

const result = await model.generateContent(["Describe this image:", imagePart]);
console.log(result.response.text());
```
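If you attach images in several places, the part-building boilerplate can be factored out. A sketch of such a helper — `toImagePart` is a hypothetical name, but the returned shape matches the `inlineData` part `generateContent` accepts above:

```javascript
// Hypothetical helper: wrap raw image bytes in the inlineData part shape
// that generateContent accepts alongside text prompts.
function toImagePart(buffer, mimeType) {
  return {
    inlineData: {
      data: buffer.toString("base64"),
      mimeType,
    },
  };
}
```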
Function calling — declare tools the model can request, then execute them yourself:

```js
const tools = [{
  functionDeclarations: [{
    name: "get_weather",
    description: "Get current weather for a city",
    parameters: {
      type: "OBJECT",
      properties: {
        city: { type: "STRING", description: "City name" },
      },
      required: ["city"],
    },
  }],
}];

const model = genAI.getGenerativeModel({ model: "gemini-1.5-pro", tools });
const result = await model.generateContent("What's the weather in Mumbai?");

const call = result.response.functionCalls()?.[0];
if (call) {
  // Execute the actual function
  const weatherData = await getWeather(call.args.city);
  // Send result back to model
}
```
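With more than one tool, the execute-and-reply step generalizes to a dispatch table. A sketch under those assumptions — `dispatchCall` and the `handlers` map are illustrative, not SDK APIs, while the returned `functionResponse` part shape is what you send back to the model:

```javascript
// Illustrative dispatcher: route a functionCall from the model to a local
// handler, then wrap the result as a functionResponse part for the reply turn.
async function dispatchCall(call, handlers) {
  const handler = handlers[call.name];
  if (!handler) {
    throw new Error(`No handler registered for ${call.name}`);
  }
  const response = await handler(call.args);
  return { functionResponse: { name: call.name, response } };
}
```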
Multi-turn chat — seed the conversation with history, then send follow-ups:

```js
const chat = model.startChat({
  history: [
    { role: "user", parts: [{ text: "You are a helpful coding assistant." }] },
    { role: "model", parts: [{ text: "Sure! I'm ready to help with code." }] },
  ],
});

const response = await chat.sendMessage("How do I reverse a string in Python?");
console.log(response.response.text());
```
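Long-running chats can grow past what you want to pay to resend each turn. One illustrative approach, assuming the history array shape shown above: trim to a rough character budget, dropping the oldest user/model pair first. `trimHistory` is a hypothetical helper, not an SDK feature:

```javascript
// Hypothetical helper: cap history by a rough character budget before
// startChat, evicting the oldest user/model pair first.
function trimHistory(history, maxChars) {
  const size = (turn) =>
    turn.parts.reduce((n, p) => n + (p.text || "").length, 0);
  const trimmed = history.slice();
  let total = trimmed.reduce((n, t) => n + size(t), 0);
  while (total > maxChars && trimmed.length > 2) {
    // Drop the oldest user turn and its model reply together.
    total -= size(trimmed.shift()) + size(trimmed.shift());
  }
  return trimmed;
}
```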
| Model | Best For | Speed | Cost |
|---|---|---|---|
| gemini-1.5-flash | High-throughput, cost-sensitive tasks | Fast | Low |
| gemini-1.5-pro | Complex reasoning, long context | Medium | Medium |
| gemini-2.0-flash | Latest fast model, multimodal | Very Fast | Low |
| gemini-2.0-pro | Most capable, advanced tasks | Slow | High |
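The table's trade-offs can be encoded as a default chooser. A minimal sketch — `pickModel` and its flags are hypothetical names; the mapping just restates the rows above:

```javascript
// Illustrative chooser restating the model table: default to the cheap fast
// model, escalate to a pro model only when the task needs deep reasoning.
function pickModel({ complexReasoning = false, needLatest = false } = {}) {
  if (complexReasoning) {
    return needLatest ? "gemini-2.0-pro" : "gemini-1.5-pro";
  }
  return needLatest ? "gemini-2.0-flash" : "gemini-1.5-flash";
}
```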
Best practices:

- Use gemini-1.5-flash for most tasks — it's fast and cost-effective.
- Use systemInstruction to set persistent model behavior.
- Don't use gemini-pro for simple tasks — Flash is cheaper and faster.

Error handling — distinguish retryable errors from bad requests:

```js
async function generate(prompt, retryCount = 0) {
  try {
    const result = await model.generateContent(prompt);
    return result.response.text();
  } catch (error) {
    if (error.status === 429) {
      // Rate limited — wait and retry with exponential backoff
      await new Promise(r => setTimeout(r, 2 ** retryCount * 1000));
      return generate(prompt, retryCount + 1);
    } else if (error.status === 400) {
      // Invalid request — check prompt or parameters
      console.error("Invalid request:", error.message);
    } else {
      throw error;
    }
  }
}
```
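Uncapped exponential backoff grows quickly, so in practice the delay is usually clamped. A sketch of the delay calculation on its own — the 1 s base matches the `2 ** retryCount * 1000` expression used in the error handling, while the 30 s cap is an assumed ceiling, not an API requirement:

```javascript
// Capped exponential backoff for retryable errors (e.g. HTTP 429).
// baseMs matches the 1 s base used in the error-handling example;
// capMs is an assumed ceiling so retries never sleep unboundedly long.
function backoffDelay(retryCount, baseMs = 1000, capMs = 30000) {
  return Math.min(capMs, 2 ** retryCount * baseMs);
}
```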
Problem: API_KEY_INVALID error
Solution: Ensure GEMINI_API_KEY environment variable is set and the key is active in Google AI Studio.
Problem: Response blocked by safety filters
Solution: Check result.response.promptFeedback.blockReason and adjust your prompt or safety settings.
Problem: Slow response times
Solution: Switch to gemini-1.5-flash and enable streaming. Consider caching repeated prompts.
Problem: RESOURCE_EXHAUSTED (quota exceeded)
Solution: Check your quota in Google Cloud Console. Implement request queuing and exponential backoff.
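Request queuing for RESOURCE_EXHAUSTED can be as simple as a concurrency limiter in front of the SDK calls. A minimal sketch — `createLimiter` is illustrative; any pooling or queue library achieves the same effect:

```javascript
// Illustrative concurrency limiter: at most maxConcurrent tasks run at once;
// extra tasks wait in a FIFO queue, which keeps bursts under quota.
function createLimiter(maxConcurrent) {
  let active = 0;
  const queue = [];
  const next = () => {
    if (active >= maxConcurrent || queue.length === 0) return;
    active++;
    const { task, resolve, reject } = queue.shift();
    task()
      .then(resolve, reject)
      .finally(() => {
        active--;
        next();
      });
  };
  return (task) =>
    new Promise((resolve, reject) => {
      queue.push({ task, resolve, reject });
      next();
    });
}
```

Wrap each Gemini call as `limit(() => model.generateContent(prompt))` to keep at most `maxConcurrent` requests in flight.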