From speak-pack
Create your first Speak AI tutoring session with pronunciation feedback. Use when starting a new Speak integration, testing your setup, or learning basic language learning API patterns. Trigger with phrases like "speak hello world", "speak example", "speak quick start", "first speak lesson".
npx claudepluginhub flight505/skill-forge --plugin speak-pack
Create your first AI tutoring session with Speak. Demonstrates conversation practice, pronunciation assessment, and real-time feedback using GPT-4o-powered tutoring.
Prerequisite: complete the speak-install-auth setup first.

import { SpeakClient } from '@speak/language-sdk';

const client = new SpeakClient({
  apiKey: process.env.SPEAK_API_KEY!,
  appId: process.env.SPEAK_APP_ID!,
  language: 'es',
});
// Start a beginner Spanish lesson
const session = await client.startConversation({
  scenario: 'greetings',
  language: 'es',
  level: 'beginner',
  nativeLanguage: 'en',
});

console.log('Session ID:', session.id);
console.log('AI Tutor:', session.firstPrompt.text);
// Output: "¡Hola! Bienvenido a tu lección de español. ¿Cómo te llamas?"
console.log('Audio URL:', session.firstPrompt.audioUrl);
// Submit a text response (or an audio file for pronunciation scoring)
const turn = await client.sendTurn(session.id, {
  text: 'Hola, me llamo Juan. Mucho gusto.',
  // Or: audioPath: './recordings/response.wav'
});

console.log('Tutor response:', turn.tutorText);
console.log('Pronunciation score:', turn.pronunciationScore); // 0-100
console.log('Grammar corrections:', turn.corrections);
// Output: [{original: "me llamo", suggestion: null, correct: true}]
console.log('Vocabulary notes:', turn.vocabularyNotes);
// Assess pronunciation of a specific phrase
const assessment = await client.assessPronunciation({
  audioPath: './recordings/hola-como-estas.wav',
  targetText: 'Hola, ¿cómo estás?',
  language: 'es',
  detailLevel: 'phoneme', // 'word' or 'phoneme'
});

console.log(`Overall score: ${assessment.score}/100`);
for (const word of assessment.words) {
  console.log(`  "${word.text}": ${word.score}/100`);
  if (word.phonemes) {
    for (const p of word.phonemes.filter(p => p.score < 70)) {
      console.log(`    Weak phoneme: ${p.symbol} (${p.score}) - ${p.suggestion}`);
    }
  }
}
// End the session and retrieve a summary
const summary = await client.endSession(session.id);

console.log('Session Summary:');
console.log(`  Duration: ${summary.durationMinutes} min`);
console.log(`  Turns: ${summary.totalTurns}`);
console.log(`  Pronunciation: ${summary.avgPronunciationScore}/100`);
console.log(`  Grammar: ${summary.grammarAccuracy}%`);
console.log(`  New vocabulary: ${summary.newWords.join(', ')}`);
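The weak-phoneme loop above can be factored into a small helper that turns an assessment into a practice list. A minimal sketch: the interfaces are inferred from the fields used above, not the SDK's published types, and the sample scores are made up.

```typescript
// Shapes inferred from the assessment fields used above; not the SDK's published types.
interface PhonemeScore {
  symbol: string;
  score: number;
  suggestion?: string;
}

interface WordScore {
  text: string;
  score: number;
  phonemes?: PhonemeScore[];
}

// Collect words whose word-level score, or any phoneme score, falls below a threshold.
function wordsToPractice(words: WordScore[], threshold = 70): string[] {
  return words
    .filter(w => w.score < threshold || (w.phonemes ?? []).some(p => p.score < threshold))
    .map(w => w.text);
}

// Example with made-up scores:
const sample: WordScore[] = [
  { text: 'Hola', score: 92 },
  { text: 'como', score: 85, phonemes: [{ symbol: 'o', score: 60, suggestion: 'round the lips more' }] },
  { text: 'estas', score: 55 },
];
console.log(wordsToPractice(sample)); // [ 'como', 'estas' ]
```

Feeding `assessment.words` into a helper like this keeps the drill logic out of the rendering loop.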
| Error | Cause | Solution |
|---|---|---|
| Session timeout | Exceeded max duration | Start a new session |
| Audio format invalid | Wrong codec or sample rate | Convert to 16 kHz mono WAV |
| Language not supported | Invalid language code | Use supported codes (es, ko, ja, fr, de) |
| Low pronunciation score | Background noise | Record in a quiet environment |
| Rate limit exceeded | Too many requests | Wait and retry with backoff |
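The last row recommends retrying with backoff. A minimal sketch of such a wrapper: withBackoff is an illustrative helper, not part of the Speak SDK.

```typescript
// Retry an async operation with exponential backoff.
// Illustrative helper -- not part of the Speak SDK.
async function withBackoff<T>(
  fn: () => Promise<T>,
  maxRetries = 3,
  baseDelayMs = 500,
): Promise<T> {
  let lastError: unknown;
  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
      if (attempt === maxRetries) break;
      // Exponential delay: 500 ms, 1 s, 2 s, ...
      await new Promise((resolve) => setTimeout(resolve, baseDelayMs * 2 ** attempt));
    }
  }
  throw lastError;
}

// Example: an operation that fails twice before succeeding.
let calls = 0;
const flaky = async () => {
  calls++;
  if (calls < 3) throw new Error('Rate limit exceeded');
  return 'ok';
};

const result = await withBackoff(flaky, 5, 10);
console.log(result, calls); // ok 3
```

Any client call can be wrapped the same way, e.g. withBackoff(() => client.sendTurn(session.id, { text })), to absorb transient rate-limit errors.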
Proceed to speak-local-dev-loop for development workflow setup.
Text-only test: Skip audio and use text responses to test the conversation flow before integrating microphone input.
Multi-language: Start sessions in different languages by changing the language parameter to ko (Korean), ja (Japanese), or fr (French).
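To make the multi-language note concrete, the supported codes from the error table can be captured in a small validation guard. A sketch: SUPPORTED_LANGUAGES and assertLanguage are illustrative names, not SDK APIs; the code list comes from the table above.

```typescript
// Codes the error table lists as supported; illustrative guard, not an SDK API.
const SUPPORTED_LANGUAGES = {
  es: 'Spanish',
  ko: 'Korean',
  ja: 'Japanese',
  fr: 'French',
  de: 'German',
} as const;

type LanguageCode = keyof typeof SUPPORTED_LANGUAGES; // 'es' | 'ko' | 'ja' | 'fr' | 'de'

// Validate a user-supplied code before passing it to the SDK,
// so "Language not supported" surfaces early with a clear message.
function assertLanguage(code: string): LanguageCode {
  if (!(code in SUPPORTED_LANGUAGES)) {
    const valid = Object.keys(SUPPORTED_LANGUAGES).join(', ');
    throw new Error(`Language not supported: ${code} (use one of: ${valid})`);
  }
  return code as LanguageCode;
}

console.log(assertLanguage('ko')); // ko
```

Passing the validated code into startConversation's language parameter then switches the tutoring language per session.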