Build amplifier-foundation modules using "bricks and studs" architecture. Covers tool, hook, provider, context, and orchestrator modules with testing, publishing, and best practices.
This skill inherits all available tools. When active, it can use any tool Claude has access to.
References: references/API_PATTERNS.md, references/CONTRIBUTING.md, references/DEVELOPMENT_WORKFLOW.md, references/EXAMPLES.md, references/MODULAR_BUILDER.md, references/MODULE_TYPES.md, references/README.md, references/REPOSITORY_RULES.md, references/TESTING_GUIDE.md

Build production-ready amplifier-foundation modules using "bricks and studs" architecture.
This skill teaches you how to create well-designed, tested, and maintainable modules for the amplifier-foundation ecosystem. Whether you're extending agent capabilities with tools, observing lifecycle events with hooks, or managing conversation state with context modules, this guide will show you the patterns and practices that lead to successful modules.
A module in amplifier-foundation is a self-contained, regeneratable unit that extends the capabilities of AI agents. Each module has a single, clear responsibility, exposes its public interface through `mount()`, and hides its implementation details.
Think of modules as LEGO bricks: each piece has a specific purpose, clear connection points (studs), and can be combined with other pieces to build something larger.
The amplifier-foundation architecture is built on the principle of "bricks and studs":
Bricks (Internal Implementation):

- Private helpers, classes, and state (e.g., `_internal.py`) that can change freely
- Hidden algorithms, caches, and service connections

Studs (Public Interface):

- The `mount(coordinator, config)` function
- The public names exported from `__init__.py`

Just as LEGO bricks hide their internal structure while exposing uniform studs for connection, modules should hide implementation details and expose only a small, stable public interface.
Building modules provides several benefits:
Reusability: Write once, use across many agent applications
Testability: Small, focused units are easier to test thoroughly
Maintainability: Clear boundaries make updates safer
Composability: Mix and match capabilities
Community: Share your modules, use modules from others
Modules are part of the broader amplifier ecosystem of profiles, bundles, and the foundation runtime that loads them.
When you build a module, you're contributing to a growing ecosystem of composable AI capabilities.
Amplifier-foundation supports five module types, each serving a distinct purpose in the agent architecture:
| Type | Purpose | Entry Point | Example Use Cases |
|---|---|---|---|
| Orchestrator | Controls agent execution loop | amplifier.orchestrators | Basic loop, streaming responses, event-driven |
| Provider | Connects to AI model APIs | amplifier.providers | Anthropic, OpenAI, Azure, local models |
| Tool | Extends agent capabilities | amplifier.tools | File system, web search, bash, database |
| Context | Manages conversation state | amplifier.contexts | Simple memory, persistent storage, summaries |
| Hook | Observes lifecycle events | amplifier.hooks | Logging, approval gates, metrics, redaction |
Purpose: Control how the agent executes turns, manages tool calls, and handles streaming responses.
Key Characteristics: owns the agent's turn loop, coordinates provider calls and tool dispatch, and decides how responses are streamed.
When to Build: You need custom execution logic (e.g., parallel tool calls, custom retry logic, specialized streaming)
Examples: loop-basic, loop-streaming, loop-events
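To make the shape concrete, here is a minimal sketch of an orchestrator's `mount()`. The `coordinator.get_provider()` call and the response/tool-call shapes are assumptions made for illustration; only `mount()` returning a dict of functions and `coordinator.get_tool()` are patterns taken from this guide.

```python
# Hypothetical orchestrator sketch. coordinator.get_provider() and the
# response/tool-call shapes are assumed for illustration only.
from typing import Any

async def mount(coordinator: Any, config: dict) -> dict[str, Any]:
    """Mount a basic turn-loop orchestrator."""
    max_iterations = config.get("max_iterations", 10)

    async def run_turn(messages: list[dict]) -> dict:
        """Call the model, dispatch any tool calls, and repeat until done."""
        response: dict = {}
        for _ in range(max_iterations):
            provider = await coordinator.get_provider()
            response = await provider["complete"](messages)
            tool_calls = response.get("tool_calls", [])
            if not tool_calls:
                break  # Final answer, no tools requested
            for call in tool_calls:
                tool = await coordinator.get_tool(call["tool"])
                result = await tool[call["function"]](**call["arguments"])
                messages.append({"role": "tool", "content": str(result)})
        return response

    return {"run_turn": run_turn}
```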
Purpose: Connect to AI model APIs and abstract away vendor-specific details.
Key Characteristics: wraps a single model API behind a uniform interface, handling authentication, request formatting, and response parsing.
When to Build: You want to support a new AI model API or custom model deployment
Examples: anthropic, openai, azure, bedrock
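As a hedged illustration, a provider's `mount()` typically wraps one vendor API behind a uniform `complete()` function. The endpoint, payload, and response shapes below belong to a fictional API; `httpx` is an assumed dependency you would declare in `pyproject.toml`.

```python
# Hypothetical provider sketch for a fictional chat-completions API.
from typing import Any
import httpx  # assumed dependency; declare it in pyproject.toml

async def mount(coordinator: Any, config: dict) -> dict[str, Any]:
    """Mount a provider for an imaginary model API."""
    api_key = config["api_key"]  # fail fast at mount time if missing
    base_url = config.get("base_url", "https://api.example.com/v1")
    model = config.get("model", "example-model-1")

    async def complete(messages: list[dict]) -> dict:
        """Send a completion request and normalize the response."""
        async with httpx.AsyncClient() as client:
            resp = await client.post(
                f"{base_url}/chat/completions",
                headers={"Authorization": f"Bearer {api_key}"},
                json={"model": model, "messages": messages},
            )
            resp.raise_for_status()
            data = resp.json()
        # Normalize vendor-specific output to a neutral shape
        return {"role": "assistant",
                "content": data["choices"][0]["message"]["content"]}

    return {"complete": complete}
```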
Purpose: Extend what the agent can do by providing callable functions.
Key Characteristics: exposes callable functions, each described by a JSON schema so the agent can discover and invoke them.
When to Build: You want the agent to interact with external systems or perform specific operations
Examples: tool-filesystem, tool-bash, tool-search, tool-database
Purpose: Manage conversation state and inject relevant information into prompts.
Key Characteristics: stores conversation history and decides what information is injected into each prompt.
When to Build: You need specialized memory management or context injection logic
Examples: context-simple, context-persistent, context-memory
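A context module can be as small as an in-memory history with a size cap. The function names below (`add_message`, `get_messages`) are illustrative assumptions, not a fixed contract:

```python
# Hypothetical context sketch: capped in-memory conversation history.
from typing import Any

async def mount(coordinator: Any, config: dict) -> dict[str, Any]:
    """Mount a simple in-memory conversation context."""
    max_messages = config.get("max_messages", 100)
    history: list[dict] = []  # per-mount state, not a module global

    async def add_message(message: dict) -> None:
        """Append a message, evicting the oldest beyond the cap."""
        history.append(message)
        if len(history) > max_messages:
            del history[0]

    async def get_messages() -> list[dict]:
        """Return a copy of the history for prompt construction."""
        return list(history)

    return {"add_message": add_message, "get_messages": get_messages}
```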
Purpose: Observe and react to lifecycle events without blocking execution.
Key Characteristics: subscribes to lifecycle events and reacts to them without blocking the agent's execution.
When to Build: You want to observe, log, or react to agent events
Examples: hooks-logging, hooks-approval, hooks-metrics, hooks-redaction
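A sketch of a logging hook, assuming the convention that hooks return handlers keyed by event name (the real event names and registration mechanism are defined by amplifier-foundation; see references/MODULE_TYPES.md):

```python
# Hypothetical hook sketch; event name and handler convention assumed.
from typing import Any
import logging

logger = logging.getLogger("amplifier.hooks.logging")

async def mount(coordinator: Any, config: dict) -> dict[str, Any]:
    """Mount a hook that logs tool calls."""
    level = getattr(logging, config.get("level", "INFO"))

    async def on_tool_call(event: dict) -> None:
        """Observe a tool-call event; never raise, never block."""
        logger.log(level, "tool call: %s", event.get("name"))

    return {"on_tool_call": on_tool_call}
```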
Before building modules, ensure you have:
- Python 3.11+ (modules in this guide declare `requires-python = ">=3.11"`)
- uv installed (`curl -LsSf https://astral.sh/uv/install.sh | sh`)
- amplifier-foundation installed (`uv pip install amplifier-foundation`)

Let's build a simple tool that converts text to uppercase.
mkdir -p amplifier-module-tool-uppercase
cd amplifier-module-tool-uppercase
mkdir -p amplifier_module_tool_uppercase tests
[project]
name = "amplifier-module-tool-uppercase"
version = "0.1.0"
requires-python = ">=3.11"
dependencies = ["amplifier-foundation"]
[project.entry-points."amplifier.tools"]
uppercase = "amplifier_module_tool_uppercase:mount"
[build-system]
requires = ["hatchling"]
build-backend = "hatchling.build"
# amplifier_module_tool_uppercase/__init__.py
from typing import Any
async def mount(coordinator: Any, config: dict) -> dict[str, Any]:
"""Mount the uppercase tool."""
async def uppercase(text: str) -> str:
"""Convert text to uppercase.
Args:
text: The text to convert
Returns:
The text in uppercase
"""
return text.upper()
return {
"uppercase": uppercase
}
def get_schema() -> dict:
"""Return JSON schema for tool functions."""
return {
"uppercase": {
"description": "Convert text to uppercase",
"parameters": {
"type": "object",
"properties": {
"text": {
"type": "string",
"description": "The text to convert"
}
},
"required": ["text"]
}
}
}
# tests/test_uppercase.py
import pytest
from amplifier_module_tool_uppercase import mount, get_schema
@pytest.mark.asyncio
async def test_uppercase_basic():
"""Test basic uppercase conversion."""
tools = await mount(coordinator=None, config={})
result = await tools["uppercase"]("hello")
assert result == "HELLO"
@pytest.mark.asyncio
async def test_uppercase_empty():
"""Test empty string."""
tools = await mount(coordinator=None, config={})
result = await tools["uppercase"]("")
assert result == ""
def test_get_schema():
"""Test schema is valid."""
schema = get_schema()
assert "uppercase" in schema
assert "description" in schema["uppercase"]
assert "parameters" in schema["uppercase"]
# Quick test with an environment variable override (assumes a profile.md that references this tool)
export AMPLIFIER_MODULE_TOOL_UPPERCASE=$(pwd)
python -c "from amplifier_foundation import load_bundle; import asyncio; asyncio.run(load_bundle('./profile.md'))"
# Or run tests
uv pip install pytest pytest-asyncio
uv run pytest tests/
git init
git add .
git commit -m "feat: initial uppercase tool module"
gh repo create amplifier-module-tool-uppercase --public
git push -u origin main
git tag v0.1.0
git push origin v0.1.0
Reference it in a profile:
tools:
- git+https://github.com/yourusername/amplifier-module-tool-uppercase.git@v0.1.0
Follow this workflow for all module development:
Ask: "What is the ONE thing this module does?"
Good examples: "convert text to uppercase", "search text files for regex patterns", "log lifecycle events".
Bad examples (too broad): "handle all file operations", "general-purpose utilities", "manage everything related to search".
Document the public interface BEFORE writing code:
# amplifier-module-tool-myfeature
Converts X to Y using Z algorithm.
## Installation
\`\`\`bash
uv pip install git+https://github.com/you/amplifier-module-tool-myfeature.git
\`\`\`
## API
### mount(coordinator, config) -> dict
Returns dict with these functions:
- `my_function(arg1: str, arg2: int) -> str`: Does X and returns Y
## Configuration
\`\`\`yaml
config:
option1: value1
option2: value2
\`\`\`
## Testing
\`\`\`bash
pytest tests/
\`\`\`
amplifier-module-{type}-{name}/
├── amplifier_module_{type}_{name}/
│ ├── __init__.py # mount() and public functions
│ ├── _internal.py # Private implementation
│ └── py.typed # Type hints marker
├── tests/
│ ├── conftest.py # Shared fixtures
│ ├── test_unit.py # Unit tests (60%)
│ ├── test_integration.py # Integration tests (30%)
│ └── test_e2e.py # End-to-end tests (10%)
├── pyproject.toml # Dependencies and entry point
├── README.md # Public contract
└── .github/
└── workflows/
└── test.yml # CI/CD
All modules implement the mount() protocol:
async def mount(coordinator: Any, config: dict) -> dict[str, Any]:
"""Mount the module and return its public interface.
Args:
coordinator: The amplifier coordinator instance
config: Configuration dict from profile/bundle
Returns:
Dict mapping names to functions/objects
"""
# Setup (load resources, connect to services, etc.)
# Define public functions
async def my_function(arg: str) -> str:
# Implementation
pass
# Return public interface
return {
"my_function": my_function
}
For tool modules, also implement get_schema():
def get_schema() -> dict:
"""Return JSON schema for tool functions.
Returns:
Dict mapping function names to schemas
"""
return {
"my_function": {
"description": "Does something useful",
"parameters": {
"type": "object",
"properties": {
"arg": {"type": "string", "description": "Input value"}
},
"required": ["arg"]
}
}
}
Unit tests (60%): Test individual functions in isolation
@pytest.mark.asyncio
async def test_my_function_basic():
tools = await mount(coordinator=None, config={})
result = await tools["my_function"]("input")
assert result == "expected"
Integration tests (30%): Test module with real dependencies
@pytest.mark.asyncio
async def test_with_real_coordinator():
from amplifier_foundation import Coordinator
coordinator = Coordinator()
tools = await mount(coordinator=coordinator, config={})
# Test with real coordinator
End-to-end tests (10%): Test full workflows
@pytest.mark.asyncio
async def test_full_agent_workflow():
# Load bundle, create session, execute turn
pass
git init
git add .
git commit -m "feat: initial module implementation"
gh repo create amplifier-module-{type}-{name} --public
git push -u origin main
git tag v0.1.0
git push origin v0.1.0
# profile.md or bundle.md
tools:
- git+https://github.com/yourusername/amplifier-module-tool-myfeature.git@v0.1.0
amplifier-module-{type}-{name}/
├── amplifier_module_{type}_{name}/ # Package (underscores)
│ ├── __init__.py # Public interface (mount, get_schema)
│ ├── _internal.py # Private implementation
│ ├── _types.py # Private type definitions
│ └── py.typed # Type hints marker file
├── tests/ # Test package
│ ├── __init__.py
│ ├── conftest.py # Pytest fixtures
│ ├── test_unit.py # Unit tests
│ ├── test_integration.py # Integration tests
│ └── test_e2e.py # End-to-end tests
├── .github/
│ └── workflows/
│ └── test.yml # CI/CD workflow
├── pyproject.toml # Project metadata
├── README.md # Public contract
├── LICENSE # MIT recommended
├── .gitignore
└── .python-version # Python version (3.11+)
The pyproject.toml file declares dependencies and registers the module:
[project]
name = "amplifier-module-{type}-{name}"
version = "0.1.0"
description = "One sentence description"
readme = "README.md"
requires-python = ">=3.11"
license = { text = "MIT" }
authors = [
{ name = "Your Name", email = "your.email@example.com" }
]
dependencies = [
"amplifier-foundation>=0.1.0",
]
# Entry point registration - CRITICAL
[project.entry-points."amplifier.{type}s"] # Note plural
{name} = "amplifier_module_{type}_{name}:mount"
# Optional: Additional entry points for get_schema (tools only)
[project.entry-points."amplifier.tool_schemas"]
{name} = "amplifier_module_{type}_{name}:get_schema"
[build-system]
requires = ["hatchling"]
build-backend = "hatchling.build"
[tool.pytest.ini_options]
testpaths = ["tests"]
python_files = "test_*.py"
python_functions = "test_*"
asyncio_mode = "auto"
[tool.coverage.run]
source = ["amplifier_module_{type}_{name}"]
omit = ["tests/*"]
[tool.coverage.report]
exclude_lines = [
"pragma: no cover",
"def __repr__",
"raise AssertionError",
"raise NotImplementedError",
"if __name__ == .__main__.:",
"if TYPE_CHECKING:",
]
Public Interface (in __init__.py):
- `mount()` function - REQUIRED
- `get_schema()` function - REQUIRED for tools

# amplifier_module_tool_myfeature/__init__.py
from typing import Any
async def mount(coordinator: Any, config: dict) -> dict[str, Any]:
"""Public mount function - this is a STUD."""
from ._internal import MyFeatureImpl
impl = MyFeatureImpl(config)
return {"my_function": impl.execute}
def get_schema() -> dict:
"""Public schema function - this is a STUD."""
return {"my_function": {...}}
# Public constants
DEFAULT_TIMEOUT = 30
Private Implementation (in _internal.py):
# amplifier_module_tool_myfeature/_internal.py
class MyFeatureImpl:
"""Private implementation - this is a BRICK."""
def __init__(self, config: dict):
self._config = config
self._cache = {}
async def execute(self, input: str) -> str:
# Implementation details hidden
return self._process(input)
def _process(self, input: str) -> str:
# Private helper
pass
# tests/conftest.py - Shared fixtures
import pytest
@pytest.fixture
async def mounted_module():
"""Fixture that mounts the module."""
from amplifier_module_tool_myfeature import mount
return await mount(coordinator=None, config={})
@pytest.fixture
def sample_config():
"""Fixture for test configuration."""
return {"option1": "value1"}
# tests/test_unit.py - Unit tests (60%)
import pytest
@pytest.mark.asyncio
async def test_basic_function(mounted_module):
"""Test basic functionality."""
result = await mounted_module["my_function"]("input")
assert result == "expected"
@pytest.mark.asyncio
async def test_error_handling(mounted_module):
"""Test error cases."""
with pytest.raises(ValueError):
await mounted_module["my_function"]("")
# tests/test_integration.py - Integration tests (30%)
import pytest
@pytest.mark.asyncio
async def test_with_coordinator():
"""Test with real coordinator."""
from amplifier_foundation import Coordinator
from amplifier_module_tool_myfeature import mount
coordinator = Coordinator()
tools = await mount(coordinator, config={})
# Test integration
# tests/test_e2e.py - End-to-end tests (10%)
import pytest
@pytest.mark.asyncio
async def test_full_workflow():
"""Test complete agent workflow."""
from amplifier_foundation import load_bundle, create_session
# Test end-to-end
Every module MUST include:
README.md with: installation instructions, the public API, configuration options, and testing instructions
Inline docstrings:
async def mount(coordinator: Any, config: dict) -> dict[str, Any]:
"""Mount the uppercase tool.
Args:
coordinator: The amplifier coordinator instance
config: Configuration dictionary (unused for this tool)
Returns:
Dictionary mapping "uppercase" to the uppercase function
Examples:
>>> tools = await mount(coordinator, {})
>>> result = await tools["uppercase"]("hello")
>>> print(result)
HELLO
"""
Amplifier modules follow the test pyramid:
/\
/ \
/ E2E \ 10% - Full workflows
/------\
/ \
/ Integrn \ 30% - Module + dependencies
/------------\
/ \
/ Unit Tests \ 60% - Individual functions
------------------
60% Unit Tests: Fast, isolated, test individual functions
30% Integration Tests: Test module with real dependencies
10% End-to-End Tests: Test full agent workflows
| Coverage Level | Target | Applies To |
|---|---|---|
| Minimum | 70% | All modules before publishing |
| Target | 85% | Production modules |
| Critical Paths | 100% | Error handling, security, data loss |
Measuring coverage:
uv pip install pytest-cov
uv run pytest --cov=amplifier_module_tool_myfeature --cov-report=html
open htmlcov/index.html
All amplifier modules are async, so use pytest-asyncio:
import pytest
@pytest.mark.asyncio
async def test_async_function():
"""Test an async function."""
result = await my_async_function()
assert result == expected
Or configure pytest to auto-detect async tests:
# pyproject.toml
[tool.pytest.ini_options]
asyncio_mode = "auto" # Automatically handle async tests
Then write tests without @pytest.mark.asyncio:
async def test_async_function():
"""Automatically recognized as async test."""
result = await my_async_function()
assert result == expected
Use pytest-mock or unittest.mock to mock external services:
import pytest
from unittest.mock import AsyncMock, patch
@pytest.mark.asyncio
async def test_with_mocked_api():
"""Test with mocked external API."""
mock_response = {"data": "value"}
with patch('my_module._internal.external_api_call', new_callable=AsyncMock) as mock_api:
mock_api.return_value = mock_response
result = await my_function()
assert result == expected
mock_api.assert_called_once()
Add GitHub Actions workflow:
# .github/workflows/test.yml
name: Tests
on: [push, pull_request]
jobs:
test:
runs-on: ubuntu-latest
strategy:
matrix:
python-version: ["3.11", "3.12"]
steps:
- uses: actions/checkout@v4
- uses: astral-sh/setup-uv@v1
with:
python-version: ${{ matrix.python-version }}
- name: Install dependencies
run: uv pip install -e ".[dev]"
- name: Run tests with coverage
run: uv run pytest --cov --cov-report=xml
- name: Upload coverage
uses: codecov/codecov-action@v3
"Only reference declared dependencies"
Modules can ONLY import and use:
- Dependencies declared in `pyproject.toml`
- The coordinator passed to `mount()`

Modules CANNOT:

- Import peer modules directly
- Use packages not declared in `pyproject.toml`
# ❌ WRONG - Direct import of peer module
from amplifier_module_tool_filesystem import read_file
# ✅ CORRECT - Use tool through coordinator
async def mount(coordinator, config):
async def my_function():
# Request tool from coordinator
filesystem = await coordinator.get_tool("filesystem")
content = await filesystem["read_file"]("path.txt")
return content
Repository names (kebab-case):
amplifier-module-tool-filesystem
amplifier-module-hook-logging
amplifier-module-provider-openai
amplifier-module-context-memory
amplifier-module-loop-streaming
Python packages (snake_case):
amplifier_module_tool_filesystem
amplifier_module_hook_logging
amplifier_module_provider_openai
Entry point names (kebab-case or snake_case):
[project.entry-points."amplifier.tools"]
filesystem = "amplifier_module_tool_filesystem:mount"
my-tool = "amplifier_module_tool_mytool:mount"
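Entry points are how the loader finds installed modules without importing them eagerly. As background, this is roughly what discovery looks like with the standard library (a sketch of the general mechanism, not amplifier-foundation's actual loader):

```python
# Sketch of generic entry-point discovery via the standard library.
from importlib.metadata import entry_points

def discover_tool_mounts() -> dict:
    """Map entry-point names to their mount() callables."""
    mounts = {}
    for ep in entry_points(group="amplifier.tools"):
        # ep.load() imports the package and returns the registered object
        mounts[ep.name] = ep.load()
    return mounts
```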
The README.md is a contract:
If it's not in the README, it's not public API.
For stateless transformations:
async def mount(coordinator, config):
"""Mount a simple transformation tool."""
async def transform(input: str) -> str:
"""Transform input to output."""
# Pure function - no state
return input.upper()
return {"transform": transform}
For stateful services:
class MyService:
"""Internal service with state."""
def __init__(self, config: dict):
self.api_key = config.get("api_key")
self._cache = {}
async def call_api(self, query: str) -> dict:
"""Make API call with caching."""
if query in self._cache:
return self._cache[query]
result = await self._make_request(query)
self._cache[query] = result
return result
async def _make_request(self, query: str) -> dict:
# Private implementation
pass
async def mount(coordinator, config):
"""Mount service with state."""
service = MyService(config)
return {
"call_api": service.call_api
}
For processing collections of items in batches:
async def mount(coordinator, config):
"""Mount batch processing tool."""
batch_size = config.get("batch_size", 10)
async def process_batch(items: list[str]) -> list[str]:
"""Process items in batches."""
results = []
for i in range(0, len(items), batch_size):
batch = items[i:i + batch_size]
batch_results = await _process_items(batch)
results.extend(batch_results)
return results
async def _process_items(items: list[str]) -> list[str]:
# Process batch concurrently
import asyncio
tasks = [_process_one(item) for item in items]
return await asyncio.gather(*tasks)
async def _process_one(item: str) -> str:
# Process single item
return item.upper()
return {"process_batch": process_batch}
All modules should handle errors consistently:
class ModuleError(Exception):
"""Base exception for module errors."""
pass
class ConfigurationError(ModuleError):
"""Invalid configuration."""
pass
class ExecutionError(ModuleError):
"""Error during execution."""
pass
async def mount(coordinator, config):
"""Mount with proper error handling."""
# Validate config at mount time
if "required_key" not in config:
raise ConfigurationError("Missing required_key in config")
async def my_function(input: str) -> str:
"""Function with error handling."""
if not input:
raise ValueError("Input cannot be empty")
try:
result = await _do_work(input)
return result
except Exception as e:
# Wrap external errors
raise ExecutionError(f"Failed to process: {e}") from e
return {"my_function": my_function}
The modular-builder is an AI agent that generates module scaffold code based on specifications. It helps you:
Important: modular-builder generates starting points, not production code. Always review and test generated code.
Use modular-builder when: you are scaffolding a standard module type from a clear specification and mainly need boilerplate generated quickly.
Write manually when: the module involves novel logic, unusual dependencies, or requirements that a specification cannot capture.
Create a specification file:
# module-spec.yaml
name: uppercase
type: tool
description: Convert text to uppercase
functions:
- name: uppercase
description: Convert text to uppercase
parameters:
text:
type: string
description: The text to convert
returns:
type: string
description: The uppercased text
dependencies:
- amplifier-foundation
test_coverage: 85
Invoke the agent:
modular-builder generate --spec module-spec.yaml --output ./amplifier-module-tool-uppercase
After generation, always:
- Review all generated code
- Run the tests (`pytest tests/`)
- Check coverage (`pytest --cov`)

If generated code isn't quite right:
- Regenerate with the `--force` flag
- Review the changes with `git diff`

For detailed information on specific topics, see the references/ directory:
→ references/DEVELOPMENT_WORKFLOW.md
→ references/REPOSITORY_RULES.md
→ references/MODULAR_BUILDER.md
Let's build a complete tool module from scratch that searches text files for patterns.
"Search text files in a directory for regex patterns and return matching lines with line numbers."
# amplifier-module-tool-textsearch
Search text files for regex patterns with line number reporting.
## Installation
\`\`\`bash
uv pip install git+https://github.com/yourusername/amplifier-module-tool-textsearch.git
\`\`\`
## API
### mount(coordinator, config) -> dict
Returns dict with:
- `search_files(directory: str, pattern: str, file_ext: str = ".txt") -> list[dict]`
- Search files in directory for regex pattern
- Returns list of matches with file, line number, and content
## Configuration
\`\`\`yaml
config:
max_file_size: 1048576 # 1MB default
encoding: utf-8
\`\`\`
## Example
\`\`\`python
tools = await mount(coordinator, config={"max_file_size": 2097152})
results = await tools["search_files"](
directory="./logs",
pattern="ERROR.*authentication",
file_ext=".log"
)
\`\`\`
mkdir -p amplifier-module-tool-textsearch
cd amplifier-module-tool-textsearch
mkdir -p amplifier_module_tool_textsearch tests
# amplifier_module_tool_textsearch/__init__.py
"""Text search tool module."""
from typing import Any
import re
from pathlib import Path
class SearchError(Exception):
"""Base exception for search errors."""
pass
async def mount(coordinator: Any, config: dict) -> dict[str, Any]:
"""Mount the text search tool.
Args:
coordinator: The amplifier coordinator
config: Configuration with optional max_file_size and encoding
Returns:
Dict with search_files function
"""
max_size = config.get("max_file_size", 1048576) # 1MB default
encoding = config.get("encoding", "utf-8")
async def search_files(
directory: str,
pattern: str,
file_ext: str = ".txt"
) -> list[dict]:
"""Search files for regex pattern.
Args:
directory: Directory path to search
pattern: Regex pattern to match
file_ext: File extension filter (default .txt)
Returns:
List of dicts with {file, line_num, line, match}
Raises:
SearchError: If directory doesn't exist or pattern is invalid
"""
dir_path = Path(directory)
if not dir_path.exists():
raise SearchError(f"Directory not found: {directory}")
if not dir_path.is_dir():
raise SearchError(f"Not a directory: {directory}")
try:
regex = re.compile(pattern)
except re.error as e:
raise SearchError(f"Invalid regex pattern: {e}")
results = []
for file_path in dir_path.rglob(f"*{file_ext}"):
if file_path.is_file() and file_path.stat().st_size <= max_size:
results.extend(await _search_file(file_path, regex, encoding))
return results
async def _search_file(
file_path: Path,
regex: re.Pattern,
encoding: str
) -> list[dict]:
"""Search a single file."""
results = []
try:
with open(file_path, 'r', encoding=encoding) as f:
for line_num, line in enumerate(f, start=1):
match = regex.search(line)
if match:
results.append({
"file": str(file_path),
"line_num": line_num,
"line": line.rstrip(),
"match": match.group(0)
})
except Exception:
# Skip files that can't be read
pass
return results
return {
"search_files": search_files
}
def get_schema() -> dict:
"""Return JSON schema for tool functions."""
return {
"search_files": {
"description": "Search text files for regex patterns",
"parameters": {
"type": "object",
"properties": {
"directory": {
"type": "string",
"description": "Directory path to search"
},
"pattern": {
"type": "string",
"description": "Regex pattern to match"
},
"file_ext": {
"type": "string",
"description": "File extension filter (default .txt)",
"default": ".txt"
}
},
"required": ["directory", "pattern"]
}
}
}
# tests/conftest.py
import pytest
from pathlib import Path
import tempfile
import shutil
@pytest.fixture
async def mounted_search():
"""Mount the search module."""
from amplifier_module_tool_textsearch import mount
return await mount(coordinator=None, config={})
@pytest.fixture
def temp_dir():
"""Create temporary directory with test files."""
tmpdir = tempfile.mkdtemp()
# Create test files
(Path(tmpdir) / "file1.txt").write_text("line 1: ERROR\nline 2: OK\nline 3: ERROR")
(Path(tmpdir) / "file2.txt").write_text("line 1: WARNING\nline 2: ERROR")
(Path(tmpdir) / "subdir").mkdir()
(Path(tmpdir) / "subdir" / "file3.txt").write_text("line 1: ERROR in subdirectory")
yield tmpdir
shutil.rmtree(tmpdir)
# tests/test_unit.py
import pytest
from amplifier_module_tool_textsearch import mount, SearchError
@pytest.mark.asyncio
async def test_search_basic(mounted_search, temp_dir):
"""Test basic search functionality."""
results = await mounted_search["search_files"](
directory=temp_dir,
pattern="ERROR"
)
assert len(results) == 4 # 2 in file1, 1 in file2, 1 in file3 (all .txt)
assert all(r["match"] == "ERROR" for r in results)
@pytest.mark.asyncio
async def test_search_subdirectories(mounted_search, temp_dir):
"""Test recursive search in subdirectories."""
results = await mounted_search["search_files"](
directory=temp_dir,
pattern="ERROR"
)
# Should find matches in subdirectories too
assert any("subdir" in r["file"] for r in results)
@pytest.mark.asyncio
async def test_invalid_directory(mounted_search):
"""Test error handling for invalid directory."""
with pytest.raises(SearchError, match="Directory not found"):
await mounted_search["search_files"](
directory="/nonexistent",
pattern="ERROR"
)
@pytest.mark.asyncio
async def test_invalid_regex(mounted_search, temp_dir):
"""Test error handling for invalid regex."""
with pytest.raises(SearchError, match="Invalid regex pattern"):
await mounted_search["search_files"](
directory=temp_dir,
pattern="[invalid"
)
@pytest.mark.asyncio
async def test_file_extension_filter(mounted_search, temp_dir):
"""Test file extension filtering."""
# Create .log file
from pathlib import Path
(Path(temp_dir) / "test.log").write_text("ERROR in log file")
# Search only .log files
results = await mounted_search["search_files"](
directory=temp_dir,
pattern="ERROR",
file_ext=".log"
)
assert len(results) == 1
assert results[0]["file"].endswith(".log")
# Run tests
uv pip install pytest pytest-asyncio
uv run pytest tests/ -v
# Check coverage
uv run pytest --cov=amplifier_module_tool_textsearch --cov-report=term
# Create pyproject.toml
cat > pyproject.toml << 'EOF'
[project]
name = "amplifier-module-tool-textsearch"
version = "0.1.0"
requires-python = ">=3.11"
dependencies = ["amplifier-foundation"]
[project.entry-points."amplifier.tools"]
textsearch = "amplifier_module_tool_textsearch:mount"
[build-system]
requires = ["hatchling"]
build-backend = "hatchling.build"
EOF
# Initialize git and publish
git init
git add .
git commit -m "feat: initial text search tool module"
gh repo create amplifier-module-tool-textsearch --public
git push -u origin main
git tag v0.1.0
git push origin v0.1.0
# profile.md
---
tools:
- git+https://github.com/yourusername/amplifier-module-tool-textsearch.git@v0.1.0
---
# Profile with Text Search
This profile includes the text search tool.
| Anti-Pattern | Why It's Bad | Correct Pattern |
|---|---|---|
| Importing peer modules | Creates hidden dependencies, breaks isolation | Use coordinator to get other modules |
| Storing state in module globals | Not thread-safe, breaks on reload | Store state in class instances |
| Returning classes from mount() | Exposes implementation details | Return dict of functions |
| No error handling | Crashes agent on bad input | Validate inputs, raise clear errors |
| Testing implementation | Tests break on refactor | Test behavior through public API |
| Undocumented config options | Users can't configure module | Document all config in README |
| Blocking I/O | Hangs async loop | Use async I/O or run_in_executor |
| Large mount() functions | Hard to test and maintain | Extract to _internal.py |
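For the blocking I/O row, the standard fix is to push synchronous work onto a worker thread with `asyncio.to_thread` (or `loop.run_in_executor`), as in this sketch:

```python
# Sketch: keeping a blocking call off the event loop.
import asyncio
import time

def slow_blocking_work(path: str) -> str:
    """Stand-in for blocking I/O (sync SDK call, heavy parsing, etc.)."""
    time.sleep(1)
    return path.upper()

async def mount(coordinator, config):
    async def process(path: str) -> str:
        # ❌ WRONG: calling slow_blocking_work(path) directly stalls the loop
        # ✅ CORRECT: run it on a worker thread
        return await asyncio.to_thread(slow_blocking_work, path)

    return {"process": process}
```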
Bad: Direct import creates coupling
# ❌ WRONG
from amplifier_module_tool_filesystem import read_file
async def mount(coordinator, config):
async def process_file(path: str):
content = read_file(path) # Tightly coupled
return content.upper()
return {"process_file": process_file}
Good: Use coordinator for loose coupling
# ✅ CORRECT
async def mount(coordinator, config):
async def process_file(path: str):
# Loose coupling through coordinator
fs = await coordinator.get_tool("filesystem")
content = await fs["read_file"](path)
return content.upper()
return {"process_file": process_file}
Bad: Exposing implementation details
# ❌ WRONG - Exposes internal class
class SearchService:
def __init__(self):
self.cache = {}
async def mount(coordinator, config):
service = SearchService()
return {"search_service": service} # Leaks internals
Good: Hide implementation behind functions
# ✅ CORRECT - Hide implementation
class _SearchService: # Private class
def __init__(self):
self._cache = {}
async def search(self, query: str) -> list:
# Implementation
...
async def mount(coordinator, config):
service = _SearchService()
return {"search": service.search} # Only expose function
Bad: Code and docs don't match
# README.md says: uppercase(text: str) -> str
# But code has:
async def uppercase(text: str, mode: str = "upper") -> str:
# Undocumented parameter!
Good: Docs match implementation exactly
# README.md: uppercase(text: str, mode: str = "upper") -> str
# Code:
async def uppercase(text: str, mode: str = "upper") -> str:
"""Convert text case.
Args:
text: Input text
mode: "upper" or "lower" (default: "upper")
"""
Bad: Testing internal details
# ❌ WRONG - Tests implementation
def test_internal_cache():
service = SearchService()
service._cache["key"] = "value" # Testing private state
assert service._cache["key"] == "value"
Good: Test behavior through public API
# ✅ CORRECT - Tests behavior
@pytest.mark.asyncio
async def test_search_returns_results():
tools = await mount(coordinator=None, config={})
results = await tools["search"]("query")
assert isinstance(results, list)
Building amplifier-foundation modules is about creating self-contained, regeneratable units with clear public interfaces and hidden implementation details.
Key Takeaways:
Next Steps:
Happy building! 🔧