Builds a Feast feature store for centralized ML feature management with offline/online stores (Postgres/Redis/BigQuery/DynamoDB), feature views, transformations, and point-in-time joins. Use for training-serving consistency and real-time inference.
```bash
npx claudepluginhub pjt222/agent-almanac
```
> See [Extended Examples](references/EXAMPLES.md) for complete configuration files and templates.
Implement centralized feature management with Feast for consistent feature serving across training and inference.
Set up Feast project structure and configure storage backends.
```bash
# Install Feast with required extras
pip install 'feast[redis,postgres]'  # Add backends as needed

# Initialize a new feature repository
feast init my_feature_repo
cd my_feature_repo

# Directory structure created:
# my_feature_repo/
# ├── feature_store.yaml   # Configuration
# ├── features.py          # Feature definitions
# └── data/                # Sample data (dev only)
```
Configure feature_store.yaml:
```yaml
# feature_store.yaml
project: customer_analytics
registry: data/registry.db  # SQLite for dev; use S3/GCS for prod
provider: local

# Offline store for training data
offline_store:
  type: postgres
  # ... (see EXAMPLES.md for complete implementation)
```
Production configuration with cloud backends:
```yaml
# feature_store.prod.yaml
project: customer_analytics
registry: s3://feast-registry/prod/registry.db
provider: aws

offline_store:
  type: bigquery
  project_id: my-gcp-project
  # ... (see EXAMPLES.md for complete implementation)
```
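Before wiring up definitions, it is worth confirming the configuration actually loads. A minimal smoke test, assuming the repo layout above (what it prints depends on what you have defined so far):

```python
# verify_setup.py -- smoke test: can Feast load the repo and reach the registry?
from feast import FeatureStore

fs = FeatureStore(repo_path=".")  # reads feature_store.yaml in this directory
print(fs.project)                 # e.g. "customer_analytics"
print([fv.name for fv in fs.list_feature_views()])  # empty until `feast apply`
```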
Expected: Feast repository initialized with config file, sample feature definitions created, offline and online stores configured, registry path accessible.
On failure: Verify database/Redis credentials (`psql -U feast_user -h localhost`), check connection string format, ensure databases exist (`CREATE DATABASE feature_store`), verify cloud permissions for S3/BigQuery/DynamoDB, test connectivity to storage backends, check Feast version compatibility with backends (`feast version`).
Create entity definitions and connect to raw data sources.
```python
# entities.py
from feast import Entity, ValueType

# Define entities (primary keys for features)
customer = Entity(
    name="customer",
    description="Customer entity",
    value_type=ValueType.INT64,
    # ... (see EXAMPLES.md for complete implementation)
)
```
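For orientation, a sketch of what the completed definitions might look like; the `join_keys` values are assumed column names, not necessarily what EXAMPLES.md uses (recent Feast releases expect `join_keys`, with `value_type` retained for compatibility):

```python
# entities.py -- illustrative completion; join key column names are assumptions
from feast import Entity, ValueType

customer = Entity(
    name="customer",
    join_keys=["customer_id"],   # column that links feature rows to customers
    value_type=ValueType.INT64,
    description="Customer entity",
)

product = Entity(
    name="product",
    join_keys=["product_id"],
    value_type=ValueType.INT64,
    description="Product entity",
)
```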
Define data sources:
```python
# data_sources.py
from feast import FileSource, BigQuerySource, RedshiftSource
from feast.data_format import ParquetFormat
from datetime import timedelta

# Development: file-based source
customer_transactions_source = FileSource(
    path="data/customer_transactions.parquet",
    # ... (see EXAMPLES.md for complete implementation)
)
```
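The elided arguments are mostly the timestamp columns; a sketch with assumed column names (recent Feast releases call the event-time argument `timestamp_field`, older ones `event_timestamp_column`):

```python
# data_sources.py -- illustrative completion; column names are assumptions
from feast import FileSource

customer_transactions_source = FileSource(
    path="data/customer_transactions.parquet",
    timestamp_field="event_timestamp",    # when the event actually happened
    created_timestamp_column="created",   # when the row landed in storage
)
```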
Expected: Entity definitions reference correct ID columns, data sources connect to raw data successfully, `event_timestamp_column` exists in source data, `created_timestamp_column` allows point-in-time queries.
On failure: Verify source data files exist and are readable, check BigQuery/Redshift credentials and table access, ensure timestamp columns have the correct format (Unix timestamp or ISO 8601), verify Kafka connectivity and topic existence, check schema compatibility between sources and entities.
Create feature views that define how raw data becomes ML-ready features.
```python
# feature_views.py
from feast import FeatureView, Field
from feast.types import Float32, Int64, String, Bool
from datetime import timedelta

from entities import customer, product
from data_sources import customer_features_source

# Simple feature view without transformations
# ... (see EXAMPLES.md for complete implementation)
```
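A minimal sketch of such a view, using the `customer_stats` name that appears in the `feast apply` output below; the field names and one-day TTL are illustrative assumptions:

```python
# feature_views.py -- minimal illustrative view; fields and TTL are assumptions
from datetime import timedelta
from feast import FeatureView, Field
from feast.types import Float32, Int64

from entities import customer
from data_sources import customer_features_source

customer_stats = FeatureView(
    name="customer_stats",
    entities=[customer],
    ttl=timedelta(days=1),  # rows older than this are not served online
    schema=[
        Field(name="total_spend", dtype=Float32),
        Field(name="txn_count", dtype=Int64),
    ],
    source=customer_features_source,
)
```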
Expected: Feature views registered successfully, schema matches source data, transformations execute without errors, TTL values appropriate for the use case, on-demand views combine batch and request features.
On failure: Verify field names match source columns exactly, check dtype compatibility (`Int64` vs `Int32`), ensure entity references exist, validate transformation logic with sample data, check for division by zero in calculations, verify request source schema matches the inference payload.
Deploy feature definitions to registry and materialize to online store.
```bash
# Apply feature definitions to registry
feast apply

# Expected output:
# Created entity customer
# Created feature view customer_stats
# Created on demand feature view customer_segments
# ... (see EXAMPLES.md for complete implementation)
```
Programmatic materialization:
```python
# materialize_features.py
from feast import FeatureStore
from datetime import datetime, timedelta

# Initialize feature store
fs = FeatureStore(repo_path=".")

# Materialize all feature views
# ... (see EXAMPLES.md for complete implementation)
```
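The elided part comes down to two store calls; a sketch, with the one-week backfill window chosen purely for illustration:

```python
# materialize_features.py -- illustrative completion; the window is arbitrary
from datetime import datetime, timedelta
from feast import FeatureStore

fs = FeatureStore(repo_path=".")

# Backfill a fixed window from the offline store into the online store
fs.materialize(
    start_date=datetime.utcnow() - timedelta(days=7),
    end_date=datetime.utcnow(),
)

# In a scheduled job, load only what is new since the last run instead
fs.materialize_incremental(end_date=datetime.utcnow())
```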
Expected: Feature definitions applied to registry without conflicts, materialization job completes successfully, online store populated with features, feature freshness within configured TTL.
On failure: Check the offline store query succeeds (`feast feature-views describe customer_stats`), verify the time range has data, ensure the online store is writable (Redis/DynamoDB permissions), check for duplicate feature names across views, verify entity keys exist in source data, monitor materialization job logs for errors, check disk space for local stores.
Fetch point-in-time correct historical features for model training.
```python
# get_training_data.py
from feast import FeatureStore
import pandas as pd
from datetime import datetime

# Initialize feature store
fs = FeatureStore(repo_path=".")

# ... (see EXAMPLES.md for complete implementation)
```
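The core of the elided code is a single `get_historical_features` call; a sketch, assuming the illustrative `customer_id` join key and `customer_stats` features from earlier (the entity columns must match your entities' join keys):

```python
# get_training_data.py -- illustrative completion; IDs and refs are assumptions
import pandas as pd
from datetime import datetime
from feast import FeatureStore

fs = FeatureStore(repo_path=".")

# One row per training example: entity key plus the label's event time
entity_df = pd.DataFrame({
    "customer_id": [1001, 1002],
    "event_timestamp": [datetime(2024, 1, 15), datetime(2024, 1, 16)],
})

# Point-in-time join: each row gets feature values as of its own timestamp
training_df = fs.get_historical_features(
    entity_df=entity_df,
    features=[
        "customer_stats:total_spend",
        "customer_stats:txn_count",
    ],
).to_df()
```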
Point-in-time correctness validation:
```python
# validate_pit_correctness.py
import pandas as pd
from datetime import datetime, timedelta

def validate_point_in_time_correctness(training_df, entity_df):
    """
    Ensure features don't leak future information.
    """
    # ... (see EXAMPLES.md for complete implementation)
```
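What such a check might assert is sketched below; the `customer_id` column name is an assumption, and the real implementation in EXAMPLES.md may check more:

```python
# Illustrative checks only; assumes a "customer_id" entity column
def validate_point_in_time_correctness(training_df, entity_df):
    """Cheap sanity checks on the output of the point-in-time join."""
    # The join must be 1:1 -- every label row keeps exactly one feature row
    assert len(training_df) == len(entity_df), "row count changed by join"

    # Label timestamps must pass through unchanged (no snapping or rounding)
    keys = ["customer_id", "event_timestamp"]
    left = training_df.sort_values(keys)["event_timestamp"].to_numpy()
    right = entity_df.sort_values(keys)["event_timestamp"].to_numpy()
    assert (left == right).all(), "event timestamps were altered"
```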
Expected: Historical features retrieved successfully, entity_df timestamps preserved, no NaN values for materialized features, point-in-time correctness guaranteed (no future data leakage), feature service groups features logically.
On failure: Check `entity_df` has the required columns (entity key columns + `event_timestamp`), verify feature view names match the registry, ensure the offline store has data for the requested time range, check for timezone mismatches (use UTC), verify entity IDs exist in source data, inspect logs for SQL query errors, validate that feature view TTLs cover the requested time range.
Retrieve low-latency features from online store for model serving.
```python
# serve_features.py
from feast import FeatureStore
import time

# Initialize feature store
fs = FeatureStore(repo_path=".")

def get_inference_features(customer_ids: list, request_data: dict = None):
    ...  # (see EXAMPLES.md for complete implementation)
```
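The elided body is essentially one `get_online_features` call; a sketch, again with the illustrative `customer_id` key and feature references:

```python
# serve_features.py -- illustrative completion; keys and refs are assumptions
from feast import FeatureStore

fs = FeatureStore(repo_path=".")

def get_inference_features(customer_ids: list, request_data: dict = None):
    # Point lookups against the online store (Redis/DynamoDB), not the warehouse;
    # request_data would feed on-demand feature views (omitted here)
    return fs.get_online_features(
        features=[
            "customer_stats:total_spend",
            "customer_stats:txn_count",
        ],
        entity_rows=[{"customer_id": cid} for cid in customer_ids],
    ).to_dict()
```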
FastAPI integration:
```python
# api.py
from fastapi import FastAPI
from pydantic import BaseModel
from feast import FeatureStore
import mlflow

app = FastAPI()
fs = FeatureStore(repo_path=".")

# ... (see EXAMPLES.md for complete implementation)
```
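One way the elided endpoint could look; the model URI, request schema, and feature references are all assumptions for illustration:

```python
# api.py -- illustrative /predict endpoint; model URI and names are assumptions
import mlflow
from fastapi import FastAPI
from pydantic import BaseModel
from feast import FeatureStore

app = FastAPI()
fs = FeatureStore(repo_path=".")
model = mlflow.pyfunc.load_model("models:/churn_model/Production")  # assumed URI

class PredictionRequest(BaseModel):
    customer_id: int

@app.post("/predict")
def predict(req: PredictionRequest):
    # Fetch fresh features for this customer from the online store
    features = fs.get_online_features(
        features=["customer_stats:total_spend", "customer_stats:txn_count"],
        entity_rows=[{"customer_id": req.customer_id}],
    ).to_df()
    # The entity key column is not a model input
    prediction = model.predict(features.drop(columns=["customer_id"]))
    return {"prediction": float(prediction[0])}
```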
Expected: Online features retrieved in <10ms for single entity, batch retrieval scales efficiently, on-demand transformations execute correctly, request-time features merged with batch features, API responds quickly (<50ms end-to-end).
On failure: Check online store populated (run materialize if empty), verify Redis/DynamoDB connectivity and latency, ensure entity keys exist in online store, check for cold start issues (warm up cache), verify on-demand transformation logic, monitor online store memory/CPU usage, check network latency between service and online store.
Related skills:

- `track-ml-experiments` - Log feature metadata in MLflow experiments
- `orchestrate-ml-pipeline` - Schedule feature materialization jobs
- `version-ml-data` - Version raw data sources for feature engineering
- `deploy-ml-model-serving` - Integrate feature store with model serving
- `serialize-data-formats` - Choose efficient storage formats for features
- `design-serialization-schema` - Design schemas for feature sources