From harness-claude
Migrates monoliths to microservices incrementally using the Strangler Fig pattern with facade routing, feature flags, and parallel-run validation for zero-downtime feature extraction.
npx claudepluginhub intense-visions/harness-engineering --plugin harness-claude

This skill uses the workspace's default tool permissions.
> Migrate monoliths incrementally using the strangler fig pattern with facade routing.
Phase 1: Add a facade (gateway) in front of the monolith:
```typescript
// The facade initially routes ALL traffic to the monolith.
// Routes are then moved to new services one by one.
import express from 'express';
import { createProxyMiddleware } from 'http-proxy-middleware';

const app = express();
const MONOLITH_URL = process.env.MONOLITH_URL!;
const CATALOG_SERVICE_URL = process.env.CATALOG_SERVICE_URL!;

// Flag store to control which routes go to which target
class FeatureRouter {
  private flags: Map<string, boolean>;

  constructor() {
    this.flags = new Map([
      ['use-new-catalog-service', false],
      ['use-new-user-service', false],
      ['use-new-payment-service', false],
    ]);
  }

  isEnabled(flag: string): boolean {
    return this.flags.get(flag) ?? false;
  }

  // Loaded from DB/config at runtime; allows instant rollback.
  // `db` is assumed to be the app's database client (e.g. a Prisma instance).
  async reload(): Promise<void> {
    const config = await db.featureFlags.findMany();
    for (const { key, enabled } of config) {
      this.flags.set(key, enabled);
    }
  }
}

const router = new FeatureRouter();

// Create each proxy once at startup, not on every request.
const catalogProxy = createProxyMiddleware({
  target: CATALOG_SERVICE_URL,
  changeOrigin: true,
  pathRewrite: { '^/api/catalog': '' },
});

// Catalog routes: gradually migrated
app.use('/api/catalog', (req, res, next) => {
  if (router.isEnabled('use-new-catalog-service')) {
    return catalogProxy(req, res, next);
  }
  // Falls through to the monolith proxy below
  next();
});

// Everything else goes to the monolith
app.use('/', createProxyMiddleware({ target: MONOLITH_URL, changeOrigin: true }));
```
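The flag store's reload couples flag state to a database client. A variant that takes the loaded snapshot as a plain argument keeps the store free of I/O and makes refresh logic easy to test; a caller can then poll the database (or a flag service) on an interval. This is a sketch; `FlagStore` and `FlagConfig` are illustrative names, not part of the plugin:

```typescript
type FlagConfig = { key: string; enabled: boolean };

class FlagStore {
  private flags = new Map<string, boolean>();

  isEnabled(flag: string): boolean {
    return this.flags.get(flag) ?? false;
  }

  // Apply a freshly loaded config snapshot (from a DB, file, or flag service).
  apply(config: FlagConfig[]): void {
    for (const { key, enabled } of config) {
      this.flags.set(key, enabled);
    }
  }
}
```

The polling loop then becomes a one-liner at startup, e.g. `setInterval(async () => store.apply(await loadFlags()), 10_000)`, and a stale snapshot never blocks request handling.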
Phase 2: Extract a service with parallel-run validation:
```typescript
// Before full cutover, run both implementations and compare.
// deepDiff, logger, and metrics are the app's own diffing/observability helpers.
app.get('/api/catalog/products/:id', async (req, res) => {
  const [monolithResult, newServiceResult] = await Promise.allSettled([
    fetch(`${MONOLITH_URL}/products/${req.params.id}`).then((r) => r.json()),
    fetch(`${CATALOG_SERVICE_URL}/products/${req.params.id}`).then((r) => r.json()),
  ]);

  if (monolithResult.status === 'fulfilled' && newServiceResult.status === 'fulfilled') {
    const diff = deepDiff(monolithResult.value, newServiceResult.value);
    if (diff.length > 0) {
      logger.warn('Response mismatch', { productId: req.params.id, diff });
      metrics.increment('strangler.response_mismatch', { route: 'product_detail' });
    }
  }

  // Always return the monolith response during the parallel run
  if (monolithResult.status === 'fulfilled') {
    res.json(monolithResult.value);
  } else {
    res.status(502).json({ error: 'Upstream request failed' });
  }
});
```
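The `deepDiff` helper used in the comparison is assumed rather than defined. A minimal sketch of what it could look like is below; it returns the list of paths at which two JSON-like values differ. In practice a library such as `deep-diff` or `fast-json-patch` could fill this role:

```typescript
type Json = string | number | boolean | null | Json[] | { [k: string]: Json };

// Return the dotted paths where two JSON-like values differ ('(root)' for
// a top-level primitive mismatch). Empty array means the values match.
function deepDiff(a: Json, b: Json, path = ''): string[] {
  if (a === b) return [];
  if (
    typeof a !== 'object' || typeof b !== 'object' ||
    a === null || b === null ||
    Array.isArray(a) !== Array.isArray(b)
  ) {
    return [path || '(root)'];
  }
  const keys = new Set([...Object.keys(a), ...Object.keys(b)]);
  const diffs: string[] = [];
  for (const key of keys) {
    const childPath = path ? `${path}.${key}` : key;
    diffs.push(...deepDiff((a as any)[key], (b as any)[key], childPath));
  }
  return diffs;
}
```

Logging the differing paths (not the full payloads) keeps mismatch logs small and avoids leaking response bodies into log storage.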
Phase 3: Full cutover with instant rollback:
```typescript
// The feature flag controls routing: flip it to cut over.
// proxyToService is a small helper that returns a cached
// http-proxy-middleware instance for the given target.
app.use('/api/catalog', async (req, res, next) => {
  const enabled = await featureFlags.get('use-new-catalog-service');
  if (enabled) {
    proxyToService(CATALOG_SERVICE_URL)(req, res, next);
  } else {
    proxyToService(MONOLITH_URL)(req, res, next);
  }
});

// Rollback = flip the flag back. No deployment needed.
```
Migration checklist per feature:
```typescript
const MIGRATION_STEPS = [
  '1. Identify the feature to extract (bounded context)',
  '2. Create the new service with its own database',
  '3. Set up data sync (dual-write or ETL) from monolith DB to new DB',
  '4. Deploy the facade in front of the monolith',
  '5. Run in parallel — route to both, compare responses',
  '6. Validate parity (no response diffs, same performance)',
  '7. Enable feature flag — route to new service',
  '8. Monitor for 1-2 weeks with instant rollback available',
  '9. Disable data sync from monolith',
  '10. Delete monolith code for this feature',
];
```
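Step 6's parity validation can be made mechanical rather than a judgment call: gate the flag flip on the mismatch metrics collected during the parallel run. A sketch, where the stats shape, the minimum sample size, and the 0.1% threshold are all illustrative choices, not part of the plugin:

```typescript
interface ParallelRunStats {
  comparisons: number; // total responses compared during the parallel run
  mismatches: number;  // comparisons where deepDiff reported differences
}

// Allow cutover only once enough traffic has been compared AND the
// observed mismatch rate is at or below the threshold.
function readyForCutover(
  stats: ParallelRunStats,
  minComparisons = 10_000,
  maxMismatchRate = 0.001
): boolean {
  if (stats.comparisons < minComparisons) return false;
  return stats.mismatches / stats.comparisons <= maxMismatchRate;
}
```

Wiring this into a CI check or a dashboard alert makes "validate parity" an auditable gate instead of a manual eyeball of the logs.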
Database migration strategy:
```typescript
// During migration: dual-write to keep both DBs in sync.
async function createProduct(data: CreateProductInput): Promise<Product> {
  // Write to the monolith DB (source of truth during migration)
  const product = await monolithDb.query('INSERT INTO products ... RETURNING *', [
    data.name,
    data.price,
  ]);

  // Also write to the new catalog service's DB (eventually the primary)
  await catalogDb.product.create({ data: { ...product, migrated: true } }).catch((err) => {
    logger.error('Dual-write to catalog DB failed', { productId: product.id, err });
    // Don't fail the request; the monolith is still the source of truth
  });

  return product;
}
```
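Dual-write only covers writes that happen after it is deployed; rows that already exist in the monolith still need a one-off backfill (the ETL option in step 3). A batched sketch is below; the reader/writer callbacks are hypothetical signatures injected so the cursor loop itself can be tested without a database:

```typescript
// Copy existing rows in ascending-id batches until the source is exhausted.
// readBatch returns up to `limit` rows with id > afterId, ordered by id.
async function backfill<T extends { id: number }>(
  readBatch: (afterId: number, limit: number) => Promise<T[]>,
  writeBatch: (rows: T[]) => Promise<void>,
  batchSize = 500
): Promise<number> {
  let cursor = 0;
  let copied = 0;
  while (true) {
    const rows = await readBatch(cursor, batchSize);
    if (rows.length === 0) break;
    await writeBatch(rows);
    cursor = rows[rows.length - 1].id; // advance past the last copied row
    copied += rows.length;
  }
  return copied;
}
```

Run the backfill after dual-write is live so no row falls between the two mechanisms, and make `writeBatch` an upsert so re-running it is safe.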
Strangler Fig metaphor: The strangler fig tree grows around a host tree, eventually replacing it. You build the new system around the old one, gradually replacing it piece by piece until the old system is gone.
What to extract first:
Anti-patterns:
Seam finding: Look for natural boundaries in the monolith, such as URL path prefixes (/catalog/, /orders/).

Reference: microservices.io/patterns/refactoring/strangler-application.html
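One mechanical way to surface URL-prefix seams is to bucket the monolith's route table by first path segment; large, self-contained buckets are candidate extraction targets. A sketch (the route list in the usage comment is made up):

```typescript
// Group routes by their first path segment to surface candidate seams.
function seamsByPrefix(routes: string[]): Map<string, string[]> {
  const groups = new Map<string, string[]>();
  for (const route of routes) {
    const prefix = '/' + (route.split('/')[1] ?? '');
    const bucket = groups.get(prefix) ?? [];
    bucket.push(route);
    groups.set(prefix, bucket);
  }
  return groups;
}

// e.g. seamsByPrefix(['/catalog/products', '/catalog/search', '/orders/new'])
// buckets two routes under /catalog and one under /orders.
```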