From harness-claude
Records application metrics with OpenTelemetry counters, histograms, and gauges for tracking request rates, error rates, latency distributions, and business metrics in monitoring setups.
npx claudepluginhub intense-visions/harness-engineering --plugin harness-claude

This skill uses the workspace's default tool permissions.
Record application metrics with OpenTelemetry counters, histograms, and gauges for monitoring and alerting
Create a meter with metrics.getMeter('service-name', '1.0.0') and prefer semantic-convention metric names such as http.server.request.duration and http.server.active_requests.

// telemetry/metrics.ts
import { metrics, ValueType } from '@opentelemetry/api';

const meter = metrics.getMeter('order-service', '1.0.0');

// Counter — monotonically increasing (total requests, errors)
export const orderCounter = meter.createCounter('orders.created', {
  description: 'Total number of orders created',
  unit: '1',
});

export const orderErrorCounter = meter.createCounter('orders.errors', {
  description: 'Total number of order creation errors',
  unit: '1',
});

// Histogram — distribution of values (latency, request size)
export const orderDurationHistogram = meter.createHistogram('orders.duration', {
  description: 'Order creation duration',
  unit: 'ms',
  valueType: ValueType.DOUBLE,
});

// UpDownCounter — can increase and decrease (active connections, queue depth)
export const activeOrdersGauge = meter.createUpDownCounter('orders.active', {
  description: 'Number of orders currently being processed',
  unit: '1',
});

// Observable gauge — value is read on collection (memory, CPU)
const heapGauge = meter.createObservableGauge('process.memory.heap', {
  description: 'Heap memory usage',
  unit: 'By',
});
heapGauge.addCallback((result) => {
  result.observe(process.memoryUsage().heapUsed);
});

// Usage in service code
export async function createOrder(userId: string, items: OrderItem[]): Promise<Order> {
  const startTime = performance.now();
  activeOrdersGauge.add(1, { 'order.type': 'standard' });
  try {
    const order = await db.orders.create({ userId, items });
    orderCounter.add(1, {
      'order.type': 'standard',
      'order.status': 'created',
      'payment.method': order.paymentMethod,
    });
    return order;
  } catch (error) {
    orderErrorCounter.add(1, {
      'error.type': error instanceof Error ? error.constructor.name : 'unknown',
    });
    throw error;
  } finally {
    activeOrdersGauge.add(-1, { 'order.type': 'standard' });
    orderDurationHistogram.record(performance.now() - startTime, {
      'order.type': 'standard',
    });
  }
}
Instrument types:
| Instrument | Type | Example |
|---|---|---|
| Counter | Monotonic sum | Total requests, bytes sent |
| UpDownCounter | Non-monotonic sum | Active connections, queue depth |
| Histogram | Distribution | Request duration, response size |
| Observable Counter | Async monotonic sum | CPU time |
| Observable UpDownCounter | Async non-monotonic | Thread count |
| Observable Gauge | Async point-in-time | Temperature, memory usage |
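The aggregation semantics of the synchronous rows above can be illustrated with a toy in-memory sketch. This is not the OTel SDK (the class names here are invented for illustration); it only shows what each instrument accumulates between collections:

```typescript
// Toy aggregations mirroring the table above (illustration only, not the SDK).
class SumAggregation {
  constructor(private monotonic: boolean, private value = 0) {}
  add(delta: number): void {
    // A Counter's sum may never go down; an UpDownCounter's may.
    if (this.monotonic && delta < 0) throw new Error('Counter cannot decrease');
    this.value += delta;
  }
  collect(): number { return this.value; }
}

class HistogramAggregation {
  private counts: number[];
  constructor(private boundaries: number[]) {
    // One bucket per boundary, plus an overflow bucket at the end.
    this.counts = new Array(boundaries.length + 1).fill(0);
  }
  record(value: number): void {
    let i = this.boundaries.findIndex((b) => value <= b);
    if (i === -1) i = this.boundaries.length; // overflow bucket
    this.counts[i] += 1;
  }
  collect(): number[] { return [...this.counts]; }
}

const requests = new SumAggregation(true);  // behaves like a Counter
const active = new SumAggregation(false);   // behaves like an UpDownCounter
const latency = new HistogramAggregation([50, 100, 250]);

requests.add(1); requests.add(1);
active.add(1); active.add(-1);
latency.record(30); latency.record(120); latency.record(999);

console.log(requests.collect()); // 2
console.log(active.collect());   // 0
console.log(latency.collect());  // [1, 0, 1, 1]
```

The observable variants differ only in when the value is produced: instead of the application calling add/record, the SDK invokes a callback at collection time.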
Attribute cardinality: Each unique combination of attributes creates a separate time series. With 10 status codes and 5 methods, you get 50 time series. Adding user ID (100K users) would create 5 million time series — do not do this.
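One way to keep cardinality bounded is to normalize attribute values before recording them. The helpers below are hypothetical (statusClass and routeTemplate are not OTel APIs) and sketch the idea:

```typescript
// Collapse raw status codes into at most five classes ("2xx", "4xx", ...).
function statusClass(code: number): string {
  return `${Math.floor(code / 100)}xx`;
}

// Replace path segments that look like IDs with a placeholder so that
// "/orders/17/items" and "/orders/8f3a.../items" share one time series.
function routeTemplate(path: string): string {
  return path
    .split('/')
    .map((seg) => (/^(\d+|[0-9a-f-]{16,})$/i.test(seg) ? '{id}' : seg))
    .join('/');
}

console.log(statusClass(404));                     // "4xx"
console.log(routeTemplate('/orders/12345/items')); // "/orders/{id}/items"
```

High-cardinality values such as user or order IDs belong on spans (or logs), where each record stands alone, not on metric attributes, where each value multiplies the number of series.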
Recommended metric names (OpenTelemetry semantic conventions):

- http.server.request.duration — request latency histogram
- http.server.active_requests — concurrent requests gauge
- http.client.request.duration — outgoing request latency
- db.client.operation.duration — database query duration
- messaging.process.duration — message processing time

Histogram bucket configuration: Default buckets work for most cases. Customize them for specific SLOs:
import {
  MeterProvider,
  View,
  ExplicitBucketHistogramAggregation,
} from '@opentelemetry/sdk-metrics';

const meterProvider = new MeterProvider({
  views: [
    new View({
      instrumentName: 'http.server.request.duration',
      aggregation: new ExplicitBucketHistogramAggregation([
        5, 10, 25, 50, 100, 250, 500, 1000, 2500, 5000,
      ]),
    }),
  ],
});
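The payoff of aligning boundaries to SLO thresholds is that "fraction of requests within X ms" can be read exactly from the bucket counts. A sketch, assuming the OTLP-style layout of one count per boundary plus a final overflow bucket:

```typescript
// Given explicit bucket boundaries and per-bucket counts (as an OTLP
// histogram data point carries them), compute the fraction of requests
// at or below a threshold. Exact only when the threshold is a boundary.
function fractionWithin(
  boundaries: number[],
  counts: number[], // counts.length === boundaries.length + 1
  thresholdMs: number
): number {
  const total = counts.reduce((a, b) => a + b, 0);
  if (total === 0) return 1; // no requests, SLO trivially met
  let within = 0;
  for (let i = 0; i < boundaries.length; i++) {
    if (boundaries[i] <= thresholdMs) within += counts[i];
  }
  return within / total;
}

// Buckets from the View above: share of requests at or under 100 ms.
console.log(fractionWithin(
  [5, 10, 25, 50, 100, 250, 500, 1000, 2500, 5000],
  [0, 2, 5, 10, 8, 3, 1, 1, 0, 0, 0],
  100
)); // ≈0.83
```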
RED method: Rate (requests/sec), Errors (error rate), Duration (latency distribution). These three metrics cover most monitoring needs for any service.
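In practice a backend such as Prometheus computes rate and error ratio from the exported counters (e.g. with rate()); the arithmetic itself is just deltas over a window, sketched here with invented names:

```typescript
// RED from two counter snapshots taken windowSeconds apart (sketch only;
// your backend normally does this for you).
interface Snapshot { requests: number; errors: number; }

function red(prev: Snapshot, curr: Snapshot, windowSeconds: number) {
  const requests = curr.requests - prev.requests;
  const errors = curr.errors - prev.errors;
  return {
    rate: requests / windowSeconds, // requests per second
    errorRatio: requests === 0 ? 0 : errors / requests,
  };
}

console.log(red(
  { requests: 1000, errors: 10 },
  { requests: 1600, errors: 13 },
  60
)); // rate 10 req/s, errorRatio 0.005
```

Duration, the third letter of RED, comes from the histogram rather than from counters, which is why orders.duration above is a histogram and not a sum.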
https://opentelemetry.io/docs/concepts/signals/metrics/