Implements immutable database audit trails via triggers or app-level logging to track who changed what, when, and why. Useful for compliance (SOX, HIPAA, GDPR), security audits, debugging, and rollbacks.
> Recording who changed what, when, and why, using trigger-based or application-level change tracking with immutable append-only logs.
Two approaches to change tracking:
1. Trigger-based audit:
Database triggers fire on every INSERT/UPDATE/DELETE and write to an audit table automatically. Advantages: catches all changes, including raw SQL, migrations, and admin console queries. Disadvantages: tight coupling to the schema, performance overhead on high-write tables, and no application context (which user, which API endpoint) unless the application explicitly passes it in via session settings.
2. Application-level audit:
Application code writes audit records before or after data changes. Advantages: captures business context (user ID, request ID, reason for change), can be asynchronous. Disadvantages: bypassable by raw SQL or migrations, requires discipline in every write path.
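The application-level approach can be sketched in Python. This is an illustrative, self-contained version using the standard-library sqlite3 module as a stand-in for the real database; the table and column names mirror the generic audit schema used in this document, and the key point is that the data change and its audit row commit in the same transaction:

```python
import json
import sqlite3

# Illustrative in-memory schema; a real system would use the PostgreSQL
# audit_log table described in this document.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance REAL)")
conn.execute("""CREATE TABLE audit_log (
    id INTEGER PRIMARY KEY AUTOINCREMENT,
    table_name TEXT NOT NULL,
    record_id TEXT NOT NULL,
    action TEXT NOT NULL,
    old_values TEXT,
    new_values TEXT,
    changed_by TEXT)""")

def audited_update(conn, record_id, new_balance, user):
    # Read the old state, apply the change, and write the audit row in
    # the SAME transaction, so data and audit record commit atomically.
    with conn:  # sqlite3 connection as context manager = one transaction
        (old_balance,) = conn.execute(
            "SELECT balance FROM accounts WHERE id = ?", (record_id,)).fetchone()
        conn.execute("UPDATE accounts SET balance = ? WHERE id = ?",
                     (new_balance, record_id))
        conn.execute(
            "INSERT INTO audit_log (table_name, record_id, action, "
            "old_values, new_values, changed_by) "
            "VALUES ('accounts', ?, 'UPDATE', ?, ?, ?)",
            (str(record_id), json.dumps({"balance": old_balance}),
             json.dumps({"balance": new_balance}), user))

conn.execute("INSERT INTO accounts (id, balance) VALUES (1, 100.0)")
audited_update(conn, 1, 150.0, "user_42")
row = conn.execute(
    "SELECT action, old_values, new_values, changed_by FROM audit_log").fetchone()
print(row)  # ('UPDATE', '{"balance": 100.0}', '{"balance": 150.0}', 'user_42')
```

The discipline this approach demands is visible here: every write path must go through a helper like audited_update, or the change escapes the audit trail.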
Audit table schema:
CREATE TABLE audit_log (
id bigserial PRIMARY KEY,
table_name varchar NOT NULL,
record_id varchar NOT NULL,
action varchar NOT NULL CHECK (action IN ('INSERT', 'UPDATE', 'DELETE')),
old_values jsonb,
new_values jsonb,
changed_by varchar,
changed_at timestamptz NOT NULL DEFAULT now(),
context jsonb
);
CREATE INDEX idx_audit_table_record ON audit_log (table_name, record_id, changed_at);
Immutability enforcement -- the audit table must never allow UPDATE or DELETE:
CREATE RULE no_update_audit AS ON UPDATE TO audit_log DO INSTEAD NOTHING;
CREATE RULE no_delete_audit AS ON DELETE TO audit_log DO INSTEAD NOTHING;
Alternatively, revoke UPDATE and DELETE privileges on the table from all roles, or use row-level security policies that allow only INSERT and SELECT, reserving reads of the full log for a dedicated audit reader role.
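The append-only guarantee can be demonstrated end to end with a trigger that aborts the offending statement. The sketch below uses SQLite (via Python's stdlib) purely so the example is self-contained and runnable; the CREATE RULE and privilege-based approaches above are the PostgreSQL-native equivalents:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE audit_log (id INTEGER PRIMARY KEY, action TEXT)")
# BEFORE UPDATE/DELETE triggers that RAISE(ABORT, ...) make the table
# append-only: inserts succeed, everything else fails loudly.
conn.execute("""CREATE TRIGGER no_update_audit BEFORE UPDATE ON audit_log
    BEGIN SELECT RAISE(ABORT, 'audit rows are immutable'); END""")
conn.execute("""CREATE TRIGGER no_delete_audit BEFORE DELETE ON audit_log
    BEGIN SELECT RAISE(ABORT, 'audit rows are immutable'); END""")

conn.execute("INSERT INTO audit_log (action) VALUES ('INSERT')")  # allowed
blocked = False
try:
    conn.execute("DELETE FROM audit_log")
except sqlite3.DatabaseError:  # RAISE(ABORT, ...) surfaces as an error
    blocked = True
count = conn.execute("SELECT count(*) FROM audit_log").fetchone()[0]
print(blocked, count)  # True 1
```

Note one difference in failure mode: the PostgreSQL rules above silently swallow the UPDATE/DELETE, while an aborting trigger (like this one, or the MySQL SIGNAL variant later in this document) raises an error the caller can observe.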
Financial transactions table with trigger-based audit:
1. Generic audit trigger function:
CREATE OR REPLACE FUNCTION audit_trigger_func()
RETURNS trigger AS $$
BEGIN
  IF TG_OP = 'INSERT' THEN
    INSERT INTO audit_log (table_name, record_id, action, new_values, changed_by)
    VALUES (TG_TABLE_NAME, NEW.id::text, 'INSERT', to_jsonb(NEW),
            current_setting('app.current_user', true));
    RETURN NEW;
  ELSIF TG_OP = 'UPDATE' THEN
    INSERT INTO audit_log (table_name, record_id, action, old_values, new_values, changed_by)
    VALUES (TG_TABLE_NAME, NEW.id::text, 'UPDATE', to_jsonb(OLD), to_jsonb(NEW),
            current_setting('app.current_user', true));
    RETURN NEW;
  ELSIF TG_OP = 'DELETE' THEN
    INSERT INTO audit_log (table_name, record_id, action, old_values, changed_by)
    VALUES (TG_TABLE_NAME, OLD.id::text, 'DELETE', to_jsonb(OLD),
            current_setting('app.current_user', true));
    RETURN OLD;
  END IF;
  RETURN NULL;  -- defensive: not reached for the three DML operations above
END;
$$ LANGUAGE plpgsql;
2. Attach trigger to the transactions table:
CREATE TRIGGER audit_transactions
AFTER INSERT OR UPDATE OR DELETE ON transactions
FOR EACH ROW EXECUTE FUNCTION audit_trigger_func();
3. Performing an update and viewing the audit record:
BEGIN;
-- Set application context for this transaction (SET LOCAL only takes
-- effect inside an explicit transaction block)
SET LOCAL app.current_user = 'user_42';
SET LOCAL app.request_id = 'req_abc123';
UPDATE transactions SET amount = 150.00 WHERE id = 1;
COMMIT;
4. Querying the audit trail:
SELECT action, old_values->>'amount' AS old_amount,
new_values->>'amount' AS new_amount, changed_by, changed_at
FROM audit_log
WHERE table_name = 'transactions' AND record_id = '1'
ORDER BY changed_at;
Returns the full change history: who changed the amount, from what value, to what value, and when.
Per-table audit tables (transactions_audit, accounts_audit, users_audit) lead to schema explosion. Use a single generic audit table keyed by table_name with JSONB values instead.
hstore for row diffs -- compute only changed columns:
CREATE EXTENSION IF NOT EXISTS hstore;
-- In the trigger function, log only changed fields:
INSERT INTO audit_log (table_name, record_id, action, old_values, new_values)
VALUES (
TG_TABLE_NAME, NEW.id::text, 'UPDATE',
to_jsonb(hstore(OLD) - hstore(NEW)), -- removed/changed fields
to_jsonb(hstore(NEW) - hstore(OLD)) -- new/changed fields
);
This stores only the diff, dramatically reducing audit log storage for tables with many columns where most updates touch only 1-2 fields.
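The same changed-columns-only diff is straightforward to compute at the application level. A sketch with plain dicts standing in for row records (the function name row_diff is illustrative):

```python
def row_diff(old: dict, new: dict) -> tuple[dict, dict]:
    """Return (old_values, new_values) restricted to changed columns,
    mirroring the hstore(OLD) - hstore(NEW) trick above."""
    changed = {k for k in old.keys() | new.keys() if old.get(k) != new.get(k)}
    return ({k: old[k] for k in changed if k in old},
            {k: new[k] for k in changed if k in new})

old = {"id": 1, "amount": 100.0, "currency": "USD", "memo": None}
new = {"id": 1, "amount": 150.0, "currency": "USD", "memo": None}
print(row_diff(old, new))  # ({'amount': 100.0}, {'amount': 150.0})
```

As with the hstore version, only the one changed column is stored, no matter how wide the row is.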
The pgaudit extension provides statement-level audit logging -- it records which SQL statements were executed and by which database role. It complements row-level audit triggers rather than replacing them.
Passing application context to triggers:
-- In the application, before each transaction:
SET LOCAL app.current_user = 'user_42';
SET LOCAL app.request_id = 'req_abc123';
-- In the trigger, read the context:
current_setting('app.current_user', true) -- true = return NULL if not set
SET LOCAL scopes the setting to the current transaction. This bridges the gap between application context and database triggers.
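On the application side, the analogous pattern is request-scoped context that every audit write picks up automatically. A minimal sketch using Python's contextvars (the names audit_context and write_audit_record are illustrative, not from any library):

```python
import contextvars
import json

# Request-scoped audit context: set once per request/transaction,
# read implicitly by every audit write on that code path.
audit_context: contextvars.ContextVar[dict] = contextvars.ContextVar(
    "audit_context", default={})

def write_audit_record(table: str, record_id: str, action: str) -> dict:
    # Build the audit row; in a real system this would be an INSERT
    # (and the context would also be sent to the DB via SET LOCAL).
    ctx = audit_context.get()
    return {
        "table_name": table,
        "record_id": record_id,
        "action": action,
        "changed_by": ctx.get("user"),
        "context": json.dumps(ctx),
    }

audit_context.set({"user": "user_42", "request_id": "req_abc123"})
rec = write_audit_record("transactions", "1", "UPDATE")
print(rec["changed_by"])  # user_42
```

ContextVar gives the same scoping benefit as SET LOCAL: each request (or asyncio task) sees its own value, so concurrent requests cannot pollute each other's audit metadata.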
Change Data Capture (CDC) with logical replication avoids trigger overhead entirely: PostgreSQL logical decoding (via the pgoutput plugin) streams row changes from the WAL to consumers such as Debezium.
Event sourcing (see microservices-event-sourcing) is an audit-native architecture where the event log IS the source of truth. Rather than auditing a mutable table, every state change is an immutable event. Consider event sourcing when audit is a primary requirement, not an afterthought.
Audit log partitioning by month with automated retention:
CREATE TABLE audit_log (
-- columns as above
) PARTITION BY RANGE (changed_at);
-- Create monthly partitions, drop after retention period
-- Use pg_partman for automation
JSONB diff computation for efficient storage:
-- Store only changed fields instead of entire row snapshots
-- Reduces audit storage by 80-90% for wide tables
MySQL triggers use OLD and NEW row references similar to PostgreSQL:
-- In the mysql client, switch the statement delimiter first (e.g. DELIMITER //)
-- so the semicolons inside the trigger body are not interpreted as end-of-statement.
CREATE TRIGGER audit_transactions
AFTER UPDATE ON transactions
FOR EACH ROW
BEGIN
  INSERT INTO audit_log (table_name, record_id, action, old_values, new_values)
  VALUES ('transactions', OLD.id, 'UPDATE',
          JSON_OBJECT('amount', OLD.amount), JSON_OBJECT('amount', NEW.amount));
END;
MySQL lacks hstore -- use JSON_OBJECT() to serialize row values to JSON.
MySQL does not support CREATE RULE for immutability. Enforce with a BEFORE trigger:
CREATE TRIGGER prevent_audit_delete
BEFORE DELETE ON audit_log
FOR EACH ROW
  SIGNAL SQLSTATE '45000' SET MESSAGE_TEXT = 'Audit log rows cannot be deleted';
MySQL binlog serves as a CDC source -- the Debezium MySQL connector reads binlog events for change data capture.
MySQL lacks SET LOCAL for passing application context to triggers. Use session variables (SET @app_user = 'user_42') and read them in the trigger, but note these persist for the entire session, not just the current transaction.
Fintech platform with SOX compliance. All changes to accounts, transactions, and users tables tracked via the generic trigger function. Audit table partitioned by month, retained for 7 years (regulatory requirement). Volume: 50M audit rows per month. Query pattern: "all changes to account X in last 90 days" completes in 15ms using the composite index on (table_name, record_id, changed_at) with partition pruning. Application context (request_id, user_id, IP address) passed via SET LOCAL before each transaction, stored in the audit context JSONB column. During a SOX audit, the team reconstructed the complete state of an account at any point in the past 7 years by replaying the audit log.
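The point-in-time reconstruction described above is essentially a fold over the audit rows: start from nothing and apply each change in order up to the cutoff. A minimal sketch (row shape and function name are illustrative):

```python
from datetime import datetime

def state_at(audit_rows, cutoff):
    """Replay audit rows to rebuild a record's state as of `cutoff`.
    Each row is (changed_at, action, new_values); rows are replayed
    in changed_at order, as the audit_log index would return them."""
    state = None
    for changed_at, action, new_values in sorted(audit_rows, key=lambda r: r[0]):
        if changed_at > cutoff:
            break  # stop at the cutoff: later changes don't apply
        state = None if action == "DELETE" else new_values
    return state

rows = [
    (datetime(2024, 1, 1), "INSERT", {"amount": 100.0}),
    (datetime(2024, 3, 1), "UPDATE", {"amount": 150.0}),
    (datetime(2024, 6, 1), "DELETE", None),
]
print(state_at(rows, datetime(2024, 4, 1)))  # {'amount': 150.0}
print(state_at(rows, datetime(2024, 7, 1)))  # None (deleted by then)
```

Note this simple fold assumes full row snapshots in new_values; with the diff-only storage optimization, replay must instead merge each diff onto the accumulated state.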