Engineering · AI · PostgreSQL · MySQL

How to Audit Your Database Without Expensive Tools


Marcus Chen · Solutions Engineer · April 9, 2026 · 7 min read

Every database accumulates questions its owners should be able to answer: Who ran that query last Tuesday? When did this row change? Why did the orders table grow by 40% overnight? These are auditing questions, and most teams either ignore them (until something goes wrong) or pay for an enterprise monitoring suite that costs more than their entire dev tooling budget.

Neither option makes sense for the majority of companies. The good news is that you can build a practical, reliable database audit system using tools you either already have or can set up in an afternoon, and for the parts that still feel like too much manual work, AI-powered tools like AI for Database can handle the query layer entirely.

-

What Database Auditing Actually Means

"Database auditing" sounds like compliance jargon, but it's really just answering three questions on demand:

  • What changed? Which rows were inserted, updated, or deleted, and when?
  • Who did it? Which user or application connection made the change?
  • What queries ran? What SQL statements hit the database, at what time, and how long did they take?

    A full enterprise audit solution like Oracle Audit Vault or IBM Guardium answers all three in real time, with dashboards, reporting, and automated policy enforcement. They're also priced for Fortune 500 procurement teams.

    For everyone else, you can cover 90% of the same ground with a combination of built-in database features and a bit of structure.

    -

    Step 1: Turn On Query Logging

    Every major database has built-in query logging. Most teams leave it off because it generates volume and they're not sure what to do with the output. Here's how to enable it in the databases you're most likely using.

    PostgreSQL

    Edit postgresql.conf:

    log_min_duration_statement = 1000  # log queries over 1 second
    log_statement = 'ddl'              # always log schema changes
    log_connections = on
    log_disconnections = on

    Then reload:

    SELECT pg_reload_conf();

    To see what's currently logged:

    SHOW log_destination;
    SHOW logging_collector;

    MySQL / MariaDB

    In my.cnf:

    general_log = 1
    general_log_file = /var/log/mysql/general.log
    slow_query_log = 1
    slow_query_log_file = /var/log/mysql/slow.log
    long_query_time = 1
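If you'd rather query logs with SQL than tail files, MySQL can also write them to system tables. A sketch, assuming you additionally set log_output = TABLE in my.cnf:

```sql
-- With log_output = TABLE, the slow log is queryable directly:
SELECT start_time, user_host, query_time, sql_text
FROM mysql.slow_log
ORDER BY start_time DESC
LIMIT 20;
```

Table-based logging adds write overhead of its own, so reserve it for environments where the convenience is worth it.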

    What to actually log

    You don't need every query logged forever. A practical setup:

  • DDL statements (CREATE, ALTER, DROP): always. These are schema changes and should be permanent audit records.
  • Slow queries (anything over 1 second): useful for performance auditing.
  • Authentication events: connections and disconnections.
  • DML on sensitive tables: selectively log UPDATE/DELETE on tables like users, payments, orders.

    -

    Step 2: Create an Audit Trail Table

    Query logs tell you what ran. For data changes, you need to capture what changed at the row level. The standard approach is audit tables with triggers and while triggers get a bad reputation for complexity, audit triggers are one of their most legitimate uses.

    Here's a generic PostgreSQL audit table:

    CREATE TABLE audit_log (
        id          BIGSERIAL PRIMARY KEY,
        table_name  TEXT NOT NULL,
        operation   TEXT NOT NULL,       -- INSERT, UPDATE, DELETE
        row_id      BIGINT,
        old_data    JSONB,
        new_data    JSONB,
        changed_by  TEXT DEFAULT current_user,
        changed_at  TIMESTAMPTZ DEFAULT now()
    );

    And a trigger function that works across any table:

    CREATE OR REPLACE FUNCTION audit_trigger_function()
    RETURNS TRIGGER AS $$
    BEGIN
        -- to_jsonb() produces JSONB directly, matching the audit_log columns.
        -- Assumes each audited table has a BIGINT primary key named id.
        IF TG_OP = 'INSERT' THEN
            INSERT INTO audit_log(table_name, operation, row_id, new_data)
            VALUES (TG_TABLE_NAME, 'INSERT', NEW.id, to_jsonb(NEW));
        ELSIF TG_OP = 'UPDATE' THEN
            INSERT INTO audit_log(table_name, operation, row_id, old_data, new_data)
            VALUES (TG_TABLE_NAME, 'UPDATE', NEW.id, to_jsonb(OLD), to_jsonb(NEW));
        ELSIF TG_OP = 'DELETE' THEN
            INSERT INTO audit_log(table_name, operation, row_id, old_data)
            VALUES (TG_TABLE_NAME, 'DELETE', OLD.id, to_jsonb(OLD));
        END IF;
        RETURN NULL;  -- return value is ignored for AFTER triggers
    END;
    $$ LANGUAGE plpgsql;

    Attach it to a specific table:

    CREATE TRIGGER audit_users
    AFTER INSERT OR UPDATE OR DELETE ON users
    FOR EACH ROW EXECUTE FUNCTION audit_trigger_function();
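If you audit more than a couple of tables, attaching the trigger one table at a time gets tedious. A hypothetical sketch that loops over a list of table names (the names in the array are assumptions; swap in your own):

```sql
DO $$
DECLARE
    t TEXT;
BEGIN
    FOREACH t IN ARRAY ARRAY['users', 'payments', 'orders'] LOOP
        -- %I quotes each name as an identifier
        EXECUTE format(
            'CREATE TRIGGER audit_%I
             AFTER INSERT OR UPDATE OR DELETE ON %I
             FOR EACH ROW EXECUTE FUNCTION audit_trigger_function()',
            t, t);
    END LOOP;
END $$;
```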

    Now every change to users gets recorded automatically with a before/after snapshot. You can query the audit log directly:

    -- What changed on a specific user?
    SELECT operation, old_data, new_data, changed_at
    FROM audit_log
    WHERE table_name = 'users'
      AND row_id = 12345
    ORDER BY changed_at DESC;
    
    -- Who deleted records from the orders table this week?
    SELECT changed_by, COUNT(*), MIN(changed_at), MAX(changed_at)
    FROM audit_log
    WHERE table_name = 'orders'
      AND operation = 'DELETE'
      AND changed_at > now() - interval '7 days'
    GROUP BY changed_by;
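Audit tables grow quickly, and lookups like these stay fast only if they're indexed. A sketch of two indexes matching the query patterns above (the index names are arbitrary):

```sql
-- Serves "history of one row" lookups: table + row id, newest first
CREATE INDEX audit_log_table_row_idx
    ON audit_log (table_name, row_id, changed_at DESC);

-- Serves time-window scans ("what changed this week?")
CREATE INDEX audit_log_changed_at_idx
    ON audit_log (changed_at);
```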

    -

    Step 3: Track Schema Changes

    Data changes are important. Schema changes are critical. A DROP COLUMN or ALTER TABLE can silently break applications and reports. The quickest way to inspect schema state is PostgreSQL's system catalogs:

    -- List user tables and their owners (PostgreSQL's catalogs don't record
    -- creation timestamps, so pair this snapshot with DDL logging)
    SELECT schemaname, tablename, tableowner
    FROM pg_tables
    WHERE schemaname NOT IN ('pg_catalog', 'information_schema')
    ORDER BY tableowner;
    
    -- Recent DDL changes: the pgaudit extension writes audit records to the
    -- standard PostgreSQL server log, not a queryable table. With
    -- pgaudit.log = 'ddl' set, schema changes appear as log lines like:
    --   AUDIT: SESSION,1,1,DDL,CREATE TABLE,TABLE,public.users,...
    -- so search the server log for them, e.g.:
    --   grep 'AUDIT:' postgresql.log

    If you're on a managed database like Supabase, RDS, or Neon, check whether the pgaudit extension is available: it's the most complete DDL logging solution for PostgreSQL and requires no manual trigger setup.
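If pgaudit isn't available on your platform, a PostgreSQL event trigger can capture DDL into a regular table you can query. A sketch; ddl_log, log_ddl, and track_ddl are hypothetical names:

```sql
CREATE TABLE ddl_log (
    id          BIGSERIAL PRIMARY KEY,
    command_tag TEXT,
    object_name TEXT,
    executed_by TEXT DEFAULT current_user,
    executed_at TIMESTAMPTZ DEFAULT now()
);

CREATE OR REPLACE FUNCTION log_ddl()
RETURNS event_trigger AS $$
DECLARE
    r RECORD;
BEGIN
    -- One row per DDL command in the triggering statement
    FOR r IN SELECT * FROM pg_event_trigger_ddl_commands() LOOP
        INSERT INTO ddl_log(command_tag, object_name)
        VALUES (r.command_tag, r.object_identity);
    END LOOP;
END;
$$ LANGUAGE plpgsql;

CREATE EVENT TRIGGER track_ddl
ON ddl_command_end
EXECUTE FUNCTION log_ddl();
```

Dropped objects are reported separately, via the sql_drop event and pg_event_trigger_dropped_objects(), so add a second trigger if you need DROPs recorded too.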

    -

    Step 4: Make Audit Data Queryable by Non-Technical Teams

    Here's the practical problem: your audit tables and query logs are useful, but accessing them still requires someone who can write SQL. Most of the people who need audit information (ops managers, compliance teams, customer support) aren't going to do that.

    This is where AI for Database helps directly. Once your audit tables are set up, you can connect your database to AI for Database and let non-technical teammates query the audit log in plain English:

  • "Show me all changes to the subscriptions table in the last 24 hours"
  • "Who deleted records from the customers table this month?"
  • "What were the top 10 slowest queries yesterday?"
    The tool translates those questions into SQL, runs them against your actual database, and returns results as a table or chart, with no SQL knowledge required. You can also build a self-refreshing dashboard pinned to your audit tables so the security or compliance team has a live view of database activity without ever needing to ask an engineer.

    -

    Step 5: Set Up Automated Alerts for Suspicious Activity

    Static audit logs are reactive: you check them after something has already gone wrong. Combine them with automated monitoring and you can catch problems in real time.

    Some patterns worth alerting on:

    -- Alert when a single user deletes more than 100 rows in 10 minutes
    SELECT changed_by, COUNT(*) AS delete_count
    FROM audit_log
    WHERE operation = 'DELETE'
      AND changed_at > now() - interval '10 minutes'
    GROUP BY changed_by
    HAVING COUNT(*) > 100;
    
    -- Alert when the payments table is modified outside business hours
    -- (here, 08:00-18:59); the one-day window keeps the scan cheap
    SELECT *
    FROM audit_log
    WHERE table_name = 'payments'
      AND changed_at > now() - interval '1 day'
      AND EXTRACT(HOUR FROM changed_at) NOT BETWEEN 8 AND 18;

    With AI for Database action workflows, you can turn these queries into automated monitors: define the condition, specify what to do when it triggers (Slack message, email, webhook), and the system watches your database continuously without any polling code on your end.
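If you're not using a managed workflow, plain cron plus psql covers the basics. A hypothetical crontab sketch; DATABASE_URL and the Slack webhook URL are placeholders you'd supply:

```
# Every 10 minutes: run the mass-delete check; any rows returned become a Slack post
*/10 * * * * psql "$DATABASE_URL" -Atc "SELECT changed_by || ' deleted ' || COUNT(*) || ' rows' FROM audit_log WHERE operation = 'DELETE' AND changed_at > now() - interval '10 minutes' GROUP BY changed_by HAVING COUNT(*) > 100" | xargs -r -I{} curl -s -H 'Content-Type: application/json' -d '{"text": "Mass-delete alert: {}"}' https://hooks.slack.com/services/YOUR/WEBHOOK
```

Note that xargs -r (skip the command when input is empty) is a GNU extension; on other systems, move the logic into a small shell script that checks for output before calling curl.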

    -

    What You're Not Covering (And Whether That Matters)

    This approach has real limitations worth acknowledging:

  • Application-layer changes: If your app uses a shared database user (most do), the changed_by field captures the app's username, not the individual human. To attribute changes to individual people, your application needs to run something like SET LOCAL app.current_user_id = '42' at the start of each transaction, and the audit trigger needs to read it back with current_setting('app.current_user_id', true).
  • Read auditing: Trigger-based audit logs only capture writes. If you need to audit SELECT statements for compliance (HIPAA, for example), you'll need query logging plus pgaudit.
  • High-volume tables: On tables with millions of writes per day, audit triggers add write overhead. Benchmark before enabling on hot paths.
  • Log retention: Query logs and audit tables need retention policies. Set up a cron job or use your database's built-in log rotation.
    For most companies below enterprise scale, these limitations are manageable. You get 90% of the compliance and debugging value at 5% of the cost.
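For the retention point above, a sketch of a simple purge, assuming a 90-day policy (adjust the interval to yours):

```sql
-- Drop audit rows older than 90 days; run nightly from cron or a scheduled job
DELETE FROM audit_log
WHERE changed_at < now() - interval '90 days';

-- If the pg_cron extension is available, schedule it in-database instead:
SELECT cron.schedule('purge-audit-log', '0 3 * * *',
    $$DELETE FROM audit_log WHERE changed_at < now() - interval '90 days'$$);
```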

    -

    Ready to try AI for Database?

    Query your database in plain English. No SQL required. Start free today.