Most database monitoring advice is written for database administrators — people who live in pg_stat_activity, who know what a connection pool is, and who have alerting pipelines already set up. That's not most of the people who need to know when something is wrong.
If you're a founder, product manager, or ops lead, your monitoring needs are different. You don't need to know about slow query plans. You need to know when daily signups drop off a cliff, when a customer's subscription status flips unexpectedly, or when your revenue metrics stop making sense. Those are database problems too — they just require watching the data, not the database engine.
This guide covers how to set up meaningful real-time monitoring on your database data without a DBA, without stored procedures, and without spending a week on infrastructure.
What "Real-Time Database Monitoring" Actually Means for Most Teams
There are two distinct things people mean when they say database monitoring:
Infrastructure monitoring — Is the database server healthy? CPU, memory, disk I/O, query latency, connection counts. This is what tools like Datadog, PgHero, and AWS CloudWatch cover. If you're on a managed database service (Supabase, PlanetScale, RDS), a lot of this is handled for you.
Data monitoring — Is the data in the database what it should be? Are orders still coming in? Did that migration break something? Are key metrics within expected ranges?
The second category is where most non-technical teams get stuck. Your infrastructure is fine, but something is wrong with the numbers — and you find out two days later from a customer complaint.
This article focuses on data monitoring.
The Traditional Approach (And Why It Breaks Down)
The traditional way to watch database data is with triggers and stored procedures. You write a trigger on the orders table that fires when a row is inserted, which calls a stored procedure, which sends a notification somewhere.
-- Traditional PostgreSQL trigger approach
CREATE OR REPLACE FUNCTION notify_on_new_order()
RETURNS trigger AS $$
BEGIN
PERFORM pg_notify('new_order', row_to_json(NEW)::text);
RETURN NEW;
END;
$$ LANGUAGE plpgsql;
CREATE TRIGGER order_insert_trigger
AFTER INSERT ON orders
FOR EACH ROW EXECUTE FUNCTION notify_on_new_order();

This works, but it has real downsides:
- CREATE TRIGGER and CREATE FUNCTION permissions — not always available without DBA access
- Trigger logic lives inside the database, where it is hard to version-control, test, and debug
- pg_notify only delivers notifications if a long-running listener process is connected to receive them

For most small-to-medium teams, this is more infrastructure than the problem warrants.
A More Practical Approach: Query-Based Monitoring
Instead of triggers, the cleaner pattern is to run periodic monitoring queries — SQL that checks current state against expected thresholds — and alert when something is out of range.
The query looks like this:
-- Check if today's signups are below threshold
SELECT COUNT(*) AS today_signups
FROM users
WHERE created_at >= CURRENT_DATE
AND created_at < CURRENT_DATE + INTERVAL '1 day';

You run this every hour. If today_signups drops below your threshold (say, 10 by 10am), you send a Slack message or an email.
The monitoring logic lives outside the database, which means:

- You only need read (SELECT) access, not trigger or function permissions
- Thresholds and checks can change without touching the schema
- Nothing runs on the write path, so monitoring can't slow down inserts
The challenge used to be the scaffolding around this: you needed to schedule the queries, handle the threshold comparisons, manage credentials, and wire up the notifications. That's where most teams gave up and said "we'll set this up properly someday."
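That scaffolding is small enough to sketch. Here's a minimal, hypothetical version of the pattern in Python, using an in-memory SQLite database as a stand-in for your production Postgres (table and column names are borrowed from the examples in this article). A real setup would run the check on a cron schedule and post to Slack instead of printing:

```python
import sqlite3
from datetime import datetime

def check_threshold(conn, sql, threshold):
    """Run a monitoring query and return True when its single value is below threshold."""
    value = conn.execute(sql).fetchone()[0]
    return value < threshold

# Stand-in for the production database: an in-memory SQLite instance.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, created_at TEXT)")

# Seed 4 signups "today" so the check has something to count.
today = datetime.now().strftime("%Y-%m-%d")
conn.executemany(
    "INSERT INTO users (created_at) VALUES (?)",
    [(f"{today} 09:00:00",)] * 4,
)

# SQLite equivalent of the signup-count query above.
signups_sql = (
    "SELECT COUNT(*) FROM users "
    "WHERE date(created_at) = date('now', 'localtime')"
)

if check_threshold(conn, signups_sql, threshold=10):
    # A real monitor would post to Slack or send an email here.
    print("ALERT: today's signups are below threshold")
```

The whole pattern is one function plus a scheduler; the parts that used to take a week are the credential handling and the notification plumbing around it.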
Setting Up Monitoring with Action Workflows
AI for Database has a feature built exactly for this: action workflows. You define a condition in plain English, set a threshold, and choose what happens when the condition is met.
For example:

"If daily signups drop below 30 by noon, send a Slack message to the growth channel."

Or:

"If any order has been stuck in processing for more than 2 hours, email the ops lead."

Or:

"If more than 5 background jobs fail within an hour, post an alert to the engineering channel."
Under the hood, AI for Database translates these conditions into SQL queries, runs them on a schedule against your connected database, and fires the action when the threshold is crossed. You don't write triggers. You don't write cron jobs. You describe what you want to watch, and the system watches it.
-- What runs under the hood for "daily signups below 30":
SELECT COUNT(*) AS signups_today
FROM users
WHERE created_at::date = CURRENT_DATE;
-- If result < 30 → trigger alert

Five Monitoring Queries Every Product Team Should Have
Here are practical examples you can set up today, whether you're using AI for Database or building your own monitoring stack.
1. Daily new user count
SELECT COUNT(*) AS new_users
FROM users
WHERE created_at >= CURRENT_DATE;

Alert threshold: below your typical daily average.
2. Orders stuck in processing
SELECT id, customer_id, created_at, status
FROM orders
WHERE status = 'processing'
AND created_at < NOW() - INTERVAL '2 hours';

Alert threshold: any rows returned.
3. Revenue vs. prior day
SELECT
SUM(CASE WHEN created_at::date = CURRENT_DATE THEN amount ELSE 0 END) AS today,
SUM(CASE WHEN created_at::date = CURRENT_DATE - 1 THEN amount ELSE 0 END) AS yesterday
FROM orders
WHERE created_at >= CURRENT_DATE - 1
AND status = 'completed';

Alert threshold: today < yesterday * 0.5 (50% drop).
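The 50% rule is just arithmetic, but it's worth encoding explicitly so the comparison is easy to adjust and can't silently misfire on a zero. A minimal sketch (the function name and ratio parameter are illustrative, not part of any tool's API):

```python
def revenue_dropped(today: float, yesterday: float, ratio: float = 0.5) -> bool:
    """True when today's completed revenue has fallen below `ratio` of yesterday's.

    Guards against yesterday being zero, which would otherwise
    make any quiet day look like a crash.
    """
    return yesterday > 0 and today < yesterday * ratio
```

With the query's two sums in hand, revenue_dropped(today=400, yesterday=1000) flags the drop, while revenue_dropped(600, 1000) does not.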
4. Failed jobs in the queue
SELECT COUNT(*) AS failed_jobs
FROM background_jobs
WHERE status = 'failed'
AND updated_at >= NOW() - INTERVAL '1 hour';

Alert threshold: more than 5 failures in the last hour.
5. Trial-to-paid conversion health
SELECT COUNT(*) AS trials_expiring_not_converted
FROM subscriptions
WHERE plan = 'trial'
AND trial_ends_at < NOW() + INTERVAL '24 hours'
AND converted_at IS NULL;

Alert threshold: more than 10 expiring trials without conversion (could signal a billing issue).
Building a Monitoring Dashboard
Alerts tell you when something is wrong. A dashboard tells you what normal looks like so you can spot when things are drifting before they break.
The key metrics to track on a real-time database dashboard:

- New signups per day
- Order volume and completed revenue
- Failed or stuck background jobs
- Trials expiring and trial-to-paid conversions
With AI for Database, you build these dashboards from plain English queries. "Show me daily signups for the last 30 days as a bar chart." The dashboard refreshes automatically on whatever schedule you set — no manual re-running, no stale data.
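If you're building your own stack instead, the SQL behind a chart like that is a straightforward GROUP BY over the signup date. A sketch using SQLite as a stand-in for Postgres (where the date cast would be created_at::date, and a production query would add a 30-day cutoff); the table and dates are illustrative:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, created_at TEXT)")
conn.executemany(
    "INSERT INTO users (created_at) VALUES (?)",
    [("2024-05-01 09:00",), ("2024-05-01 14:00",), ("2024-05-02 11:00",)],
)

# "Daily signups as a bar chart" resolves to one row per day:
# group signups by their date and count them.
rows = conn.execute(
    """
    SELECT date(created_at) AS day, COUNT(*) AS signups
    FROM users
    GROUP BY day
    ORDER BY day
    """
).fetchall()
print(rows)  # [('2024-05-01', 2), ('2024-05-02', 1)]
```

Each row becomes one bar; the dashboard layer only re-runs the query on a schedule and redraws.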
What You Don't Need a DBA For
A common misconception: "we need to hire a data engineer before we can do real monitoring." Here's what you actually need:
You need a DBA for:

- Query performance tuning and index design
- Replication, failover, and backup strategy
- Schema changes on large production tables

You don't need a DBA for:

- Counting today's signups, orders, or revenue
- Alerting when a key metric crosses a threshold
- Building dashboards from scheduled read-only queries
The second list is business analytics. It requires read access to your database and a tool that can run queries on a schedule. That's it.
Security Considerations for Database Monitoring
Before connecting any monitoring tool to your database:
Use a read-only connection. Create a dedicated monitoring user with SELECT permissions only.
-- PostgreSQL: create read-only monitoring user
CREATE USER monitoring_user WITH PASSWORD 'strong-password';
GRANT CONNECT ON DATABASE your_db TO monitoring_user;
GRANT USAGE ON SCHEMA public TO monitoring_user;
GRANT SELECT ON ALL TABLES IN SCHEMA public TO monitoring_user;

Use a read replica if you have one. Monitoring queries run on a schedule and add load. Pointing them at a replica keeps your primary database clean.
Scope access to specific tables. If your monitoring only needs orders, users, and subscriptions, only grant access to those tables.
Rotate credentials. Treat monitoring credentials like any other service credential — rotate them, don't hardcode them in scripts.
AI for Database connects read-only by default and supports scoped table access so you control exactly what the tool can see.