Here's a situation most operations teams recognize: you find out something went wrong in your data hours after it happened. Signups dropped, an order processing job silently failed, a customer's account hit a limit — and nobody knew until someone happened to run a report.
The traditional answer to this problem is database triggers and stored procedures. Set a trigger on a table, write a procedure that fires a notification when a condition is met. It works, but it has a cost: someone has to write and maintain that code, it lives inside the database where it's invisible to most of your team, and every new alert condition means another round of DBA time.
This guide covers a different approach: monitoring your database for conditions and sending alerts without writing database code. No stored procedures, no triggers, no DBA access required.
Why Database Triggers Fall Short for Business Alerting
Triggers made sense when databases were the primary application layer and DBAs were the people closest to the data. In most modern setups, that's no longer true.
The problems with trigger-based alerting in practice:
They're tightly coupled to schema changes. If you rename a column or restructure a table, triggers break. Silently, sometimes. You might not discover the breakage until the next time you need the alert.
They require database access to create and edit. Business conditions change — "alert when daily revenue drops below $10,000" becomes "alert when weekly revenue drops below $60,000" — but making that change requires someone with database admin credentials.
They can affect query performance. A trigger fires synchronously on every matching row operation. A poorly written trigger on a high-traffic table becomes a performance problem.
They're hard to observe. Triggers don't show up in your application code, your git history, or your monitoring dashboards. They live in the database, often undocumented, discovered only when someone starts investigating a production issue.
There are legitimate use cases for triggers — enforcing referential integrity, auditing changes at the row level — but "send an alert when a business metric crosses a threshold" is usually not one of them.
The Alternative: External Database Monitoring
Instead of putting logic inside the database, you can run it outside: a separate system queries your database on a schedule, evaluates conditions, and fires alerts when conditions are met.
This approach has several practical advantages:

The logic is visible. Checks live outside the database, where the whole team can see, review, and change them.

No DBA credentials required. Adjusting a threshold or adding a condition is a change to the monitoring system, not to the database.

No impact on write performance. Queries run on a schedule with read-only access, so a badly tuned check can't slow down inserts on a high-traffic table.

Schema changes fail loudly. A renamed column makes the check error on its next run, which you notice, rather than leaving a trigger that silently never fires again.
The tradeoff is that this approach polls rather than reacts — checks run every few minutes rather than on every row change. For most business alerting scenarios (revenue anomalies, signup volume, error counts), polling every 5-15 minutes is more than fast enough.
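To make the model concrete, here is a minimal sketch of one such check in Python with SQLite. The `users` table, column names, and threshold are illustrative assumptions, not a prescribed schema:

```python
import sqlite3

# Hypothetical check: alert when today's signup count falls below a threshold.
# Table name, column names, and the threshold are illustrative assumptions.
SIGNUP_THRESHOLD = 30

def check_signups(conn, threshold=SIGNUP_THRESHOLD):
    """Run the condition query and return an alert message, or None."""
    row = conn.execute(
        "SELECT COUNT(*) FROM users WHERE created_at >= DATE('now')"
    ).fetchone()
    count = row[0]
    if count < threshold:
        return f"Daily signups: {count} (threshold: {threshold})"
    return None  # condition not met, no alert

# In a real deployment this function would run on a schedule (cron, a
# worker loop, or a hosted monitor), and a returned message would be
# forwarded to Slack, email, or a webhook.
```

The point of the sketch is the shape, not the specifics: a read-only query, a condition check, and a notification step, all living outside the database.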
What Conditions Are Worth Alerting On?
Before setting up any tooling, it's worth thinking through which database conditions actually need alerts. The answer varies by team, but common high-value ones include:
Volume drops: Daily signups, orders, or transactions falling below a threshold often indicate a funnel problem — a broken form, a payment processing error, a marketing campaign that stopped running.
```sql
SELECT COUNT(*) AS signups_today
FROM users
WHERE created_at >= CURRENT_DATE;
-- Alert if this is below 40 by 2pm
```

Spike detection: Unusual increases can indicate spam, data import issues, or abuse.
```sql
SELECT COUNT(*) AS errors_last_hour
FROM error_logs
WHERE logged_at >= NOW() - INTERVAL '1 hour';
-- Alert if above 500
```

Overdue records: Jobs that should complete within a time window but haven't.
```sql
SELECT COUNT(*) AS stalled_jobs
FROM background_jobs
WHERE status = 'running'
AND started_at < NOW() - INTERVAL '30 minutes';
-- Alert if above 0
```

Business metric thresholds: Revenue, churn, utilization — anything with an agreed acceptable range.
```sql
SELECT COALESCE(SUM(amount), 0) AS mrr
FROM subscriptions
WHERE status = 'active';
-- Alert if it drops more than 5% from yesterday
```

Data quality issues: Orphaned records, nulls in required fields, duplicates.
```sql
SELECT COUNT(*) AS orders_without_customer
FROM orders o
LEFT JOIN customers c ON o.customer_id = c.id
WHERE c.id IS NULL;
-- Alert if above 0
```

Setting Up Alerts with AI for Database
AI for Database has an action workflow system built specifically for this use case: monitor a database condition, fire an alert when it's met.
The setup doesn't require writing database code. You define:

- The condition to watch, in plain English or SQL
- How often to check it
- What should happen when it fires: a Slack message, an email, or a webhook call
For example: "Check every hour whether any background jobs have been running for more than 20 minutes. If yes, send a Slack alert to #engineering-alerts with the job IDs."
That alert would previously have required a trigger on the background_jobs table, a stored procedure to format the notification, and some external notification plumbing. Now it's a few form fields.
Connecting the database: AI for Database connects over a standard database connection string. It needs read access (read-only credentials are fine for most alerting use cases and recommended for security). Supported databases include PostgreSQL, MySQL, MongoDB, Supabase, BigQuery, MS SQL Server, SQLite, and PlanetScale.
Defining conditions in plain English: Instead of writing SQL, you can describe the condition you want to watch. The system interprets the condition, shows you the generated SQL for verification, and runs it on your schedule.
Setting up the action: Slack, email, and webhooks are supported. For Slack, you connect your workspace once and can then target any channel. For webhooks, you define the URL and AI for Database sends a POST request with the alert data when the condition fires.
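On the receiving end of a webhook, a few lines of parsing are enough to route the alert into whatever your team uses. The JSON field names below (`alert`, `value`, `threshold`) are assumptions for illustration; check the actual POST body your alerting tool sends:

```python
import json

# Hypothetical handler for an incoming alert webhook. The payload shape
# (alert name, current value, threshold) is an assumption for illustration.
def handle_alert_webhook(body: bytes) -> str:
    """Parse an alert payload and format a one-line notification."""
    payload = json.loads(body)
    name = payload.get("alert", "unknown alert")
    value = payload.get("value")
    threshold = payload.get("threshold")
    # Route this wherever it needs to go: a ticket, a pager, a chat message.
    return f"[ALERT] {name}: value={value}, threshold={threshold}"
```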
Structuring Alerts So They're Actually Useful
Alert systems tend to fail in one of two ways: either they send too many alerts and people learn to ignore them, or the alerts they do send are too vague to act on.
A few principles that help:
Include the relevant data in the alert, not just the condition. "Daily signups alert fired" is useless. "Daily signups: 12 (threshold: 30). Past 7 days average: 47. Check the signup form — last successful signup was 3h ago." is actionable.
Alert on rate of change, not just absolute values. A sudden 50% drop in a metric is more significant than a value being below a static threshold. If your baseline shifts over time, static thresholds drift from reality.
Include a link to a dashboard or query. Whoever gets the alert should be able to immediately see more context. If you've built dashboards in AI for Database, link directly to them from the alert.
Be deliberate about alert frequency. An alert for "any error in the last 5 minutes" that fires 40 times a night trains people to mute it. An alert for "more than 100 errors in the last hour" trains people to pay attention.
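The rate-of-change idea above can be sketched in a few lines. This is a hypothetical helper, not part of any tool's API; the 50% drop rule and the trailing-average baseline are illustrative choices:

```python
# Rate-of-change check: compare the latest value to a trailing baseline
# instead of a fixed threshold. The drop percentage is an illustrative choice.
def pct_change(current: float, baseline: float) -> float:
    """Percent change of current vs. baseline (negative means a drop)."""
    if baseline == 0:
        return 0.0
    return (current - baseline) / baseline * 100

def should_alert(history: list[float], current: float, drop_pct: float = 50.0) -> bool:
    """Alert when current falls more than drop_pct below the trailing average."""
    baseline = sum(history) / len(history)
    return pct_change(current, baseline) <= -drop_pct
```

Because the baseline is recomputed from recent history on every check, this kind of condition tracks a shifting normal instead of drifting away from it the way a static threshold does.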
What About Real-Time Alerting?
Polling every 5-15 minutes covers the vast majority of business alerting use cases. Most business processes don't need sub-minute alerting — if something goes wrong, knowing within 10 minutes is usually fine.
For genuine real-time needs (fraud detection, system health monitoring, real-time inventory), you're typically looking at a different architecture: database change data capture (CDC), a streaming platform like Kafka or Redpanda, and a consumer that processes changes as they happen. That's a legitimate engineering investment for teams that need it.
Most teams don't. They need "tell me if something looks wrong before the start of business tomorrow," and 15-minute polling does that fine.
Getting Started
The simplest path to database alerting without triggers:

1. Connect your database with a read-only connection string.
2. Describe the condition you want to watch in plain English, and verify the generated SQL.
3. Pick a check schedule and a destination: Slack, email, or a webhook.
That's the entire setup for basic alerting. You can add more conditions from there without touching the database again.
The goal isn't to build a comprehensive monitoring system on day one. It's to stop finding out about data problems hours after they happen. Start with one condition that would have caught a real problem in the last month — that's usually enough to see the value.
Try AI for Database free at aifordatabase.com — no SQL knowledge required, and the first database connection takes about two minutes.