
How to Track API Usage and Performance from Your Database
Marcus Chen · Solutions Engineer · April 7, 2026 · 8 min read

Every API you run leaves a trail. Requests come in, responses go out, errors happen, latency spikes, and all of it gets recorded somewhere. The question is whether you're actually watching it.

Most teams aren't. They rely on third-party APM tools or cloud provider dashboards, or they fly blind until a customer complains. But if you're already logging API activity to a database, and most production systems do, you have everything you need to build real-time visibility into your API's health and usage patterns.

This guide walks through exactly how to do that: what to log, how to query it, and how to build a live dashboard without spending hours writing SQL.

What Data to Store for API Tracking

Before you can analyze API usage, you need to capture it. The bare minimum is a table that records each request. Here's a practical schema for PostgreSQL or MySQL:

CREATE TABLE api_requests (
  id            BIGSERIAL PRIMARY KEY,
  timestamp     TIMESTAMPTZ NOT NULL DEFAULT NOW(),
  endpoint      TEXT NOT NULL,
  method        TEXT NOT NULL,        -- GET, POST, PUT, DELETE
  status_code   INT NOT NULL,
  response_time_ms INT NOT NULL,
  user_id       TEXT,
  api_key_id    TEXT,
  ip_address    TEXT,
  request_size  INT,
  response_size INT,
  error_message TEXT
);

CREATE INDEX idx_api_requests_timestamp ON api_requests (timestamp DESC);
CREATE INDEX idx_api_requests_endpoint  ON api_requests (endpoint, timestamp DESC);
CREATE INDEX idx_api_requests_api_key   ON api_requests (api_key_id, timestamp DESC);

If you're on MongoDB, you'd store the same fields as a document per request. The key fields you cannot skip:

  • timestamp: everything else depends on time-series analysis
  • endpoint: which route was hit
  • status_code: the difference between 200, 429, and 500 tells you everything
  • response_time_ms: your latency number
  • api_key_id or user_id: to see who is using what

Log this from your middleware layer. In Express (Node.js):

    app.use((req, res, next) => {
      const start = Date.now();
      res.on('finish', () => {
        db.query(
          `INSERT INTO api_requests
           (endpoint, method, status_code, response_time_ms, api_key_id, ip_address)
           VALUES ($1, $2, $3, $4, $5, $6)`,
          [req.path, req.method, res.statusCode, Date.now() - start,
           req.headers['x-api-key'], req.ip]
        ).catch((err) => console.error('request logging failed', err)); // never crash the app over a log write
      });
      next();
    });

    In Python/FastAPI, the same logic goes into a middleware function. In Rails, use an around_action or rack middleware.
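
As a rough, framework-agnostic sketch of that same middleware logic in plain Python (the `log_request` helper is hypothetical, and SQLite stands in for your production database so the example is self-contained):

```python
import sqlite3
import time

# In-memory store standing in for PostgreSQL/MySQL (assumption: real code
# would write through a pooled connection to the api_requests table above).
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE api_requests (
        endpoint TEXT, method TEXT, status_code INTEGER,
        response_time_ms INTEGER, api_key_id TEXT, ip_address TEXT
    )
""")

def log_request(endpoint, method, status_code, start, api_key_id, ip_address):
    """Record one finished request; mirrors the Express middleware above."""
    elapsed_ms = int((time.monotonic() - start) * 1000)
    conn.execute(
        "INSERT INTO api_requests VALUES (?, ?, ?, ?, ?, ?)",
        (endpoint, method, status_code, elapsed_ms, api_key_id, ip_address),
    )

# Simulate handling one request.
start = time.monotonic()
# ... your handler runs here ...
log_request("/api/users", "GET", 200, start, "key_123", "203.0.113.7")

row = conn.execute("SELECT endpoint, status_code FROM api_requests").fetchone()
print(row)  # ('/api/users', 200)
```

In a real FastAPI app the timing and insert would live inside an `@app.middleware("http")` function, but the shape of the logic is the same: capture a start time, let the handler run, write one row when the response finishes.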

    Key Metrics to Track

    Once you're logging requests, four metric categories matter most:

    1. Request volume: total requests per hour/day, broken down by endpoint and method. This tells you what's being used and when traffic peaks.

    2. Error rate: the percentage of requests returning 4xx and 5xx status codes. A sudden spike in 500s is your first warning of a broken deployment. A spike in 429s means clients are being throttled.

    3. Latency distribution: don't just track average response time. Track p50, p90, and p99. An average of 120ms with a p99 of 4000ms means 1% of your users are waiting 4 seconds, and you'd never see that in the average.

    4. Usage by API key or user: which customers are making the most requests, who's hitting rate limits, and who went quiet (a potential churn signal).
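
The gap between the average and the tail described in point 3 is easy to demonstrate with the standard library (the latency numbers below are made up for illustration):

```python
import statistics

# 99 fast requests plus one 4-second outlier (illustrative data).
latencies_ms = [80] * 99 + [4000]

avg = statistics.mean(latencies_ms)
# quantiles(n=100) returns 99 cut points; index 98 is the p99.
p99 = statistics.quantiles(latencies_ms, n=100)[98]

print(round(avg, 1))  # 119.2 -- looks perfectly healthy
print(p99)            # the single outlier dominates the tail
```

One slow request in a hundred barely moves the mean, which is exactly why the percentile queries below matter.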

    Here's a query for p50/p90/p99 latency by endpoint over the last 24 hours (PERCENTILE_CONT is PostgreSQL syntax; MySQL doesn't support it, so there you'd approximate percentiles with window functions):

    SELECT
      endpoint,
      COUNT(*)                                              AS request_count,
      PERCENTILE_CONT(0.50) WITHIN GROUP (ORDER BY response_time_ms) AS p50_ms,
      PERCENTILE_CONT(0.90) WITHIN GROUP (ORDER BY response_time_ms) AS p90_ms,
      PERCENTILE_CONT(0.99) WITHIN GROUP (ORDER BY response_time_ms) AS p99_ms,
      ROUND(AVG(CASE WHEN status_code >= 500 THEN 1.0 ELSE 0 END) * 100, 2) AS error_rate_pct
    FROM api_requests
    WHERE timestamp > NOW() - INTERVAL '24 hours'
    GROUP BY endpoint
    ORDER BY request_count DESC;

    And for daily request volume by hour:

    SELECT
      DATE_TRUNC('hour', timestamp) AS hour,
      COUNT(*) AS total_requests,
      COUNT(*) FILTER (WHERE status_code >= 400) AS error_requests
    FROM api_requests
    WHERE timestamp > NOW() - INTERVAL '7 days'
    GROUP BY 1
    ORDER BY 1;

    Building a Live API Dashboard

    Running these queries manually is fine for one-off investigations. For ongoing visibility, you want a dashboard that refreshes automatically.

    The traditional path: set up Grafana or Metabase, write all your queries in their query editors, configure refresh intervals, wire up alerts. For a developer, it's doable but takes half a day. For a product manager or a founder who wants to monitor their own API, it's a real barrier.

    AI for Database handles this differently. You connect your database once, and then you ask questions in plain English:

  • "Show me the top 10 endpoints by request volume in the last 7 days"
  • "What's the error rate by endpoint this week?"
  • "Which API keys have made more than 1000 requests today?"
  • "Show me hourly p99 latency for the /payments endpoint this month"
    AI for Database translates each question into SQL, runs it against your database, and returns a table or chart. You can then pin those results into a self-refreshing dashboard that updates on whatever schedule you set: every hour, every 15 minutes, or every day.

    This means your team can monitor API health without anyone needing to write or maintain SQL queries.

    Detecting Anomalies and Setting Alerts

    Dashboards tell you what's happening. Alerts tell you when something is wrong before you have to check.

    Two patterns worth implementing:

    Spike detection: compare the current hour's error count to the rolling average of the last 7 days at the same hour. A 3x spike is worth investigating.

    WITH baseline AS (
      SELECT
        EXTRACT(HOUR FROM timestamp) AS hour_of_day,
        AVG(hourly_errors) AS avg_errors
      FROM (
        SELECT
          DATE_TRUNC('hour', timestamp) AS ts,
          EXTRACT(HOUR FROM timestamp) AS hour_of_day,
          COUNT(*) FILTER (WHERE status_code >= 500) AS hourly_errors
        FROM api_requests
        WHERE timestamp > NOW() - INTERVAL '8 days'
          AND timestamp < NOW() - INTERVAL '1 day'
        GROUP BY 1, 2
      ) sub
      GROUP BY 1
    ),
    current_hour AS (
      SELECT COUNT(*) FILTER (WHERE status_code >= 500) AS errors
      FROM api_requests
      WHERE timestamp > DATE_TRUNC('hour', NOW())
    )
    SELECT
      c.errors AS current_errors,
      b.avg_errors AS expected_errors,
      ROUND(c.errors::numeric / NULLIF(b.avg_errors, 0), 2) AS spike_ratio
    FROM current_hour c
    CROSS JOIN baseline b
    WHERE b.hour_of_day = EXTRACT(HOUR FROM NOW());

    Latency degradation: check whether your p95 in the last 30 minutes is more than 50% higher than the 7-day baseline for the same endpoint.
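
That check can be sketched in a few lines of Python, assuming you've already pulled the two latency samples from the database (`baseline` and `recent` below are illustrative lists of response times in ms, not real data):

```python
import statistics

def p95(samples):
    # quantiles(n=20) yields 19 cut points; index 18 is the 95th percentile.
    return statistics.quantiles(samples, n=20)[18]

# Illustrative data: baseline hovers around 100 ms, recent sample has degraded.
baseline = [100, 105, 98, 110, 95, 102, 99, 104, 101, 97] * 10
recent   = [100, 160, 150, 170, 145, 155, 148, 165, 152, 158] * 3

degraded = p95(recent) > 1.5 * p95(baseline)
print(degraded)  # True: the recent p95 exceeds the baseline by more than 50%
```

In practice you'd run the two percentile queries against api_requests (last 30 minutes vs. the prior 7 days) and apply the same comparison per endpoint.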

    AI for Database's action workflows let you define conditions like "when the error rate for endpoint /api/checkout exceeds 5% in any 15-minute window, send a Slack message." The system queries your database on schedule and fires the action automatically, with no cron jobs and no custom alerting code to maintain.

    Segmenting Usage for Business Insights

    API performance data isn't just for engineers. It's a business signal.

    Customer health: if a key customer's API usage drops 80% week-over-week, that's a churn risk. If a customer's usage is growing 20% month-over-month, they might be ready to upgrade.

    SELECT
      api_key_id,
      SUM(CASE WHEN timestamp > NOW() - INTERVAL '7 days' THEN 1 ELSE 0 END)  AS last_7_days,
      SUM(CASE WHEN timestamp BETWEEN NOW() - INTERVAL '14 days' AND NOW() - INTERVAL '7 days' THEN 1 ELSE 0 END) AS prior_7_days,
      ROUND(
        (SUM(CASE WHEN timestamp > NOW() - INTERVAL '7 days' THEN 1 ELSE 0 END)::numeric /
         NULLIF(SUM(CASE WHEN timestamp BETWEEN NOW() - INTERVAL '14 days' AND NOW() - INTERVAL '7 days' THEN 1 ELSE 0 END), 0) - 1) * 100, 1
      ) AS wow_change_pct
    FROM api_requests
    WHERE timestamp > NOW() - INTERVAL '14 days'
    GROUP BY api_key_id
    ORDER BY wow_change_pct ASC;

    Feature adoption: endpoints map to features. If /api/export accounts for 0.3% of total requests, your export feature isn't being discovered. If /api/analyze is your most-called endpoint, that's what users actually value.

    Rate limit exposure: how many customers are hitting 429s? If it's more than 5%, you either have aggressive limits or customers whose usage patterns you haven't planned for.
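
One way to measure that exposure is the share of API keys that hit at least one 429 in the window. Sketched here with SQLite and made-up rows so it runs standalone; in production you'd run the equivalent query against your api_requests table:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE api_requests (api_key_id TEXT, status_code INTEGER)")
# Illustrative data: four customers, one of them getting throttled.
conn.executemany(
    "INSERT INTO api_requests VALUES (?, ?)",
    [("key_a", 200), ("key_a", 429), ("key_b", 200),
     ("key_c", 200), ("key_d", 200), ("key_d", 200)],
)

# Share of distinct API keys that received at least one 429.
throttled, total = conn.execute("""
    SELECT
      COUNT(DISTINCT CASE WHEN status_code = 429 THEN api_key_id END),
      COUNT(DISTINCT api_key_id)
    FROM api_requests
""").fetchone()

pct = 100.0 * throttled / total
print(pct)  # 25.0 -- one of four customers is being throttled
```

The CASE-inside-COUNT(DISTINCT ...) trick works the same way in PostgreSQL and MySQL, so the query ports directly; just add a timestamp filter for the window you care about.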

    Putting It All Together

    API tracking doesn't require a separate observability platform if your data is already in a database. The pattern is:

  • Log every request to a structured table with timestamp, endpoint, status code, latency, and key/user ID.
  • Index on timestamp and endpoint so queries stay fast as volume grows.
  • Build queries for the four core metrics: volume, error rate, latency distribution, and per-key usage.
  • Put those queries into a dashboard that refreshes on a schedule.
  • Set up alerts for anomalies: error rate spikes and latency degradation.

    The SQL gets you the data. The dashboard gets it in front of the right people. The alerts make sure you find out about problems before your customers do.

    If your team includes people who can't or don't want to write SQL, AI for Database makes every part of this accessible. Connect your database, ask questions in plain English, build the dashboard, set up the workflow. No query writing, no infrastructure to maintain.

    Ready to try AI for Database?

    Query your database in plain English. No SQL required. Start free today.