PostgreSQL · Performance · Database

PostgreSQL Performance: From Slow Queries to Sub-Millisecond

December 20, 2025 · 10 min read

The Problem

Our API p99 latency had crept up to 2.3 seconds. The culprit? Database queries whose cost had grown along with our data.

Step 1: Understanding the Query Planner

Before optimizing anything, we needed to understand what PostgreSQL was actually doing:

EXPLAIN (ANALYZE, BUFFERS, FORMAT TEXT)
SELECT * FROM orders
WHERE user_id = $1
AND status = 'active'
ORDER BY created_at DESC
LIMIT 20;

The output revealed sequential scans on a 50M row table.
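A sequential-scan plan for this shape of query looks roughly like the following (numbers elided, shown for orientation only):

Limit
  -> Sort
       Sort Key: created_at DESC
       -> Seq Scan on orders  (actual time=... rows=...)
            Filter: ((user_id = $1) AND (status = 'active'))
            Rows Removed by Filter: ...

The tell-tale signs are the Seq Scan node, a large Rows Removed by Filter count, and an explicit Sort node feeding the Limit.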

Step 2: Strategic Indexing

We created composite indexes that matched our query patterns:

CREATE INDEX CONCURRENTLY idx_orders_user_status_created
ON orders (user_id, status, created_at DESC);

Key insight: column order in composite indexes matters. Put columns used in equality predicates first, and the column used for range filters or ORDER BY last.
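To see why, compare two queries against idx_orders_user_status_created (the second query is hypothetical, added here for contrast):

-- Matches the index: equality on the two leading columns, and the
-- DESC created_at column satisfies the ORDER BY without an extra sort.
SELECT * FROM orders
WHERE user_id = $1 AND status = 'active'
ORDER BY created_at DESC
LIMIT 20;

-- Skips the leading column (user_id), so the planner cannot use the
-- index as an ordered path and will likely fall back to a scan.
SELECT * FROM orders
WHERE status = 'active'
ORDER BY created_at DESC
LIMIT 20;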

Step 3: Connection Pooling with PgBouncer

We switched from direct connections to PgBouncer in transaction mode. This alone reduced our connection overhead by 80%.
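A minimal pgbouncer.ini for this kind of setup might look like the sketch below (host, database name, and pool sizes are illustrative, not our production values):

[databases]
appdb = host=127.0.0.1 port=5432 dbname=appdb

[pgbouncer]
listen_addr = 127.0.0.1
listen_port = 6432
auth_type = md5
auth_file = /etc/pgbouncer/userlist.txt
pool_mode = transaction
default_pool_size = 20
max_client_conn = 1000

One caveat with transaction mode: session-level state (SET, advisory locks, session-scoped prepared statements) cannot be relied on across transactions, because consecutive transactions from one client may run on different server connections.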

Step 4: Partitioning

For our largest tables, we implemented range partitioning by date:

CREATE TABLE orders (
  id BIGSERIAL,
  created_at TIMESTAMPTZ NOT NULL,
  -- ...
) PARTITION BY RANGE (created_at);
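With declarative partitioning, each range is its own table. A monthly partition can be sketched like this (names and date bounds are illustrative):

CREATE TABLE orders_2025_12 PARTITION OF orders
  FOR VALUES FROM ('2025-12-01') TO ('2026-01-01');

Since PostgreSQL 11, an index defined on the partitioned parent is automatically created on every partition, so the composite index pattern from Step 2 carries over.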

Results

  • p99 latency: 2.3s → 12ms
  • Average query time: 450ms → 0.8ms
  • Database CPU: 85% → 15%

The lesson: most performance problems are solved with better indexing, not bigger hardware.
