Development · 7 min read

PostgreSQL Optimisation for SaaS — Scaling Database Performance

How to optimise PostgreSQL for SaaS applications at scale. Indexing, query optimisation, connection pooling, and performance tuning for UK startups.

PostgreSQL is the default database for serious SaaS applications — but default configurations won't scale. As your user base grows, unoptimised Postgres becomes a bottleneck. This post covers the optimisation techniques that keep PostgreSQL performant from hundreds to millions of users.

The PostgreSQL SaaS performance killers

Most PostgreSQL performance issues in SaaS fall into predictable categories:

  • N+1 queries: loading related data in application loops instead of joins
  • Missing indexes: sequential scans on large tables
  • Connection exhaustion: too many simultaneous connections
  • Unbounded growth: tables without retention or archiving strategies
  • Slow queries shipped without EXPLAIN ANALYZE investigation

The good news: these are all fixable with systematic optimisation.
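As a concrete sketch of the N+1 point, assume a hypothetical multi-tenant schema with projects and tasks tables (names chosen only for illustration):

```sql
-- N+1 pattern: the application loops over projects and issues
-- one query per project, so N projects means N+1 round trips
SELECT * FROM tasks WHERE project_id = $1;  -- repeated once per project

-- Join-based fix: fetch all projects and their tasks in a single query
SELECT p.id, p.name, t.id AS task_id, t.title
FROM projects p
LEFT JOIN tasks t ON t.project_id = p.id
WHERE p.tenant_id = $1;
```

One round trip also lets the planner pick a single efficient plan instead of repeating index lookups per loop iteration.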

Indexing strategy for multi-tenant SaaS

SaaS databases are typically multi-tenant, with most queries filtering by tenant_id. An effective indexing strategy includes:

  • Composite indexes starting with tenant_id for tenant-scoped queries
  • Partial indexes for common filtered subsets (e.g. WHERE active = true)
  • Covering indexes that include every column a query needs, enabling index-only scans
  • GIN indexes for JSONB and array queries

Every index adds write overhead, so profile your query patterns and index the 20% of queries that consume 80% of database time.
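Each of these index types can be sketched in plain SQL. The table and column names below (orders, users, events, payload) are hypothetical, chosen only to illustrate the patterns:

```sql
-- Composite index: tenant_id first, then the column queries filter or sort on
CREATE INDEX idx_orders_tenant_created
    ON orders (tenant_id, created_at DESC);

-- Partial index: only rows matching the common filter are indexed,
-- keeping the index small and cheap to maintain
CREATE INDEX idx_users_tenant_active
    ON users (tenant_id) WHERE active = true;

-- Covering index (PostgreSQL 11+): INCLUDE extra columns so the query
-- can be answered from the index alone (index-only scan)
CREATE INDEX idx_orders_tenant_status
    ON orders (tenant_id, status) INCLUDE (total, created_at);

-- GIN index for JSONB containment queries, e.g. payload @> '{"plan": "pro"}'
CREATE INDEX idx_events_payload
    ON events USING gin (payload jsonb_path_ops);
```

The jsonb_path_ops operator class produces a smaller, faster index than the default jsonb_ops, but it only supports the @> containment operator.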

Query optimisation patterns

Beyond indexing, query structure matters:

  • Use EXPLAIN (ANALYZE, BUFFERS, FORMAT JSON) to understand query plans
  • Prefer JOINs over N+1 queries — single complex query beats multiple simple ones
  • Paginate large datasets with keyset (cursor-based) pagination instead of OFFSET
  • Avoid SELECT * — retrieve only needed columns
  • Use CTEs (WITH clauses) for complex multi-stage queries
  • Materialised views for expensive aggregations that don't need real-time data

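For the pagination point above, a sketch against a hypothetical tasks table shows why keyset pagination scales where OFFSET does not:

```sql
-- OFFSET pagination degrades linearly: to serve this page, Postgres
-- must still walk and discard the first 250,000 matching rows
SELECT id, title
FROM tasks
WHERE tenant_id = $1
ORDER BY created_at DESC, id DESC
LIMIT 25 OFFSET 250000;

-- Keyset pagination: resume directly from the last row of the previous
-- page, passed back by the client as a cursor ($2 = created_at, $3 = id)
SELECT id, title
FROM tasks
WHERE tenant_id = $1
  AND (created_at, id) < ($2, $3)
ORDER BY created_at DESC, id DESC
LIMIT 25;
```

The row comparison needs a matching composite index, e.g. on (tenant_id, created_at DESC, id DESC), so each page is a cheap index range scan regardless of depth.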
Connection pooling and PgBouncer

PostgreSQL enforces a hard limit on concurrent connections (max_connections, 100 by default). SaaS applications with many concurrent users will hit this limit. The standard solution is PgBouncer, a lightweight connection pooler that sits between your application and Postgres. It maintains a pool of persistent server connections and multiplexes application connections onto them. For serverless applications (Lambda, Vercel functions), connection pooling is essential: without it, each function instance opens new connections and quickly exhausts the limit.
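A minimal pgbouncer.ini illustrating transaction-mode pooling might look like the following; hostnames, database names, pool sizes, and file paths are placeholder values to adapt to your deployment:

```ini
[databases]
; route the logical name "app_db" to the real Postgres server
app_db = host=127.0.0.1 port=5432 dbname=app_db

[pgbouncer]
listen_addr = 0.0.0.0
listen_port = 6432
auth_type = scram-sha-256
auth_file = /etc/pgbouncer/userlist.txt
pool_mode = transaction      ; a server connection is reused per transaction
default_pool_size = 20       ; server connections per user/database pair
max_client_conn = 1000       ; application-side connections PgBouncer accepts
```

Transaction pooling gives the highest connection reuse, but session-level state (SET commands, advisory locks held across transactions, prepared statements on older PgBouncer versions) does not survive it; use pool_mode = session if your application depends on such features.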

When to bring in database specialists

You need PostgreSQL optimisation expertise when:

  • Query response times exceed 500ms for user-facing operations
  • Database CPU consistently exceeds 60%
  • You're seeing connection limit errors
  • Costs for managed Postgres (RDS, Cloud SQL) are escalating
  • You're planning significant scale events (launches, marketing campaigns)

MoodBook Devs provides PostgreSQL performance optimisation for UK SaaS startups. We diagnose bottlenecks, implement indexing strategies, configure connection pooling, and tune configurations for your specific workload. Contact moodbook.uk/contact for database performance support.

Frequently asked questions

How do I know if my PostgreSQL database needs optimisation?
Key indicators: slow page loads (check database query times), high CPU usage on your database server, connection errors in application logs, and escalating costs on managed database services. A simple EXPLAIN ANALYZE on your slowest queries often reveals obvious optimisation opportunities.
Should we use ORM or raw SQL for performance?
Modern ORMs (Prisma, TypeORM, Drizzle) are performant for 95% of queries. Use raw SQL for: complex aggregations, queries with specific optimisation needs, and high-frequency operations where every millisecond matters. Profile first — premature optimisation wastes time.
When should we consider database sharding or read replicas?
Read replicas help when read traffic exceeds what a single instance can handle. Consider them when you're consistently at 70%+ CPU on your primary. Sharding is more complex — typically only needed when you're at hundreds of millions of rows or have strict data residency requirements requiring geographic separation.
