PostgreSQL RDS Performance Investigation with MCP Workflows
Amazon RDS for PostgreSQL adds managed infrastructure on top of standard PostgreSQL, which simplifies operations but also introduces cloud-specific performance patterns. DatabaseMCP structures the investigation workflow for PostgreSQL RDS to account for both core PostgreSQL diagnostics and RDS-layer considerations.
Common PostgreSQL RDS performance problems
- Instance class limits: CPU, memory, and network throughput are bounded by the RDS instance class. Workloads that exceed these limits see latency increases and connection queuing that are hard to distinguish from query-level problems without resource metrics.
- Storage IOPS saturation: General Purpose SSD volumes cap IOPS — gp2 scales at roughly 3 IOPS per GiB (with burst credits for smaller volumes), while gp3 starts from a fixed 3,000 IOPS baseline unless more are provisioned. Write-heavy workloads on small gp2 volumes exhaust burst credits quickly, causing I/O wait to dominate query execution time.
- Read replica replication lag: High write throughput, long-running transactions, or DDL on the primary can cause replica lag to grow, serving stale data from read replicas and risking data loss if a lagging replica is promoted during failover.
- Parameter group misconfiguration: RDS manages PostgreSQL configuration through parameter groups. Settings like work_mem, shared_buffers, autovacuum parameters, and max_connections are common sources of performance problems when left at defaults that don't match the workload profile.
- Slow queries and lock contention: The same PostgreSQL-level issues apply—sequential scans, autovacuum lag, lock contention, and idle-in-transaction sessions all occur on RDS instances and require query-level investigation in addition to infrastructure monitoring.
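The storage ceiling described above can be estimated up front. The sketch below is a simplification of the published EBS gp2 limits (not an AWS API call), useful for judging whether a write-heavy workload will hit the I/O wall:

```python
def gp2_baseline_iops(size_gib: int) -> int:
    """gp2 baseline: 3 IOPS per GiB, floor of 100, ceiling of 16,000."""
    return min(16_000, max(100, 3 * size_gib))


def gp2_can_burst(size_gib: int) -> bool:
    """Volumes whose baseline is below 3,000 IOPS can burst to 3,000
    using a credit bucket; sustained load above baseline drains it."""
    return gp2_baseline_iops(size_gib) < 3_000


# A 100 GiB gp2 volume: 300 baseline IOPS, bursting to 3,000 until
# credits run out -- a common surprise on small write-heavy instances.
print(gp2_baseline_iops(100), gp2_can_burst(100))
```

Because gp3 volumes start from a fixed 3,000 IOPS baseline regardless of size, migrating a small, busy gp2 volume to gp3 is often the cheapest fix for this class of problem.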
MCP investigation workflow for PostgreSQL RDS
- RDS CloudWatch and Enhanced Monitoring: Start with instance-level CPU, FreeableMemory, ReadIOPS, WriteIOPS, and DatabaseConnections metrics to establish which resource layer is under pressure.
- pg_stat_statements query analysis: Identify top queries by total time and call frequency. Correlate high-cost queries with infrastructure metric spikes.
- Vacuum and bloat review: Check autovacuum worker activity, dead tuple counts, and table bloat to identify maintenance gaps that degrade read and write performance.
- Replication lag and replica health: Monitor replica lag for multi-AZ and read replica deployments to identify replication bottlenecks before they affect application read routing.
- Historical incident correlation: Compare performance metrics across time to identify whether a degradation aligns with a deployment, configuration change, or workload shift.
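The first two workflow steps can be sketched as a small triage helper. The function and its thresholds below are illustrative assumptions, not part of DatabaseMCP's API; the pg_stat_statements query uses PostgreSQL 13+ column names (total_exec_time was total_time in earlier versions):

```python
# Query-analysis step: top queries by cumulative execution time.
# Column names assume PostgreSQL 13+ and the pg_stat_statements extension.
TOP_QUERIES_SQL = """
SELECT query, calls, total_exec_time, mean_exec_time
FROM pg_stat_statements
ORDER BY total_exec_time DESC
LIMIT 10;
"""


def pressured_layers(metrics: dict, max_connections: int = 100) -> list:
    """Metrics step: map CloudWatch datapoints to the resource layer
    under pressure. Threshold values here are illustrative, not tuned."""
    layers = []
    if metrics.get("CPUUtilization", 0) > 80:          # percent
        layers.append("cpu")
    if metrics.get("FreeableMemory", float("inf")) < 256 * 1024 * 1024:  # bytes
        layers.append("memory")
    if metrics.get("DatabaseConnections", 0) > 0.9 * max_connections:
        layers.append("connections")
    return layers


# CPU and connections flagged; no memory datapoint, so memory is not.
print(pressured_layers({"CPUUtilization": 92, "DatabaseConnections": 95}))
```

In practice the classifier's output directs the next step: a CPU or connections flag points toward the pg_stat_statements query above, while a memory or I/O flag points toward instance class and storage sizing.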
PostgreSQL RDS and Aurora
For serverless or multi-AZ cluster deployments using Aurora's storage engine, see the PostgreSQL Aurora investigation guide. For self-hosted PostgreSQL, see PostgreSQL performance investigation.
Get started
See platform features on minidba.com