So what would cause "too many connections" if both nodes have the same max_connections?
What does your application do when the database response time is slow? I've read that pod-based services can be configured to spawn more connections when the application decides the database isn't responding fast enough.
We have 2 nodes (primary and standby, PostgreSQL 15.6) in OpenShift Kubernetes.
It is a Patroni setup with about 300 GB of data. There had been no failover for the last six months. Suddenly, after a failover, there were a lot of issues, such as "too many connections" errors and general slowness.
Is that because ANALYZE was not run on the new primary node?
Is postgresql.conf configured the same on both nodes?
max_connections being lower on the replica node would certainly and immediately cause "too many connections" errors.
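As a quick check (a minimal sketch; the host names and user are placeholders, adjust them to your environment), you could compare the setting on both nodes and in Patroni's dynamic configuration:

    # Hypothetical host names; compare max_connections on both nodes
    psql -h pg-primary -U postgres -Atc "SHOW max_connections;"
    psql -h pg-standby -U postgres -Atc "SHOW max_connections;"
    # Patroni can override postgresql.conf, so check its view as well
    patronictl show-config | grep -i max_connections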
Vacuuming and statistics *are* replicated (that data is stored in tables, so it must be replicated). However, when tables were last vacuumed and analyzed is apparently not stored on disk. Thus, the new primary cannot know the number of tuples modified since the last ANALYZE, nor the number of dead and newly inserted tuples since the last VACUUM, so autovacuum has no basis for deciding what to process.
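For illustration (an assumption about how you might verify this, not part of the original thread), you could inspect the cumulative statistics on the newly promoted primary; after a promotion these counters typically start out empty:

    # Hypothetical host name; shows per-table vacuum/analyze bookkeeping
    psql -h pg-new-primary -U postgres -c "
      SELECT relname, last_analyze, last_autoanalyze,
             n_dead_tup, n_mod_since_analyze, n_ins_since_vacuum
      FROM pg_stat_user_tables
      ORDER BY n_mod_since_analyze DESC
      LIMIT 20;"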
Thus, I'd run vacuumdb --analyze-in-stages soon after the switchover.
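A sketch of that, assuming a hypothetical host name and the default superuser:

    # Rebuild optimizer statistics for all databases on the new primary,
    # in several passes from coarse to full statistics targets
    vacuumdb --all --analyze-in-stages -h pg-new-primary -U postgres

The advantage of --analyze-in-stages over a plain --analyze is that the planner gets minimal statistics quickly, which helps right after a failover when the instance must serve traffic immediately.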