Thread: Cron'd dumpall failing?

Cron'd dumpall failing?

From: Kenneth Downs
I'm truly hoping I'm missing something silly here.  I've got a cron job that
runs a dumpall early each morning.  It fails, and I get a handful of emails.
The first reads like this:

pg_dump: [archiver (db)] connection to database "adocs" failed: FATAL:  sorry, too many clients already
pg_dumpall: pg_dump failed on database "adocs", exiting


...and then, in the emails that follow, this one repeats for each database:

pg_dump: WARNING:  out of shared memory
pg_dump: SQL command failed
pg_dump: Error message from server: ERROR:  out of shared memory
HINT:  You may need to increase max_locks_per_transaction.
pg_dump: The command was: SELECT sequence_name, last_value, increment_by,
    CASE WHEN increment_by > 0 AND max_value = 9223372036854775807 THEN NULL
         WHEN increment_by < 0 AND max_value = -1 THEN NULL
         ELSE max_value END AS max_value,
    CASE WHEN increment_by > 0 AND min_value = 1 THEN NULL
         WHEN increment_by < 0 AND min_value = -9223372036854775807 THEN NULL
         ELSE min_value END AS min_value,
    cache_value, is_cycled, is_called from tabproj_skey
pg_dumpall: pg_dump failed on database "XXXXX", exiting




The cron entry (for user root) is

* 1 * * * /root/dumpall.sh > /dev/null
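
For reference, the five leading crontab fields are minute, hour, day of
month, month, and day of week, so an annotated copy of that entry reads:

    # min  hour  dom  mon  dow  command
      *    1     *    *    *    /root/dumpall.sh > /dev/null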

and the routine in question is this:

#!/bin/sh
# Dump every database in the cluster, then lock down the output file.
pg_dumpall -U postgres > /home/bups/bsource/pg/dhost2.dumpall
chown bups:root /home/bups/bsource/pg/dhost2.dumpall
chmod 600       /home/bups/bsource/pg/dhost2.dumpall




Re: Cron'd dumpall failing?

From: Tom Lane
Kenneth Downs <ken@secdat.com> writes:
> pg_dump: [archiver (db)] connection to database "adocs" failed: FATAL:  sorry, too many clients already

You need to increase max_connections and/or superuser_reserved_connections,
so that a free slot (or a superuser-only reserved slot, since pg_dump runs as
postgres here) is still available when other clients have filled the server.
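
A minimal postgresql.conf sketch (the values are illustrative assumptions,
not tuned recommendations; both settings take effect only after a server
restart):

    max_connections = 200               # total concurrent client connections allowed
    superuser_reserved_connections = 3  # slots held back for superusers such as postgres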

> pg_dump: Error message from server: ERROR:  out of shared memory
> HINT:  You may need to increase max_locks_per_transaction.

You need to increase max_locks_per_transaction.  pg_dump takes a lock on
every table it dumps within a single transaction, so a database with many
tables can overflow the shared lock table.
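
For example, in postgresql.conf (the value is an illustrative assumption;
this setting likewise requires a server restart):

    max_locks_per_transaction = 128   # default is 64 lock slots per allowed connection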

            regards, tom lane