Hi all –
We are using Postgres 8.2.3 as the backing store for our Confluence instance, and when we try to back up the database at night we see this in the logs:
<snip>
pg_amop_opc_strat_index: 1024 total in 1 blocks; 216 free (0 chunks); 808 used
pg_aggregate_fnoid_index: 1024 total in 1 blocks; 392 free (0 chunks); 632 used
MdSmgr: 8192 total in 1 blocks; 6376 free (0 chunks); 1816 used
LOCALLOCK hash: 24576 total in 2 blocks; 14112 free (4 chunks); 10464 used
Timezones: 49432 total in 2 blocks; 5968 free (0 chunks); 43464 used
ErrorContext: 8192 total in 1 blocks; 8176 free (0 chunks); 16 used
2009-08-19 22:35:42 ERROR: out of memory
2009-08-19 22:35:42 DETAIL: Failed on request of size 536870912.
2009-08-19 22:35:42 STATEMENT: COPY public.attachmentdata (attachmentdataid, attversion, data, attachmentid) TO stdout;
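For context, the failed request of 536870912 bytes is exactly 512 MiB, so my assumption is that the server is trying to allocate one very large attachment value (or its COPY representation) in a single chunk:

```shell
# 536870912 bytes == 512 MiB; quick sanity check of the failed request size
echo $((536870912 / 1024 / 1024))  # prints 512
```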
Is there an easy way to give pg_dump more memory? I don't see a command-line option for it, and I'm not a Postgres expert by any means. This is the script we use to back up our DB (backup.cmd):
@ECHO OFF
SET BACKUPS_DIR=C:\backups
SET PGPASSWORD=*******
REM Build a timestamped backup file name from %date% and %time%
REM (the substring offsets depend on the locale's date/time format)
SET prefix=confluence_dbbackup_
SET basename=%prefix%%date:~-4,4%%date:~-10,2%%date:~-7,2%.%time:~-11,2%.%time:~-8,2%.%time:~-5,2%
SET confluence_backup_path=%BACKUPS_DIR%\%basename%.dump
pg_dump --username=confluence --file="%confluence_backup_path%" --blobs --format c confluence
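In case it helps with diagnosis, one thing I was going to try is dumping just the table from the failing COPY statement on its own (a hypothetical isolation test; the table name is taken from the error above, the output file name is made up):

```
pg_dump --username=confluence --table=attachmentdata --format=c --file=attachmentdata_only.dump confluence
```

If that fails the same way, it would at least confirm the problem is specific to the large bytea values in attachmentdata rather than the overall dump.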
Thanks,
Chris