Arnau wrote:
> Hi all,
>
>> I've got a DB in production that is bigger than 2GB, and dumping it
>> takes more than 12 hours. I have a new server to replace this old one,
>> where I have to restore the DB's dump. The problem is I can't afford to
>> have the server out of service for that long, so I need your advice on
>> how you'd do this dump/restore. Most of the data is in two
>> tables (statistics data), so I was thinking of dumping/restoring
>> everything except these two tables, and once the server is running
>> again I'd dump/restore that data. The problem is I don't know exactly
>> how to do this.
>>
>> Any suggestion?
>>
>> Thanks
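
On the original question of dumping everything except the two big tables:
a minimal sketch, assuming a newer pg_dump (8.2 or later, where -t and -T
can be given more than once) and placeholder table names stats1 and stats2:

  # everything except the two statistics tables (placeholder names)
  pg_dump -Fc -T stats1 -T stats2 $db > main.dump
  # the two big tables on their own, restored once the server is back up
  pg_dump -Fc -t stats1 -t stats2 $db > stats.dump

Both files are then restored with pg_restore -d $db, main.dump first.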
>
> Jeff's answer made me check the configuration parameters. That
> machine had the defaults, so I tweaked them a bit, and now I get a
> dump in about 2 hours. To dump the DB I'm using the following
> command:
>
> /usr/bin/pg_dump -o -b -Fc $db > $backup_file
>
> As a result I get a file of 2.2GB. The improvement has been quite
> big, but still very far from the 10-15 minutes that Jeff reports or
> Thomas's 3 minutes.
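
A note on that command: a -Fc custom-format dump cannot be fed straight
into psql; it is restored with pg_restore, for example:

  pg_restore -d $db $backup_file

The custom format also lets you restore individual tables later with
pg_restore -t, which fits the two-table plan from the original question.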
>
> The version I'm running is 7.4.2, and the postgresql.conf
> parameters are the following:
>
> # - Memory -
>
> shared_buffers = 10000 # min 16, at least max_connections*2, 8KB each
> sort_mem = 10240 # min 64, size in KB
> vacuum_mem = 81920 # min 1024, size in KB
>
> # - Free Space Map -
>
> max_fsm_pages = 40000 # min max_fsm_relations*16, 6 bytes each
> max_fsm_relations = 2000 # min 100, ~50 bytes each
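
shared_buffers only takes effect after a server restart, so it is worth
confirming from psql that the new values are actually live, e.g.:

  SHOW shared_buffers;
  SHOW sort_mem;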
>
>
> Any suggestions to reduce the dump time even further?
>
> Thank you very much.
To reduce the dump/restore time, I suggest using Unix pipes:

pg_dump $PSOPTIONS $database | psql $database

I always do it like this, as it's a lot faster.
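
When the goal is moving the DB to the new server, the pipe can run straight
over the network. A sketch, assuming the new machine is reachable as
"newhost" (a placeholder) and a plain-text dump (the -Fc custom format
cannot be piped into psql):

  createdb -h newhost $database
  pg_dump $PSOPTIONS $database | psql -h newhost $database

This avoids writing the 2.2GB dump file to disk at all, because the
restore runs while the dump is still producing output.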
Olivier