Re: PostgreSQL 9.2 - pg_dump out of memory when backuping a database with 300000000 large objects - Mailing list pgsql-admin

From Sergey Klochkov
Subject Re: PostgreSQL 9.2 - pg_dump out of memory when backuping a database with 300000000 large objects
Date
Msg-id 524AAE12.8@iqbuzz.ru
In response to Re: PostgreSQL 9.2 - pg_dump out of memory when backuping a database with 300000000 large objects  (Giuseppe Broccolo <giuseppe.broccolo@2ndquadrant.it>)
List pgsql-admin
No, it did not make any difference. And after looking through pg_dump.c
and pg_dump_sort.c, I cannot tell how it possibly could. See the
stacktrace that I've sent to the list.
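For what it's worth, the memory footprint appears to track the number of large objects themselves rather than any server-side setting. A quick way to confirm the count pg_dump has to deal with (a minimal sketch, assuming the standard pg_largeobject_metadata catalog available since PostgreSQL 9.0):

  -- pg_largeobject_metadata has one row per large object, so this is
  -- the number of entries pg_dump presumably has to track in memory
  SELECT count(*) AS large_objects
  FROM pg_largeobject_metadata;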

Thanks.

On 01.10.2013 15:01, Giuseppe Broccolo wrote:
> Maybe you can improve your database's performance by tuning some parameters:
>>
>> PostgreSQL configuration:
>>
>> listen_addresses = '*'          # what IP address(es) to listen on;
>> port = 5432                             # (change requires restart)
>> max_connections = 500                   # (change requires restart)
> Set it back to 100, the default value
>> shared_buffers = 16GB                  # min 128kB
> This value should not be higher than 8GB
>> temp_buffers = 64MB                     # min 800kB
>> work_mem = 512MB                        # min 64kB
>> maintenance_work_mem = 30000MB          # min 1MB
> Given 96GB of RAM, you could set it as high as 4800MB
>> checkpoint_segments = 70              # in logfile segments, min 1, 16MB each
>> effective_cache_size = 50000MB
> Given 96GB of RAM, you could set it as high as 80GB
>>
>
> Hope it can help.
>
> Giuseppe.
>

--
Sergey Klochkov
klochkov@iqbuzz.ru

