Thread: Out of shared memory while creating a backup with pg_dump

From
Date:
Hi,

While creating a backup with pg_dump during concurrent write access to the database, the following error occurred:
2014-09-24 07:46:46 ... ERROR:  out of shared memory
2014-09-24 07:46:46 ... HINT:  You might need to increase max_pred_locks_per_transaction.

I'm running PostgreSQL 9.3.4 on an Ubuntu 12.04 server. The server has 8 GB of RAM and 4 cores.
/proc/sys/kernel/shmmax: 2147483648
/proc/sys/kernel/shmall: 2097152

The original configuration was changed in only a few relevant points:
* shared_buffers = 128MB
* work_mem = 10MB

We run a hot standby slave, therefore the following changes were necessary:
* wal_level = hot_standby
* max_wal_senders = 1
* wal_keep_segments = 64
* hot_standby = on
* hot_standby_feedback = on


I didn't understand the HINT:
You might need to increase max_pred_locks_per_transaction.

What is the actual cause of the error: did shared memory run out, or is the limit on predicate locks too small?
If I increase max_pred_locks_per_transaction, I will probably just hit the "out of shared memory" error even faster.

What is the best way to fix the problem?

The database consists of approximately 150 tables and is about 140 GB in size. A single table of about 130 GB
(~280,000,000 rows) accounts for most of that.

Please help me troubleshoot this problem. I will provide any further information required.

Thanks in advance
Christian

________________________________
manroland web systems GmbH -- Managing Director: Joern Gossé
Registered Office: Augsburg -- Trade Register: AG Augsburg -- HRB-No.: 26816 -- VAT: DE281389840

Confidentiality note:
This eMail and any files transmitted with it are confidential and intended solely for the use of the individual or
entity to whom they are addressed. If you are not the intended recipient, you are hereby notified that any use or
dissemination of this communication is strictly prohibited. If you have received this eMail in error, then please
delete this eMail.

! Please consider your environmental responsibility before printing this eMail !
________________________________


Re: Out of shared memory while creating a backup with pg_dump

From
Shreeyansh dba
Date:


On Wed, Sep 24, 2014 at 2:35 PM, <christian.echerer@manroland-web.com> wrote:
Hi,

While creating a backup with pg_dump during concurrent write access to the database, the following error occurred:
2014-09-24 07:46:46 ... ERROR:  out of shared memory
2014-09-24 07:46:46 ... HINT:  You might need to increase max_pred_locks_per_transaction.

[...]

Thanks in advance
Christian



I suspect this kind of error occurs when shared_buffers, work_mem, and maintenance_work_mem are too small, and that increasing them may resolve it.
Since your server has 8 GB of RAM, try performing the base backup with the following settings in postgresql.conf: shared_buffers = 1GB (up to 2GB), work_mem = 512MB, maintenance_work_mem = 512MB.

This might resolve your pg_basebackup issue.


Regards,
Chetan Sharma,






 


--
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin

Re: Out of shared memory while creating a backup with pg_dump

From
Raghu Ram
Date:
On Wed, Sep 24, 2014 at 2:35 PM, <christian.echerer@manroland-web.com> wrote:
Hi,

While creating a backup with pg_dump during concurrent write access to the database, the following error occurred:
2014-09-24 07:46:46 ... ERROR:  out of shared memory
2014-09-24 07:46:46 ... HINT:  You might need to increase max_pred_locks_per_transaction.


This is a new parameter, introduced in PostgreSQL 9.1. The issue should be resolved by increasing the max_pred_locks_per_transaction parameter from its default of 64 to 256 in postgresql.conf.
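As a concrete sketch, the change in postgresql.conf might look like this. The value 256 is only a starting point, and because this setting sizes shared memory at server start, a restart is required for it to take effect:

```ini
# postgresql.conf
max_pred_locks_per_transaction = 256    # default is 64; requires a server restart
```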

Thanks & Regards
Raghu Ram 

Re: Out of shared memory while creating a backup with pg_dump

From
Shreeyansh dba
Date:


On Wed, Sep 24, 2014 at 4:03 PM, Raghu Ram <raghuchennuru@gmail.com> wrote:
On Wed, Sep 24, 2014 at 2:35 PM, <christian.echerer@manroland-web.com> wrote:

[...]

This is a new parameter, introduced in PostgreSQL 9.1. The issue should be resolved by increasing the max_pred_locks_per_transaction parameter from its default of 64 to 256 in postgresql.conf.



Along with the parameter above, please also change max_wal_senders from 1 to 5.

Hope this will resolve your issue.

Thanks & Regards ,
Chetan Sharma



Re: Out of shared memory while creating a backup with pg_dump

From
Kevin Grittner
Date:
"christian.echerer@manroland-web.com" <christian.echerer@manroland-web.com> wrote:

> I didn't understand the HINT:
> You might need to increase max_pred_locks_per_transaction.

That is a configuration setting in postgresql.conf which controls
how much shared memory is reserved at start-up to hold information
on the predicate locks required to make serializable transactions
work.

> What is the actual cause of the error: did shared memory run out,
> or is the limit on predicate locks too small?

The shared memory reserved at start-up for predicate lock
information was too small.

> If I increase max_pred_locks_per_transaction, I will probably just
> hit the "out of shared memory" error even faster.

No.

> What is the best way to fix the problem?

Increase max_pred_locks_per_transaction.  If you are using
serializable transactions, this usually needs to be increased to
see optimal performance.  The optimal setting depends on data
structure and the nature of the workload.  Increases to 10 or 20
times the default value are not unusual.
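One way to judge how high to set it is to watch actual predicate-lock
usage under a representative workload. Predicate locks appear in the
pg_locks view with mode SIReadLock, so queries along these lines (a
sketch; run as a superuser or another role allowed to see all sessions)
show the current total and which relations dominate:

```sql
-- Total predicate locks currently held server-wide
SELECT count(*) FROM pg_locks WHERE mode = 'SIReadLock';

-- Breakdown by relation, largest consumers first
SELECT relation::regclass AS table_name, count(*) AS pred_locks
FROM pg_locks
WHERE mode = 'SIReadLock'
GROUP BY relation
ORDER BY pred_locks DESC;
```

If the total regularly approaches max_pred_locks_per_transaction times
max_connections, the limit is the next thing to raise.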

Be sure to review the "for optimal performance" suggestions at the
bottom of this section of the documentation:

http://www.postgresql.org/docs/9.3/interactive/transaction-iso.html#XACT-SERIALIZABLE

If you can declare some transactions which will not be modifying
data as READ ONLY you will probably reduce the number of predicate
locks required.
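For instance, a reporting transaction that performs no writes can be
opened like this (the DEFERRABLE variant, described in the documentation
linked above, waits for a safe snapshot and then runs without taking any
predicate locks at all):

```sql
BEGIN TRANSACTION ISOLATION LEVEL SERIALIZABLE READ ONLY DEFERRABLE;
-- ... read-only queries here ...
COMMIT;
```

Note that pg_dump itself accepts a --serializable-deferrable option to
take its snapshot the same way.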

--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company