Thread: amount of WAL logs larger than expected
Good afternoon
--
We run postgres 9.2.12 and run vacuum on a 540GB database.
I've attached a 'show all' for your reference. With checkpoint_segments set to 128
and checkpoint_completion_target set to 0.9, I wouldn't expect the number of logs in pg_xlog to creep much over 400. But when we run vacuum, the number can climb to over 5000 and threatens to blow out our space. Is there something else I should be looking at that could cause this unexpected number of logs?
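As a back-of-the-envelope check, the 9.2 documentation gives an upper bound of roughly (2 + checkpoint_completion_target) * checkpoint_segments + 1 segments in pg_xlog under normal operation; a sketch of that arithmetic with the settings above (paths and values as stated, integer math scaled by 10 for the 0.9 target):

```shell
# Rough steady-state ceiling on pg_xlog segment count (9.2 formula):
#   (2 + checkpoint_completion_target) * checkpoint_segments + 1
checkpoint_segments=128
bound=$(( (20 + 9) * checkpoint_segments / 10 + 1 ))
echo "$bound"   # ~372 segments, i.e. ~6 GB at 16 MB each
```

So anything holding thousands of segments means something is preventing old WAL from being recycled, not the checkpoint settings themselves.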
Our server is also master for a slony1.2.2.3 slave and also to a hot standby server.
Any insight /recommendations welcome. Thank you
Mark Steben
Database Administrator
@utoRevenue | Autobase
CRM division of Dominion Dealer Solutions
95D Ashley Ave.
West Springfield, MA 01089
t: 413.327-3045
f: 413.383-9567
www.fb.com/DominionDealerSolutions
www.twitter.com/DominionDealer
www.drivedominion.com
On 05/09/2016 11:23 AM, Mark Steben wrote:
> Good afternoon
>
> We run postgres 9.2.12 and run vacuum on a 540GB database.
>
> I've attached a 'show all' for your reference. With checkpoint_segments
> set at 128 and checkpoint_completion_target set at 0.9 I wouldn't expect
> the number of logs in pg_xlog to creep much over 400. But when we run
> vacuum, the number can climb to over 5000 and threatens to blow out on
> space. Is there something else I should be looking at causing this
> unexpected number of logs?
>
> Our server is also master for a slony1.2.2.3 slave and also to a hot
> standby server.

Are you archiving to your hot standby or streaming? If you are archiving, you may not be returning a proper success code, and thus PostgreSQL is keeping the logs. Also, you are well behind production (9.2.16) and there are *significant* bug fixes in the release gaps between.

JD

--
Command Prompt, Inc. http://the.postgres.company/ +1-503-667-4564
PostgreSQL Centered full stack support, consulting and development.
Everyone appreciates your honesty, until you are honest with them.
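One quick way to see whether a failing archive_command is the culprit (a sketch; the PGDATA path below is an assumption, adjust for your install): every segment still waiting to be archived leaves a .ready marker in pg_xlog/archive_status, so a large or growing count there means the archiver is failing or can't keep up.

```shell
# Count WAL segments still waiting on archive_command (one .ready marker
# per pending segment). A large, growing count means archiving is
# failing or lagging. The PGDATA default is an assumption; adjust it.
count_ready() {
    ls "$1"/*.ready 2>/dev/null | wc -l
}
count_ready "${PGDATA:-/var/lib/pgsql/9.2/data}/pg_xlog/archive_status"
```

If the count is high, the postgres log should show the archive_command failures and their exit codes.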
On Mon, May 9, 2016 at 11:23 AM, Mark Steben <mark.steben@drivedominion.com> wrote:
> We run postgres 9.2.12 and run vacuum on a 540GB database. I've attached
> a 'show all' for your reference. With checkpoint_segments set at 128 and
> checkpoint_completion_target set at 0.9 I wouldn't expect the number of
> logs in pg_xlog to creep much over 400. But when we run vacuum, the
> number can climb to over 5000 and threatens to blow out on space. Is
> there something else I should be looking at causing this unexpected
> number of logs? Our server is also master for a slony1.2.2.3 slave and
> also to a hot standby server.
Is the hot standby in a different network or over the WAN? Have you checked for bandwidth saturation? Also, have a look at the directory in which the hot standby receives WALs and check whether the most recent files have a current timestamp. If the WALs that are arriving have much older timestamps than what is being generated on the primary, that could indicate slow transfer.
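A quick way to make that comparison (a sketch; run it on both the primary's pg_xlog and the standby's WAL receive directory and compare the mtimes — the PGDATA default and the GNU stat invocation are assumptions, adjust for your platform):

```shell
# Show the newest WAL segment in a directory with its modification time.
# Run on both primary and standby; a large gap between the two mtimes
# suggests WAL shipping is lagging. PGDATA default is an assumption.
newest_wal() {
    ls -t "$1" 2>/dev/null | head -n 1
}
dir="${PGDATA:-/var/lib/pgsql/9.2/data}/pg_xlog"
f=$(newest_wal "$dir")
if [ -n "$f" ]; then
    stat -c '%y %n' "$dir/$f"   # GNU stat; use 'stat -f %Sm' on BSD
fi
```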
For example, I had an issue recently where 4k WALs built up on the primary during a large ETL process. It took a few hours to ship those (compressed) WALs over the WAN to the replica's data centre.