RE: pg_xlog unbounded growth - Mailing list pgsql-performance

From Alex Ignatov
Subject RE: pg_xlog unbounded growth
Date
Msg-id 026c01d3a036$c24f6e60$46ee4b20$@postgrespro.ru
In response to pg_xlog unbounded growth  (Stefan Petrea <Stefan.Petrea@tangoe.com>)
List pgsql-performance
Have you tried 
archive_command='/bin/true' 
as Andreas wrote?
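
For reference, a minimal sketch of applying that change without a restart (my own illustration, assuming a superuser psycopg2 connection; the connection string is only a placeholder, and archive_command only needs a reload to take effect):

    # Sketch: set the suggested archive_command and reload the configuration.
    # ALTER SYSTEM cannot run inside a transaction block, hence autocommit.
    import psycopg2

    conn = psycopg2.connect("dbname=postgres")   # placeholder connection string
    conn.autocommit = True
    cur = conn.cursor()
    cur.execute("ALTER SYSTEM SET archive_command = '/bin/true'")
    cur.execute("SELECT pg_reload_conf()")       # archive_command is reloadable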

-----Original Message-----
From: Stefan Petrea [mailto:Stefan.Petrea@tangoe.com] 
Sent: Wednesday, January 24, 2018 2:48 PM
To: pgsql-performance@postgresql.org
Subject: pg_xlog unbounded growth

Hello,

This email is structured in sections as follows:

1 - Estimating the size of pg_xlog depending on postgresql.conf parameters
2 - Cleaning up pg_xlog using a watchdog script
3 - Mailing list survey of related bugs
4 - Thoughts

We're using PostgreSQL 9.6.6 on Ubuntu 16.04.3 LTS.
During some database imports (using pg_restore), we're noticing fast and
unbounded growth of pg_xlog, up to the point where the partition that stores
it (280GB in our case) fills up and PostgreSQL shuts down. The error seen in
the logs:

    2018-01-17 01:46:23.035 CST [41671] LOG:  database system was shut down at 2018-01-16 15:49:26 CST
    2018-01-17 01:46:23.038 CST [41671] FATAL:  could not write to file "pg_xlog/xlogtemp.41671": No space left on device
    2018-01-17 01:46:23.039 CST [41662] LOG:  startup process (PID 41671) exited with exit code 1
    2018-01-17 01:46:23.039 CST [41662] LOG:  aborting startup due to startup process failure
    2018-01-17 01:46:23.078 CST [41662] LOG:  database system is shut down

The config settings I thought were relevant are the following (but I'm also
attaching the entire postgresql.conf in case there are others I missed):

    wal_level=replica
    archive_command='exit 0;'
    min_wal_size=2GB
    max_wal_size=500MB
    checkpoint_completion_target = 0.7
    wal_keep_segments = 8

So currently the pg_xlog is growing a lot, and there doesn't seem to be any
way to stop it.

There are some formulas I came across that allow one to compute the maximum
number of WAL files allowed in pg_xlog as a function of the PostgreSQL config
parameters.

1.1) Method from 2012 found in [2]

The formula for the upper bound on the number of WAL files in pg_xlog is

    (2 + checkpoint_completion_target) * checkpoint_segments + 1

which works out to ((2 + 0.7) * (2048/16 * 1/3)) + 1 ~ 116 WAL files.

I used the 1/3 factor because of the shift from checkpoint_segments to
max_wal_size in 9.5 [6]; the relevant quote from the release notes is:

    If you previously adjusted checkpoint_segments, the following formula
    will give you an approximately equivalent setting:
    max_wal_size = (3 * checkpoint_segments) * 16MB

Another way of computing it, also according to [2], is

    2 * checkpoint_segments + wal_keep_segments + 1

which is (2048/16) + 8 + 1 = 137 WAL files.

So far we have two answers; in practice neither of them checks out, since
pg_xlog grows indefinitely.

1.2) Method from the PostgreSQL internals book 

The book [4] says the following:

    it could temporarily become up to "3 * checkpoint_segments + 1"

OK, let's compute this too: 3 * (128/3) + 1 = 129 WAL files.

This doesn't check out either.
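
For reference, here is the arithmetic behind all three estimates written out
as a small sketch (my own illustration; it assumes the default 16MB segment
size and the 2048MB figure used in the calculations above):

    # Worked example of the three pg_xlog upper-bound estimates above.
    segment_mb = 16                      # default WAL segment size
    checkpoint_completion_target = 0.7
    wal_keep_segments = 8

    # checkpoint_segments equivalent of 2048MB, using the 9.5 release-note
    # conversion in reverse (max_wal_size = 3 * checkpoint_segments * 16MB)
    checkpoint_segments = 2048 / segment_mb / 3                            # ~42.7

    est_1 = (2 + checkpoint_completion_target) * checkpoint_segments + 1   # method 1.1, first formula
    est_2 = 2048 / segment_mb + wal_keep_segments + 1                      # method 1.1, 2*checkpoint_segments taken as 2048/16
    est_3 = 3 * checkpoint_segments + 1                                    # method 1.2, internals book

    print(round(est_1), round(est_2), round(est_3))                        # 116 137 129

For comparison, a 280GB partition holds roughly 17,900 16MB segments, so none
of these bounds comes anywhere close to explaining the growth we see.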

1.3) On the mailing list [3], I found formulas similar to the ones above.

1.4) The post at [5] says max_wal_size is a soft limit, and it also sets
wal_keep_segments = 0 in order to keep as little WAL around as possible.
Would this work?

Does wal_keep_segments = 0 turn off WAL recycling? Frankly, I would rather
have WAL not be recycled/reused, but simply deleted, to keep pg_xlog below
the expected size.

Another question is, does wal_level = replica affect the size of pg_xlog in
any way?  We have an archive_command that just exits with exit code 0, so I
don't see any reason for the pg_xlog files to not be cleaned up.
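
Before moving on, a quick sketch (my own; pg_ls_dir requires superuser on 9.6
and the connection string is a placeholder) for counting how many WAL segments
are actually sitting in pg_xlog, to compare against the estimates above:

    # Count WAL segment files in pg_xlog (segment names are 24 hex characters).
    import psycopg2

    conn = psycopg2.connect("dbname=postgres")   # placeholder connection string
    cur = conn.cursor()
    cur.execute("SELECT count(*) FROM pg_ls_dir('pg_xlog') AS f "
                "WHERE f ~ '^[0-9A-F]{24}$'")
    print("WAL segments in pg_xlog:", cur.fetchone()[0])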

2) Cleaning up pg_xlog using a watchdog script

To get the import done, I wrote a script inspired by a blog post that
addresses the pg_xlog out-of-disk-space problem [1]. It periodically reads
the last checkpoint's REDO WAL file and deletes all WAL in pg_xlog before
that one.

The intended usage is for this script to run alongside the imports in order
for pg_xlog to be cleaned up gradually and prevent the disk from filling up.

Unlike the blog post, and probably slightly incorrectly, I used lexicographic
ordering rather than ordering by date. I suspect it worked only because the
checks were frequent enough that no WAL ever got recycled. In retrospect I
should have used the date ordering.
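
For reference, a minimal sketch of that approach (my reconstruction rather
than the exact script; it uses pg_control_checkpoint(), available in 9.6,
keeps the lexicographic ordering described above, and the path and connection
string are placeholders; the bundled pg_archivecleanup tool does the same job
more safely):

    # Watchdog sketch: remove WAL segments that sort before the last
    # checkpoint's REDO WAL file, mirroring the approach described above.
    import os
    import psycopg2

    PG_XLOG = "/var/lib/postgresql/9.6/main/pg_xlog"   # placeholder path

    conn = psycopg2.connect("dbname=postgres")          # placeholder connection string
    cur = conn.cursor()
    cur.execute("SELECT redo_wal_file FROM pg_control_checkpoint()")
    redo_wal_file = cur.fetchone()[0]

    for name in sorted(os.listdir(PG_XLOG)):
        # only plain 24-hex-character segment names; skip archive_status,
        # .history and .backup files
        if (len(name) == 24
                and all(c in "0123456789ABCDEF" for c in name)
                and name < redo_wal_file):
            os.remove(os.path.join(PG_XLOG, name))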

Does this script have the same effect as checkpoint_completion_target=0 ?

At the end of the day, this script seems to have allowed the import we
needed to get done, but I acknowledge it was a stop-gap measure and not a
long-term solution, hence this post to the mailing list in search of a better
one.

3) Mailing list survey of related bugs

On the mailing lists, in the past, there have been bugs around pg_xlog
growing out of control:

BUG 7902 [7] - Discusses a situation where WAL is produced faster than
checkpoints can complete (be written to disk), and therefore the WAL files in
pg_xlog cannot be recycled/deleted. The status of this bug report is unclear;
I have a feeling it's still open. Is that the case?

BUG 14340 [9] - A user (Sonu Gupta) reported unbounded pg_xlog growth, was
asked to do some checks, and was then directed to the pgsql-general mailing
list, where he did not follow up.
I quote the checks that were suggested:

    Check that your archive_command is functioning correctly, and that you
    don't have any inactive replication slots (select * from
    pg_replication_slots where not active).  Also check the server logs if
    both those things are okay.

I have done these checks: the archive_command we have always returns zero,
and we do not have any inactive replication slots.
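
For completeness, the two suggested checks can be scripted roughly like this
(my own sketch; the connection string is a placeholder, and pg_stat_archiver
exists from 9.4 onwards):

    # Check for inactive replication slots and for WAL archiver failures.
    import psycopg2

    conn = psycopg2.connect("dbname=postgres")   # placeholder connection string
    cur = conn.cursor()

    cur.execute("SELECT slot_name FROM pg_replication_slots WHERE NOT active")
    print("inactive slots:", [row[0] for row in cur.fetchall()])

    cur.execute("SELECT archived_count, failed_count, last_failed_wal "
                "FROM pg_stat_archiver")
    print("archiver (archived, failed, last failed WAL):", cur.fetchone())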

BUG 10013 [12] - A user reports that initdb fills up the disk after BLCKSZ
and/or XLOG_BLCKSZ are changed to non-standard values. The bug seems to be open.

BUG 11989 [8] - A user reports unbounded pg_xlog growth that ends with the
disk filling up. No further replies after the bug report.

BUG 2104 [10] - A user reports PostgreSQL not recycling pg_xlog files.
It's suggested that this might have happened because checkpoints were
failing, so WAL segments could not be recycled.

BUG 7801 [11] - This is a bit off-topic for our problem (since we don't have
replication set up yet for the server with unbounded pg_xlog growth), but
still an interesting read.

A slave falls too far behind its master, which leads to pg_xlog growth on the
slave. The user says that setting checkpoint_completion_target=0, or manually
running CHECKPOINT on the slave, immediately frees up space in the slave's
pg_xlog.

I also learned here that a CHECKPOINT occurs approximately every
checkpoint_completion_target * checkpoint_timeout. Is this correct?

Should I set checkpoint_completion_target=0? 

4) Thoughts

In the logs, there are lines like the following one:

    2018-01-17 02:34:39.407 CST [59922] HINT:  Consider increasing the configuration parameter "max_wal_size".
    2018-01-17 02:35:02.513 CST [59922] LOG:  checkpoints are occurring too frequently (23 seconds apart)

This looks very similar to BUG 7902 [7]. Is there any rule of thumb,
guideline, or technique that can be used when checkpoints cannot complete
fast enough?
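
One way to quantify that (a sketch of my own, not something from the bug
thread; the connection string is a placeholder) is to compare checkpoints
triggered by checkpoint_timeout with those requested because of WAL volume,
via pg_stat_bgwriter:

    # A high checkpoints_req count relative to checkpoints_timed matches the
    # "checkpoints are occurring too frequently" hint in the log above.
    import psycopg2

    conn = psycopg2.connect("dbname=postgres")   # placeholder connection string
    cur = conn.cursor()
    cur.execute("SELECT checkpoints_timed, checkpoints_req, stats_reset "
                "FROM pg_stat_bgwriter")
    timed, requested, since = cur.fetchone()
    print("timed:", timed, "requested:", requested, "stats since:", since)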

I'm not sure if this is a misconfiguration problem or a bug. Which one would
be more appropriate?

Thanks,
Stefan

[1] https://www.endpoint.com/blog/2014/09/25/pgxlog-disk-space-problem-on-postgres
[2] http://chirupgadmin.blogspot.ro/2012/02/wal-growth-calculation-pgxlog-directory.html
[3] https://www.postgresql.org/message-id/AANLkTi=e=oR54OuxAw88=dtV4wt0e5edMiGaeZtBVcKO@mail.gmail.com
[4] http://www.interdb.jp/blog/pgsql/pg95walsegments/
[5] http://liufuyang.github.io/2017/09/26/postgres-cannot-auto-clean-up-folder-pg_xlog.html
[6] https://www.postgresql.org/docs/9.5/static/release-9-5.html#AEN128150
[7] https://www.postgresql.org/message-id/flat/E1U91WW-0006rq-82%40wrigleys.postgresql.org
[8] https://www.postgresql.org/message-id/20141117190201.2478.7245@wrigleys.postgresql.org
[9] https://www.postgresql.org/message-id/flat/8a3a6780-18f6-d23a-2350-ac7ad335c9e7%402ndquadrant.fr
[10] https://www.postgresql.org/message-id/flat/20051209134337.94B0BF0BAB%40svr2.postgresql.org
[11] https://www.postgresql.org/message-id/flat/E1TsemH-0004dK-KN%40wrigleys.postgresql.org
[12] https://www.postgresql.org/message-id/flat/20140414014442.15385.74268%40wrigleys.postgresql.org

Stefan Petrea
System Engineer, Network Engineering


stefan.petrea@tangoe.com
tangoe.com
