Re: pg_xlog on a hot_standby slave - Mailing list pgsql-general

From: Xavier 12
Subject: Re: pg_xlog on a hot_standby slave
Msg-id: 55910A3C.6060404@gmail.com
In response to: Re: pg_xlog on a hot_standby slave (Sameer Kumar <sameer.kumar@ashnik.com>)
List: pgsql-general
On Thu, 18 Jun 2015 15:17 Xavier 12 <maniatux@gmail.com> wrote:
On 18/06/2015 04:00, Sameer Kumar wrote:
On Wed, 17 Jun 2015 15:24 Xavier 12 <maniatux@gmail.com> wrote:
On 17/06/2015 03:17, Sameer Kumar wrote:
On Tue, 16 Jun 2015 16:55 Xavier 12 <maniatux@gmail.com> wrote:

Hi everyone,
Questions about pg_xlogs again...
I have two PostgreSQL 9.1 servers in a master/slave streaming replication
(hot_standby).

Psql01 (master) is backed up with Barman and pg_xlog is correctly
purged (archive_command is used). However, Psql02 (slave) has a huge pg_xlog
(951 files, 15 GB for 7 days only, and it keeps growing until disk space is
full). I have found documentation, tutorials and mailing list threads, but I
don't know what is suitable for a slave. Leads I've found:

- checkpoints
- archive_command
- archive_cleanup

Master postgresql.conf:
[...]
wal_level = 'hot_standby'
archive_mode = on
archive_command = 'rsync -az /var/lib/postgresql/9.1/main/pg_xlog/%f
barman@nas.lan:/data/pgbarman/psql01/incoming/%f'
max_wal_senders = 5
wal_keep_segments = 64

What's this parameter's value on the slave?
Hm... You have a point.
That autovacuum parameter seems to be useless on a slave.
I'll try to remove it and check pg_xlog.
That was not my point. I was actually asking about wal_keep_segments. Never mind, I found that I had missed the info (it's below; please see my response there).
Besides, I try to keep my master and standby configs as close to identical as possible (so my advice is to not switch off autovacuum). Parameters that are ineffective on the slave won't have an effect anyway, and the same goes for parameters on the master.
This helps me when I swap roles or do a failover: I have fewer parameters to worry about.
Okay
Can you check pg_log for the log files? They may have some info. I am sorry if you have already provided that info (after I finish I will try to look at your previous emails in this thread).
Nothing...
/var/log/postgresql/postgresql-2015-06-17_111131.log is empty (except for old messages at the beginning, related to a configuration issue that was solved by rebuilding the cluster yesterday).
/var/log/syslog has nothing but these:

Jun 18 09:10:11 Bdd02 postgres[28400]: [2-1] 2015-06-18 09:10:11 CEST LOG: paquet de démarrage incomplet
Jun 18 09:10:41 Bdd02 postgres[28523]: [2-1] 2015-06-18 09:10:41 CEST LOG: paquet de démarrage incomplet
Jun 18 09:11:11 Bdd02 postgres[28557]: [2-1] 2015-06-18 09:11:11 CEST LOG: paquet de démarrage incomplet
Jun 18 09:11:41 Bdd02 postgres[28652]: [2-1] 2015-06-18 09:11:41 CEST LOG: paquet de démarrage incomplet
Jun 18 09:12:11 Bdd02 postgres[28752]: [2-1] 2015-06-18 09:12:11 CEST LOG: paquet de démarrage incomplet
Jun 18 09:12:41 Bdd02 postgres[28862]: [2-1] 2015-06-18 09:12:41 CEST LOG: paquet de démarrage incomplet
Jun 18 09:13:11 Bdd02 postgres[28891]: [2-1] 2015-06-18 09:13:11 CEST LOG: paquet de démarrage incomplet
Jun 18 09:13:40 Bdd02 postgres[28987]: [2-1] 2015-06-18 09:13:40 CEST LOG: paquet de démarrage incomplet

These messages ("incomplete startup packet") are related to Zabbix (psql port check).
Are you sure these are the only messages you have in the log files?
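If hardly anything reaches the log files, it may be worth raising the verbosity on the standby for a while; a minimal sketch (the values are only suggestions, not taken from your setup, and all three settings can be changed with a simple reload):

# postgresql.conf on the standby
log_min_messages = info
log_checkpoints = on
log_line_prefix = '%t [%p] '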
Also, can you share the vacuum cost parameters in your environment?
I don't understand that part... is this in postgresql.conf ?
There are vacuum cost parameters in postgresql.conf
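A sketch of the ones I mean, shown here with the 9.1 defaults purely for illustration (your values may differ):

# postgresql.conf
vacuum_cost_delay = 0ms              # manual VACUUM is not throttled by default
vacuum_cost_limit = 200
autovacuum_vacuum_cost_delay = 20ms
autovacuum_vacuum_cost_limit = -1    # -1 means: fall back to vacuum_cost_limit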
autovacuum = on

Slave postgresql.conf:
[...]
wal_level = minimal
wal_keep_segments = 32
Sorry, I missed this somehow earlier. Any reason why you think you need to retain 32 WAL files on the slave?
No, but I get the feeling that the parameter is ignored by my slave... should I try another value?
AFAIK you don't need this parameter set to > 0 unless you have a cascaded replica pulling WAL from the standby, or backup jobs that back up from the standby. Set it to 0 on the standby and check.
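Something like this, as a sketch only (wal_keep_segments can be changed with a reload, so no restart is needed; pg_ctlcluster is just an assumption based on the Debian-style paths in your config):

# postgresql.conf on the standby
wal_keep_segments = 0

$ pg_ctlcluster 9.1 main reload    # or: pg_ctl -D /var/lib/postgresql/9.1/main reload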
hot_standby = on

Slave recovery.conf:
standby_mode = 'on'
primary_conninfo = 'host=10.0.0.1 port=5400 user=postgres'
trigger_file = '/var/lib/postgresql/9.1/triggersql'
restore_command='cp /var/lib/postgresql/9.1/wal_archive/%f "%p"'
archive_cleanup_command = '/usr/lib/postgresql/9.1/bin/pg_archivecleanup /var/lib/postgresql/9.1/wal_archive/ %r'
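A side note on the recovery.conf above: archive_cleanup_command only trims the restore archive (wal_archive/ here) at each restartpoint, keeping everything from %r onwards (the oldest segment still needed); it does not touch pg_xlog/ on the standby itself. A quick way to see which directory is actually growing (the standby's pg_xlog path is an assumption, mirroring the master's Debian layout):

$ du -sh /var/lib/postgresql/9.1/wal_archive/
$ du -sh /var/lib/postgresql/9.1/main/pg_xlog/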
Also consider setting hot_standby_feedback to on.
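For reference, that is a single line in the standby's postgresql.conf and only needs a reload; note that it mainly prevents query cancellations on the standby and will not, by itself, shrink pg_xlog:

# postgresql.conf on the standby
hot_standby_feedback = on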
I will check that parameter in the documentation,
Thanks
How can I reduce the number of WAL files on the hot_standby slave?
Hello
What happens if archive_command gets stuck? This happens on my primary server because the remote SSH connection sometimes times out, so I can see an rsync process hanging around for a long time on the primary.
Maybe this is the reason why WAL rotation is misbehaving on the server.
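Two quick checks on the primary can tell you whether archiving is falling behind (a sketch; the pg_xlog path is the one from your archive_command):

$ ps -ef | grep '[r]sync'        # any archive rsync hanging around for a long time?
$ ls /var/lib/postgresql/9.1/main/pg_xlog/archive_status/ | grep -c '\.ready$'

Each .ready file is a segment still waiting to be archived; if that count keeps rising, archive_command is falling behind and the primary's pg_xlog will grow until it catches up. One possible mitigation, assuming GNU coreutils is available, is wrapping the command so a hung transfer fails instead of blocking the archiver (PostgreSQL will simply retry the segment later):

archive_command = 'timeout 120 rsync -az /var/lib/postgresql/9.1/main/pg_xlog/%f barman@nas.lan:/data/pgbarman/psql01/incoming/%f'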
Regards,
Xavier C.